Machine learning (ML) is the process of constructing a mathematical model of a system based on a sample dataset collected from that system.
One of the main goals of training an ML model is to teach the model to separate the signal present in the data from the noise inherent in the system and in the data collection process. If this is done effectively, the model can then be used to make accurate predictions about the system when presented with new, similar data. Additionally, introspecting on an ML model can reveal key information about the system being modeled, such as which inputs and transformations of the inputs are most useful to the ML model for learning the signal in the data, and are therefore the most predictive.
There are a variety of ML problem types. Supervised learning describes the case where the collected data contains an output value to be modeled and a set of inputs with which to train the model. EvalML focuses on training supervised learning models.
EvalML supports three common supervised ML problem types. The first is regression, where the target value to model is a continuous numeric value. Next are binary and multiclass classification, where the target value to model consists of two or more discrete values or categories. The choice of which supervised ML problem type is most appropriate depends on domain expertise and on how the model will be evaluated and used.
AutoML is the process of automating the construction, training and evaluation of ML models. Given a dataset and some configuration, AutoML searches for the most effective and accurate ML model or models to fit the dataset. During the search, AutoML will explore different combinations of model type, model parameters and model architecture.
An effective AutoML solution offers several advantages over constructing and tuning ML models by hand. AutoML can assist with many of the difficult aspects of ML, such as avoiding overfitting and underfitting, handling imbalanced data, detecting data leakage and other potential issues with the problem setup, and automatically applying best-practice data cleaning, feature engineering, feature selection and various modeling techniques. AutoML can also leverage search algorithms to optimally sweep the hyperparameter search space, resulting in model performance which would be difficult to achieve by manual training.
EvalML supports all of the above and more.
In its simplest usage, the AutoML search interface requires only the input data, the target data and a problem_type specifying what kind of supervised ML problem to model.
** Graphing methods, like AutoMLSearch, on Jupyter Notebook and Jupyter Lab require ipywidgets to be installed.
** If graphing on Jupyter Lab, jupyterlab-plotly is required. To download this, make sure you have npm installed.
To provide data to EvalML, it is recommended that you create a DataTable object using the Woodwork project.
EvalML also accepts and works well with pandas DataFrames. But using the DataTable makes it easy to control how EvalML will treat each feature: as a numeric feature, a categorical feature, a text feature, or another type of feature. Woodwork's DataTable includes features like inferring when a categorical feature should be treated as a text feature. For this reason, if you don't provide Woodwork objects, EvalML will raise a warning.
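If Woodwork's type inference doesn't match how you want a feature treated, you can override it when constructing the DataTable. A small sketch (the zipcode column is hypothetical, and this assumes Woodwork accepts logical type names as strings):

import pandas as pd
import woodwork as ww

# hypothetical data: `zipcode` would otherwise be inferred as a numeric feature
X = pd.DataFrame({'zipcode': [12345, 60601, 94102], 'amount': [10.5, 3.2, 7.7]})

# override the inferred type so EvalML treats `zipcode` as categorical
X_dt = ww.DataTable(X, logical_types={'zipcode': 'categorical'})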
[1]:
import evalml
import woodwork as ww

X, y = evalml.demos.load_breast_cancer()

X_dt = ww.DataTable(X)
y_dc = ww.DataColumn(y)

automl = evalml.automl.AutoMLSearch(problem_type='binary')
automl.search(X_dt, y_dc)
Using default limit of max_batches=1.
Generating pipelines to search over...

*****************************
* Beginning pipeline search *
*****************************

Optimizing for Log Loss Binary.
Lower score is better.

Searching up to 1 batches for a total of 9 pipelines.
Allowed model families: extra_trees, random_forest, decision_tree, catboost, linear_model, lightgbm, xgboost
Batch 1: (1/9) Mode Baseline Binary Classification P... Elapsed:00:00
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 12.868
Batch 1: (2/9) Decision Tree Classifier w/ Imputer      Elapsed:00:00
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 1.965
        High coefficient of variation (cv >= 0.2) within cross validation scores. Decision Tree Classifier w/ Imputer may not perform as estimated on unseen data.
Batch 1: (3/9) LightGBM Classifier w/ Imputer           Elapsed:00:00
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.130
        High coefficient of variation (cv >= 0.2) within cross validation scores. LightGBM Classifier w/ Imputer may not perform as estimated on unseen data.
Batch 1: (4/9) Extra Trees Classifier w/ Imputer        Elapsed:00:01
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.146
Batch 1: (5/9) Elastic Net Classifier w/ Imputer + S... Elapsed:00:02
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.504
Batch 1: (6/9) CatBoost Classifier w/ Imputer           Elapsed:00:02
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.382
Batch 1: (7/9) XGBoost Classifier w/ Imputer            Elapsed:00:03
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.111
        High coefficient of variation (cv >= 0.2) within cross validation scores. XGBoost Classifier w/ Imputer may not perform as estimated on unseen data.
Batch 1: (8/9) Random Forest Classifier w/ Imputer      Elapsed:00:04
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.122
Batch 1: (9/9) Logistic Regression Classifier w/ Imp... Elapsed:00:05
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.076

Search finished after 00:07
Best pipeline: Logistic Regression Classifier w/ Imputer + Standard Scaler
Best pipeline Log Loss Binary: 0.075715
The AutoML search will log its progress, reporting each pipeline and parameter set evaluated during the search.
There are a number of mechanisms to control the AutoML search time. One way is to set the max_batches parameter, which controls the maximum number of rounds of AutoML to evaluate, where each round may train and score a variable number of pipelines. Another way is to set the max_iterations parameter, which controls the maximum number of candidate models to be evaluated during AutoML. By default, AutoML will search for a single batch. The first pipeline to be evaluated will always be a baseline model representing a trivial solution.
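For example, a minimal sketch using the parameters described above:

from evalml.automl import AutoMLSearch

# evaluate at most 2 rounds (batches) of pipelines
automl = AutoMLSearch(problem_type='binary', max_batches=2)

# or cap the total number of pipelines evaluated, including the baseline
automl = AutoMLSearch(problem_type='binary', max_iterations=5)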
The AutoML interface supports a variety of other parameters. For a comprehensive list, please refer to the API reference.
EvalML includes a simple method, detect_problem_type, to help determine the problem type given the target data.
This function returns the predicted problem type as a ProblemTypes enum, choosing from ProblemTypes.BINARY, ProblemTypes.MULTICLASS, and ProblemTypes.REGRESSION. If the target data is invalid (for instance, when there is only one unique label), the function will raise an error instead.
[2]:
import pandas as pd from evalml.problem_types import detect_problem_type y = pd.Series([0, 1, 1, 0, 1, 1]) detect_problem_type(y)
<ProblemTypes.BINARY: 'binary'>
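The same function covers the other problem types; a short sketch with made-up targets:

import pandas as pd
from evalml.problem_types import detect_problem_type

detect_problem_type(pd.Series([0, 1, 2, 1, 0]))        # ProblemTypes.MULTICLASS
detect_problem_type(pd.Series([1.5, 2.75, 3.1, 4.2]))  # ProblemTypes.REGRESSION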
AutoMLSearch takes in an objective parameter to determine which objective to optimize for. By default, this parameter is set to auto, which allows AutoML to choose LogLossBinary for binary classification problems, LogLossMulticlass for multiclass classification problems, and R2 for regression problems.
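To optimize for a different metric, pass the objective's name (or an objective instance) when constructing the search; for example, to rank binary pipelines by F1 instead of log loss:

automl_f1 = evalml.automl.AutoMLSearch(problem_type='binary', objective='F1')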
Note that the objective parameter is only used in ranking and helping choose the pipelines to iterate over; it is not used to optimize each individual pipeline during fit-time.
To get the default objective for each problem type, you can use the get_default_primary_search_objective function.
[3]:
from evalml.automl import get_default_primary_search_objective

binary_objective = get_default_primary_search_objective("binary")
multiclass_objective = get_default_primary_search_objective("multiclass")
regression_objective = get_default_primary_search_objective("regression")

print(binary_objective.name)
print(multiclass_objective.name)
print(regression_objective.name)
Log Loss Binary
Log Loss Multiclass
R2
AutoMLSearch.search runs a set of data checks before beginning the search process, to ensure that the input data does not run into common issues during a potentially time-consuming search. If the data checks find any potential errors, an exception is raised before the search begins, letting users inspect their data and avoid confusing errors that could otherwise arise later in the search.
This behavior is controlled by the data_checks parameter, which can take in either a DataChecks object, a list of DataCheck objects, None, or valid string inputs ("disabled", "auto"). By default, this parameter is set to "auto", which runs the default collection of data checks defined in the DefaultDataChecks class. If set to "disabled" or None, no data checks will run.
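For example, to skip the checks entirely (a sketch, reusing the automl, X_dt and y_dc objects from the first example):

# passing "disabled" (or None) to AutoMLSearch.search turns off all data checks
automl.search(X_dt, y_dc, data_checks="disabled")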
EvalML’s AutoML algorithm generates a set of pipelines to search with. To provide a custom set instead, set allowed_pipelines to a list of custom pipeline classes. Note: this will prevent AutoML from generating other pipelines to search over.
[4]:
from evalml.pipelines import MulticlassClassificationPipeline

class CustomMulticlassClassificationPipeline(MulticlassClassificationPipeline):
    component_graph = ['Simple Imputer', 'Random Forest Classifier']

automl_custom = evalml.automl.AutoMLSearch(problem_type='multiclass', allowed_pipelines=[CustomMulticlassClassificationPipeline])
Using default limit of max_batches=1.
To stop the search early, hit Ctrl-C. This will bring up a prompt asking for confirmation. Responding with y will immediately stop the search. Responding with n will continue the search.
A summary of all the pipelines built can be returned as a pandas DataFrame which is sorted by score. The score column contains the average score across all cross-validation folds while the validation_score column is computed from the first cross-validation fold.
[5]:
automl.rankings
Each pipeline is given an id. We can get more information about any particular pipeline using that id. Here, we will get more information about the pipeline with id = 1.
[6]:
automl.describe_pipeline(1)
***************************************
* Decision Tree Classifier w/ Imputer *
***************************************

Problem Type: binary
Model Family: Decision Tree

Pipeline Steps
==============
1. Imputer
     * categorical_impute_strategy : most_frequent
     * numeric_impute_strategy : mean
     * categorical_fill_value : None
     * numeric_fill_value : None
2. Decision Tree Classifier
     * criterion : gini
     * max_features : auto
     * max_depth : 6
     * min_samples_split : 2
     * min_weight_fraction_leaf : 0.0

Training
========
Training for binary problems.
Total training time (including CV): 0.3 seconds

Cross Validation
----------------
             Log Loss Binary  MCC Binary    AUC  Precision     F1  Balanced Accuracy Binary  Accuracy Binary  # Training  # Testing
0                      1.508       0.854  0.936      0.903  0.909                     0.928            0.932     379.000    190.000
1                      2.547       0.843  0.910      0.901  0.901                     0.921            0.926     379.000    190.000
2                      1.840       0.886  0.903      0.941  0.928                     0.940            0.947     380.000    189.000
mean                   1.965       0.861  0.916      0.915  0.913                     0.930            0.935           -          -
std                    0.531       0.023  0.018      0.023  0.013                     0.010            0.011           -          -
coef of var            0.270       0.026  0.019      0.025  0.015                     0.010            0.012           -          -
We can also get the object of any pipeline via its id:
[7]:
pipeline = automl.get_pipeline(1)
print(pipeline.name)
print(pipeline.parameters)
Decision Tree Classifier w/ Imputer
{'Imputer': {'categorical_impute_strategy': 'most_frequent', 'numeric_impute_strategy': 'mean', 'categorical_fill_value': None, 'numeric_fill_value': None}, 'Decision Tree Classifier': {'criterion': 'gini', 'max_features': 'auto', 'max_depth': 6, 'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0}}
If we specifically want to get the best pipeline, there is a convenient accessor for that.
[8]:
best_pipeline = automl.best_pipeline
print(best_pipeline.name)
print(best_pipeline.parameters)
Logistic Regression Classifier w/ Imputer + Standard Scaler
{'Imputer': {'categorical_impute_strategy': 'most_frequent', 'numeric_impute_strategy': 'mean', 'categorical_fill_value': None, 'numeric_fill_value': None}, 'Logistic Regression Classifier': {'penalty': 'l2', 'C': 1.0, 'n_jobs': -1, 'multi_class': 'auto', 'solver': 'lbfgs'}}
The AutoML search algorithm first trains each component in the pipeline with its default values. After the first iteration, it tweaks the parameters of these components using their pre-defined hyperparameter ranges. To limit the search over certain hyperparameter ranges, use make_pipeline to define a pipeline with a custom hyperparameter range. Hyperparameter ranges can be found through the API reference for each component. Note: the default value of each component parameter must be included in any specified hyperparameter range for AutoMLSearch to succeed. Additionally, parameter values must be specified as a list, even for a single value.
[9]:
from evalml import AutoMLSearch
from evalml.demos import load_fraud
from evalml.pipelines.components.utils import get_estimators
from evalml.model_family import ModelFamily
from evalml.pipelines.utils import make_pipeline
import woodwork as ww

X, y = load_fraud()

# example of setting parameter to just one value
custom_hyperparameters = {'Imputer': {
    'numeric_impute_strategy': ['mean']
}}

# limit the numeric impute strategy to include only `mean` and `median`
# `mean` is the default value for this argument, and it needs to be included in the specified hyperparameter range
custom_hyperparameters = {'Imputer': {
    'numeric_impute_strategy': ['mean', 'median']
}}

# AutoMLSearch uses woodwork, so we want to use ww to convert the appropriate types when making pipelines
# ww changes the original X and y data, so we pass that instead of X_dt, y_dc
X_dt = ww.DataTable(X)
y_dc = ww.DataColumn(y)

estimators = get_estimators('binary', [ModelFamily.EXTRA_TREES])
pipelines_with_custom_hyperparameters = [make_pipeline(X, y, estimator, 'binary', custom_hyperparameters) for estimator in estimators]

automl = AutoMLSearch(problem_type='binary', allowed_pipelines=pipelines_with_custom_hyperparameters)
automl.search(X, y)
automl.best_pipeline.hyperparameters
             Number of Features
Boolean                       1
Categorical                   6
Numeric                       5

Number of training examples: 99992

Targets
False    84.82%
True     15.18%
Name: fraud, dtype: object

Using default limit of max_batches=1.
`X` passed was not a DataTable. EvalML will try to convert the input as a Woodwork DataTable and types will be inferred. To control this behavior, please pass in a Woodwork DataTable instead.
`y` passed was not a DataColumn. EvalML will try to convert the input as a Woodwork DataTable and types will be inferred. To control this behavior, please pass in a Woodwork DataTable instead.

*****************************
* Beginning pipeline search *
*****************************

Optimizing for Log Loss Binary.
Lower score is better.

Searching up to 1 batches for a total of 2 pipelines.
Allowed model families: extra_trees
Batch 1: (1/2) Mode Baseline Binary Classification P... Elapsed:00:00
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 5.243
Batch 1: (2/2) Extra Trees Classifier w/ Imputer + D... Elapsed:00:01
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.373

Search finished after 00:26
Best pipeline: Extra Trees Classifier w/ Imputer + DateTime Featurization Component + One Hot Encoder
Best pipeline Log Loss Binary: 0.372568
{'Imputer': {'categorical_impute_strategy': ['most_frequent'],
  'numeric_impute_strategy': ['mean', 'median']},
 'DateTime Featurization Component': {},
 'One Hot Encoder': {},
 'Extra Trees Classifier': {'n_estimators': Integer(low=10, high=1000, prior='uniform', transform='identity'),
  'max_features': ['auto', 'sqrt', 'log2'],
  'max_depth': Integer(low=4, high=10, prior='uniform', transform='identity')}}
The AutoMLSearch class records detailed results information under the results field, including information about the cross-validation scoring and parameters.
[10]:
automl.results
{'pipeline_results': {0: {'id': 0,
   'pipeline_name': 'Mode Baseline Binary Classification Pipeline',
   'pipeline_class': evalml.pipelines.classification.baseline_binary.ModeBaselineBinaryPipeline,
   'pipeline_summary': 'Baseline Classifier',
   'parameters': {'Baseline Classifier': {'strategy': 'mode'}},
   'score': 5.243060307948459,
   'high_variance_cv': False,
   'training_time': 1.249321460723877,
   'cv_data': [{'all_objective_scores': OrderedDict([('Log Loss Binary', 5.243353291477845), ('MCC Binary', 0.0), ('AUC', 0.5), ('Precision', 0.0), ('F1', 0.0), ('Balanced Accuracy Binary', 0.5), ('Accuracy Binary', 0.848189373256128), ('# Training', 66661), ('# Testing', 33331)]),
     'score': 5.243353291477845,
     'binary_classification_threshold': 0.5},
    {'all_objective_scores': OrderedDict([('Log Loss Binary', 5.243353291477846), ('MCC Binary', 0.0), ('AUC', 0.5), ('Precision', 0.0), ('F1', 0.0), ('Balanced Accuracy Binary', 0.5), ('Accuracy Binary', 0.848189373256128), ('# Training', 66661), ('# Testing', 33331)]),
     'score': 5.243353291477846,
     'binary_classification_threshold': 0.5},
    {'all_objective_scores': OrderedDict([('Log Loss Binary', 5.242474340889684), ('MCC Binary', 0.0), ('AUC', 0.5), ('Precision', 0.0), ('F1', 0.0), ('Balanced Accuracy Binary', 0.5), ('Accuracy Binary', 0.8482148214821482), ('# Training', 66662), ('# Testing', 33330)]),
     'score': 5.242474340889684,
     'binary_classification_threshold': 0.5}],
   'percent_better_than_baseline_all_objectives': {'Log Loss Binary': 0, 'MCC Binary': nan, 'AUC': 0, 'Precision': nan, 'F1': nan, 'Balanced Accuracy Binary': 0, 'Accuracy Binary': 0},
   'percent_better_than_baseline': 0,
   'validation_score': 5.243353291477845},
  1: {'id': 1,
   'pipeline_name': 'Extra Trees Classifier w/ Imputer + DateTime Featurization Component + One Hot Encoder',
   'pipeline_class': evalml.pipelines.utils.make_pipeline.<locals>.GeneratedPipeline,
   'pipeline_summary': 'Extra Trees Classifier w/ Imputer + DateTime Featurization Component + One Hot Encoder',
   'parameters': {'Imputer': {'categorical_impute_strategy': 'most_frequent', 'numeric_impute_strategy': 'mean', 'categorical_fill_value': None, 'numeric_fill_value': None},
    'DateTime Featurization Component': {'features_to_extract': ['year', 'month', 'day_of_week', 'hour']},
    'One Hot Encoder': {'top_n': 10, 'features_to_encode': None, 'categories': None, 'drop': None, 'handle_unknown': 'ignore', 'handle_missing': 'error'},
    'Extra Trees Classifier': {'n_estimators': 100, 'max_features': 'auto', 'max_depth': 6, 'min_samples_split': 2, 'min_weight_fraction_leaf': 0.0, 'n_jobs': -1}},
   'score': 0.37256770849294946,
   'high_variance_cv': False,
   'training_time': 24.949636459350586,
   'cv_data': [{'all_objective_scores': OrderedDict([('Log Loss Binary', 0.37845854046043), ('MCC Binary', 0.0), ('AUC', 0.8338264129934962), ('Precision', 0.0), ('F1', 0.0), ('Balanced Accuracy Binary', 0.5), ('Accuracy Binary', 0.848189373256128), ('# Training', 66661), ('# Testing', 33331)]),
     'score': 0.37845854046043,
     'binary_classification_threshold': 0.5},
    {'all_objective_scores': OrderedDict([('Log Loss Binary', 0.3817963926457255), ('MCC Binary', 0.0), ('AUC', 0.8205026366073251), ('Precision', 0.0), ('F1', 0.0), ('Balanced Accuracy Binary', 0.5), ('Accuracy Binary', 0.848189373256128), ('# Training', 66661), ('# Testing', 33331)]),
     'score': 0.3817963926457255,
     'binary_classification_threshold': 0.5},
    {'all_objective_scores': OrderedDict([('Log Loss Binary', 0.3574481923726928), ('MCC Binary', 0.0), ('AUC', 0.8379499815935184), ('Precision', 0.0), ('F1', 0.0), ('Balanced Accuracy Binary', 0.5), ('Accuracy Binary', 0.8482148214821482), ('# Training', 66662), ('# Testing', 33330)]),
     'score': 0.3574481923726928,
     'binary_classification_threshold': 0.5}],
   'percent_better_than_baseline_all_objectives': {'Log Loss Binary': 92.8940792855627, 'MCC Binary': nan, 'AUC': 66.15193541295599, 'Precision': nan, 'F1': nan, 'Balanced Accuracy Binary': 0, 'Accuracy Binary': 0},
   'percent_better_than_baseline': 92.8940792855627,
   'validation_score': 0.37845854046043}},
 'search_order': [0, 1],
 'errors': []}
Stacking is an ensemble machine learning algorithm that involves training a model to best combine the predictions of several base learning algorithms. First, each base learning algorithm is trained using the given data. Then, the combining algorithm or meta-learner is trained on the predictions made by those base learning algorithms to make a final prediction.
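As a rough analogy (this is not EvalML's internal implementation), the same idea can be sketched with scikit-learn's StackingClassifier:

from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression

# base learners whose cross-validated predictions become the meta-learner's inputs
base_learners = [('rf', RandomForestClassifier(n_estimators=100)),
                 ('lr', LogisticRegression(max_iter=1000))]

# the meta-learner (final_estimator) learns how to combine those predictions
stacker = StackingClassifier(estimators=base_learners,
                             final_estimator=LogisticRegression())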
AutoML enables stacking using the ensembling flag during initialization; this is set to False by default. The stacking ensemble pipeline runs in its own batch after a whole cycle of training has occurred (each allowed pipeline trains for one batch). Note that this means a large number of iterations may need to run before the stacking ensemble runs. It is also important to note that only the first CV fold is calculated for stacking ensembles, because the model internally uses CV folds.
[11]:
X, y = evalml.demos.load_breast_cancer()
X_dt = ww.DataTable(X)
y_dc = ww.DataColumn(y)

automl_with_ensembling = AutoMLSearch(problem_type="binary",
                                      allowed_model_families=[ModelFamily.RANDOM_FOREST, ModelFamily.LINEAR_MODEL],
                                      max_batches=5,
                                      ensembling=True)
automl_with_ensembling.search(X_dt, y_dc)
Generating pipelines to search over...

*****************************
* Beginning pipeline search *
*****************************

Optimizing for Log Loss Binary.
Lower score is better.

Searching up to 5 batches for a total of 20 pipelines.
Allowed model families: random_forest, linear_model
Batch 1: (1/20) Mode Baseline Binary Classification P... Elapsed:00:00
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 12.868
Batch 1: (2/20) Elastic Net Classifier w/ Imputer + S... Elapsed:00:00
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.503
Batch 1: (3/20) Random Forest Classifier w/ Imputer     Elapsed:00:00
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.165
        High coefficient of variation (cv >= 0.2) within cross validation scores. Random Forest Classifier w/ Imputer may not perform as estimated on unseen data.
Batch 1: (4/20) Logistic Regression Classifier w/ Imp... Elapsed:00:02
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.080
        High coefficient of variation (cv >= 0.2) within cross validation scores. Logistic Regression Classifier w/ Imputer + Standard Scaler may not perform as estimated on unseen data.
Batch 2: (5/20) Logistic Regression Classifier w/ Imp... Elapsed:00:02
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.155
Batch 2: (6/20) Logistic Regression Classifier w/ Imp... Elapsed:00:02
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.098
        High coefficient of variation (cv >= 0.2) within cross validation scores. Logistic Regression Classifier w/ Imputer + Standard Scaler may not perform as estimated on unseen data.
Batch 2: (7/20) Logistic Regression Classifier w/ Imp... Elapsed:00:03
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.079
        High coefficient of variation (cv >= 0.2) within cross validation scores. Logistic Regression Classifier w/ Imputer + Standard Scaler may not perform as estimated on unseen data.
Batch 2: (8/20) Logistic Regression Classifier w/ Imp... Elapsed:00:03
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.088
Batch 2: (9/20) Logistic Regression Classifier w/ Imp... Elapsed:00:04
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.091
        High coefficient of variation (cv >= 0.2) within cross validation scores. Logistic Regression Classifier w/ Imputer + Standard Scaler may not perform as estimated on unseen data.
Batch 3: (10/20) Random Forest Classifier w/ Imputer    Elapsed:00:04
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.182
        High coefficient of variation (cv >= 0.2) within cross validation scores. Random Forest Classifier w/ Imputer may not perform as estimated on unseen data.
Batch 3: (11/20) Random Forest Classifier w/ Imputer    Elapsed:00:06
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.166
        High coefficient of variation (cv >= 0.2) within cross validation scores. Random Forest Classifier w/ Imputer may not perform as estimated on unseen data.
Batch 3: (12/20) Random Forest Classifier w/ Imputer    Elapsed:00:11
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.120
Batch 3: (13/20) Random Forest Classifier w/ Imputer    Elapsed:00:17
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.121
Batch 3: (14/20) Random Forest Classifier w/ Imputer    Elapsed:00:24
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.120
Batch 4: (15/20) Elastic Net Classifier w/ Imputer + S... Elapsed:00:31
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.662
Batch 4: (16/20) Elastic Net Classifier w/ Imputer + S... Elapsed:00:31
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.203
Batch 4: (17/20) Elastic Net Classifier w/ Imputer + S... Elapsed:00:31
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.398
Batch 4: (18/20) Elastic Net Classifier w/ Imputer + S... Elapsed:00:32
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.453
Batch 4: (19/20) Elastic Net Classifier w/ Imputer + S... Elapsed:00:32
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.662
Batch 5: (20/20) Stacked Ensemble Classification Pipeline Elapsed:00:32
        Starting cross validation
        Finished cross validation - mean Log Loss Binary: 0.069

Search finished after 00:41
Best pipeline: Stacked Ensemble Classification Pipeline
Best pipeline Log Loss Binary: 0.069287
We can view more information about the stacking ensemble pipeline (which was the best performing pipeline) by calling .describe().
[12]:
automl_with_ensembling.best_pipeline.describe()
********************************************
* Stacked Ensemble Classification Pipeline *
********************************************

Problem Type: binary
Model Family: Ensemble

Pipeline Steps
==============
1. Stacked Ensemble Classifier
     * input_pipelines : [GeneratedPipeline(parameters={'Imputer':{'categorical_impute_strategy': 'most_frequent', 'numeric_impute_strategy': 'mean', 'categorical_fill_value': None, 'numeric_fill_value': None}, 'Logistic Regression Classifier':{'penalty': 'l2', 'C': 2.2697010454116437, 'n_jobs': -1, 'multi_class': 'auto', 'solver': 'lbfgs'},}), GeneratedPipeline(parameters={'Imputer':{'categorical_impute_strategy': 'most_frequent', 'numeric_impute_strategy': 'mean', 'categorical_fill_value': None, 'numeric_fill_value': None}, 'Random Forest Classifier':{'n_estimators': 934, 'max_depth': 10, 'n_jobs': -1},})]
     * final_estimator : None
     * cv : None
     * n_jobs : 1