EvalML uses the standard Python logging package. The default logging behavior prints logs at the WARNING level and above (ERROR and CRITICAL) to stdout. To configure different behavior, please refer to the Python logging documentation.
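For example, since EvalML uses the standard logging package, the usual handler/level machinery applies. The sketch below routes INFO-level logs and above to stdout; the logger name "evalml" is assumed to be the library's top-level logger namespace, per the usual convention of naming module loggers after their package:

```python
import logging
import sys

# Minimal sketch: show INFO-level logs and above on stdout.
# The "evalml" logger name is an assumption based on the standard
# convention of module loggers living under the package's namespace.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
)

logger = logging.getLogger("evalml")
logger.addHandler(handler)
logger.setLevel(logging.INFO)  # default threshold is WARNING
```

Any configuration supported by the logging package (file handlers, per-module levels, `logging.config.dictConfig`, etc.) works the same way.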
To see up-to-date feedback as AutoMLSearch runs, pass verbose=True when instantiating the object. This temporarily sets up a logging object to print INFO level logs and above to stdout, and also displays a graph of the best score over pipeline iterations.
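As a sketch, the flag is passed at construction time. X_train and y_train below are placeholders for training data you supply, and "binary" is just one example problem type:

```python
from evalml.automl import AutoMLSearch

# X_train / y_train are placeholders for your feature matrix and target.
automl = AutoMLSearch(
    X_train=X_train,
    y_train=y_train,
    problem_type="binary",
    verbose=True,  # print INFO-level progress and plot best score per iteration
)
automl.search()
```

With verbose=False (the default), the search runs silently apart from WARNING-level messages.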
EvalML provides a command-line interface (CLI) tool that prints the version of EvalML and its core dependencies, as well as some basic system information. To use this tool, run
evalml info in your shell or terminal. This can be useful for debugging or for tracking down version-related issues.