Competition Class¶
After deploying a Model Playground, users can create a model competition. Creating a competition allows you to:
- Verify the Model Playground performance metrics on aimodelshare.org
- Submit models to a leaderboard
- Grant other users access to submit models to the leaderboard
- Easily compare model performance and structure
submit_model¶
Submits a model and preprocessor to a machine learning competition using the live prediction API URL generated by the AI Modelshare library. The submitted model is evaluated and compared with all existing models, and a leaderboard can be generated.
- Competition.submit_model(model_filepath, preprocessor_filepath, prediction_submission, sample_data=None, reproducibility_env_filepath=None, custom_metadata=None)¶
- Parameters:
model_filepath (string - ends with '.onnx') – [REQUIRED] to be set by the user. Path to the model file; .onnx is the only accepted model file extension. For example, “example_model.onnx” for a file in the current directory, or “/User/xyz/model/example_model.onnx” as an absolute path to the model file.
preprocessor_filepath (string) – [REQUIRED] to be set by the user. Path to the preprocessor file, e.g. “./preprocessor.zip” searches for an exported zip preprocessor file in the current directory. The file is generated from the preprocessor module using the export_preprocessor function from the AI Modelshare library.
prediction_submission (One-hot encoded prediction data for classification; list of values for regression.) – [REQUIRED] Predictions for the test data, used to compute evaluation metrics for the submitted model.
sample_data –
reproducibility_env_filepath (string) – [OPTIONAL] to be set by the user. Absolute path to the environment JSON file. Example: “./reproducibility.json”. The file is generated using the export_reproducibility_env function from the AI Modelshare library.
custom_metadata (dictionary) – [OPTIONAL] Dictionary of custom metadata metrics (keys) and values for the submitted model.
- Returns:
Model version if the model is submitted successfully.
Example:
#-- Generate predicted values (sklearn)
prediction_labels = model.predict(preprocessor(X_test))
#-- Generate predicted values (keras)
prediction_column_index=model.predict(preprocessor(X_test)).argmax(axis=1)
# Extract correct prediction labels
prediction_labels = [y_train.columns[i] for i in prediction_column_index]
# Submit Model to Competition Leaderboard
mycompetition.submit_model(model_filepath = "model.onnx",
preprocessor_filepath="preprocessor.zip",
prediction_submission=prediction_labels)
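The label-extraction step for a classifier can be sketched in isolation with NumPy; the probabilities and class names below are made up purely for illustration:

```python
import numpy as np

# Mock softmax output for three test samples over two classes
probs = np.array([[0.9, 0.1],
                  [0.2, 0.8],
                  [0.3, 0.7]])
class_names = ["cat", "dog"]  # illustrative labels

# argmax picks the most probable class index per row,
# mirroring model.predict(...).argmax(axis=1) in the Keras example above
prediction_column_index = probs.argmax(axis=1)
prediction_labels = [class_names[i] for i in prediction_column_index]

# prediction_labels is now a plain list of label strings,
# the format expected by prediction_submission for classification
```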
instantiate_model¶
Import a model previously submitted to the competition leaderboard to use in your session.
- Competition.instantiate_model(version=None, trained=False, reproduce=False)¶
- Parameters:
version (integer) – Model version number from competition leaderboard.
trained (bool, default=False) – If True, a trained model is instantiated; if False, the untrained model is instantiated.
reproduce (bool, default=False) – Set to True to instantiate a model with its reproducibility environment set up.
- Returns:
Model chosen from leaderboard
Example:
# Instantiate Model 1 from the leaderboard, pre-trained
mymodel = mycompetition.instantiate_model(version=1, trained=True, reproduce=False)
Note
If reproduce=True, an untrained model will be instantiated, regardless of the trained parameter value.
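The interaction between the trained and reproduce arguments described in the note can be expressed as a tiny truth-table helper (an illustrative sketch, not part of the library):

```python
def effective_trained(trained, reproduce):
    """Whether the instantiated model ends up trained, per the note above:
    reproduce=True always yields an untrained model."""
    return trained and not reproduce
```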
inspect_model¶
Examine the structure of a model submitted to a competition leaderboard.
- Competition.inspect_model(version=None, naming_convention=None)¶
- Parameters:
version (integer) – Model version number from competition leaderboard.
naming_convention (string - either "keras" or "pytorch") – Either “keras” or “pytorch” depending on which kinds of layer names should be displayed
- Returns:
inspect_pd : dictionary of model summary & metadata
compare_models¶
Compare the structure of two or more models submitted to a competition leaderboard. Use in conjunction with stylize_compare to visualize data.
- Competition.compare_models(version_list='None', verbose=1, naming_convention=None)¶
- Parameters:
version_list (list of integers) – list of model version numbers to compare (previously submitted to competition leaderboard).
verbose (integer) – Controls the verbosity: the higher, the more detail
naming_convention (string - either "keras" or "pytorch") – Either “keras” or “pytorch” depending on which kinds of layer names should be displayed
- Returns:
data : dictionary of model comparison information.
Example:
# Compare two or more models
data=mycompetition.compare_models([7,8], verbose=1)
mycompetition.stylize_compare(data)
stylize_compare¶
Stylizes data received from compare_models to highlight similarities & differences.
- Competition.stylize_compare(compare_dict, naming_convention=None)¶
- Parameters:
compare_dict (dictionary) – Model data from compare_models()
naming_convention (string - either "keras" or "pytorch") – Either “keras” or “pytorch” depending on which kinds of layer names should be displayed
- Returns:
Formatted table of model comparisons.
Example:
# Compare two or more models
data=mycompetition.compare_models([7,8], verbose=1)
mycompetition.stylize_compare(data)
inspect_y_test¶
Examines the structure of y-test data to help users understand how to submit models to the competition leaderboard.
- Competition.inspect_y_test()¶
- Parameters:
none –
- Returns:
Dictionary of a competition’s y-test metadata.
Example:
mycompetition.inspect_y_test()
get_leaderboard¶
Get current competition leaderboard to rank all submitted models. Use in conjunction with stylize_leaderboard to visualize data.
- Competition.get_leaderboard(verbose=3, columns=None)¶
- Parameters:
verbose (integer) – (Optional) controls the verbosity: the higher, the more detail.
columns (list of strings) – (Optional) List of specific column names to include in the leaderboard; all others will be excluded. Performance metrics will always be displayed.
- Returns:
Dictionary of leaderboard data.
Example:
data = mycompetition.get_leaderboard()
mycompetition.stylize_leaderboard(data)
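Ranking the returned leaderboard data locally by a metric can be sketched as below. The column names here (username, accuracy, version) are hypothetical placeholders, not the library's guaranteed schema:

```python
# Hypothetical leaderboard rows; real column names may differ
rows = [
    {"username": "ana", "accuracy": 0.91, "version": 1},
    {"username": "ben", "accuracy": 0.87, "version": 2},
    {"username": "cam", "accuracy": 0.95, "version": 3},
]

# Sort descending by the chosen performance metric
ranked = sorted(rows, key=lambda r: r["accuracy"], reverse=True)
```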
stylize_leaderboard¶
Stylizes data received from get_leaderboard.
- Competition.stylize_leaderboard(leaderboard, naming_convention="keras")¶
- Parameters:
leaderboard (dictionary) – Data dictionary object returned from get_leaderboard
naming_convention (string - either "keras" or "pytorch") – Either “keras” or “pytorch” depending on which kinds of layer names should be displayed
- Returns:
Formatted competition leaderboard
Example:
data = mycompetition.get_leaderboard()
mycompetition.stylize_leaderboard(data)
update_access_list¶
Updates list of authenticated participants who can submit new models to a competition.
- Competition.update_access_list(email_list=[], update_type='Replace_list')¶
- Parameters:
email_list (list of strings) – [REQUIRED] List of email addresses for users who are allowed to submit models to the competition.
update_type (string) – [REQUIRED] One of ‘Add’, ‘Remove’, ‘Replace_list’, or ‘Get’. ‘Add’ appends user emails to the original list, ‘Remove’ deletes users from the list, ‘Replace_list’ overwrites the original list with the new list provided, and ‘Get’ returns the current list.
- Returns:
“Success” upon successful request
Example:
# Add, remove, or completely update authorized participants for competition later
emaillist=["newemailaddress@gmail.com"]
mycompetition.update_access_list(email_list=emaillist,update_type="Add")
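The semantics of the four update_type options can be mirrored locally with a small helper (an illustrative sketch of the described behavior, not the library's implementation):

```python
def apply_update(current, new_emails, update_type):
    """Mimic update_access_list semantics on a local list (illustrative)."""
    if update_type == "Add":
        # Append emails not already on the list
        return current + [e for e in new_emails if e not in current]
    if update_type == "Remove":
        # Delete the given users from the list
        return [e for e in current if e not in new_emails]
    if update_type == "Replace_list":
        # Overwrite the original list with the new one
        return list(new_emails)
    if update_type == "Get":
        # Return the current list unchanged
        return current
    raise ValueError("update_type must be 'Add', 'Remove', 'Replace_list', or 'Get'")

access = ["owner@example.com"]
access = apply_update(access, ["newemailaddress@gmail.com"], "Add")
```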