
Welcome to ibm-watson-machine-learning's (V4) documentation!

ibm-watson-machine-learning is a Python library that allows you to work with Watson Machine Learning services. Train, store, and deploy your models, score them using the APIs, and integrate them with your application development. ibm-watson-machine-learning can be used for working with IBM Cloud v2 instance plans and IBM Cloud Pak® for Data version 3.5. For older versions of IBM Cloud Pak® for Data, use the client from this link.

Sample notebooks for cloud v4 GA: Refer to the Watson Machine Learning samples GitHub link.

IMPORTANT: For cloud, if you are using Watson Machine Learning Python client V3 or Watson Machine Learning Python client V4 Beta, refer to the documentation for the migration process to migrate your assets to the new instance plans and access new features.

NOTE: DEPRECATED!! The Python 3.6 framework is deprecated. For Cloud, it will be removed on Jan 20th, 2021, and will be in read-only mode starting Nov 20th, 2020, i.e., you won't be able to create new assets using this client. Use Python 3.7 instead. For details, see Supported Frameworks. For IBM Cloud Pak® for Data, the Python 3.6 framework is supported only for Python 3.6 users upgrading from 3.0.1 to 3.5 and will be removed in a future release. We recommend you migrate your assets and use a Python 3.7 framework instead. For details, see Supported Frameworks.

NOTE: DEPRECATED!! The Watson Machine Learning Python client V3 and Watson Machine Learning Python client V4 Beta are deprecated on cloud starting Sep 1st, 2020 and will be discontinued at the end of the migration period. Migrate to the IBM Watson Machine Learning Client V4 GA, packaged on cloud under the name ibm-watson-machine-learning.

Contents

Installation

The library comes integrated into Watson Studio Jupyter notebooks. The package is also available on PyPI. Use the command below to install it for use with IBM Cloud services.

$ pip install ibm-watson-machine-learning

Requirements (Applicable only for IBM Cloud)

To create a Watson Machine Learning service instance, refer to the documentation.

API (V4) for IBM Cloud

To use Watson Machine Learning APIs, the user must create an instance of APIClient with authentication details.

Authentication

Authentication for IBM Cloud

IBM Cloud users can create an instance of the Watson Machine Learning Python client by providing an IAM token or an API key.

Example of creating the client using an API key:

Note: There is no instance_id to be provided. The instance_id is picked up from the associated space/project for the new (v2) instance plans. To access your assets from old (v1) instance plans, provide your old v1 instance_id in the wml_credentials below and create a client object.

from ibm_watson_machine_learning import APIClient

wml_credentials = {
                   "url": "https://us-south.ml.cloud.ibm.com",
                   "apikey":"***********"
                  }

client = APIClient(wml_credentials)

Example of creating the client using a token:

from ibm_watson_machine_learning import APIClient

wml_credentials = {
                   "url": "https://us-south.ml.cloud.ibm.com",
                   "token":"***********",
                  }

client = APIClient(wml_credentials)

Example of creating the client using a v1 Lite instance plan:

from ibm_watson_machine_learning import APIClient

wml_credentials = {
                   "url": "https://example.ml.cloud.ibm.com",
                   "apikey":"**********",
                   "instance_id": "v1 lite instance_id"
                  }

client = APIClient(wml_credentials)

Note

  • Setting a default space/project id is mandatory. Refer to the client.set.default_space() API in this document.
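For example, a minimal sketch (assuming space_id holds the id of a deployment space you created beforehand; client.set.default_project() is the analogous call for projects):

>>> client.set.default_space(space_id)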

Supported machine learning frameworks

For the list of supported machine learning frameworks (models) on IBM Cloud, refer to the Watson Machine Learning Documentation.

Samples

Refer to the samples for IBM Watson Machine Learning usage in notebooks.

APIs to Migrate V4 Beta and V3 assets to V4

Migration APIs for v4 GA Cloud to migrate assets from v3 or v4 beta. For details and examples of the migration process, refer to the document.

class client.Migrationv4GACloud(client)[source]

Migration APIs for v4 GA Cloud. These are applicable only during the migration period. Refer to the documentation at 'https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/wml-ai.html' for details on migration.

cancel(migration_id, space_id=None, project_id=None)[source]

Cancel a migration job. 'migration_id' and 'space_id' (or 'project_id') have to be provided. Note: To delete a migration job, use the delete() API.

Parameters

Important

  1. migration_id: Migration identifier

    type: str

  2. space_id: Space identifier

    type: str

  3. project_id: Project identifier

    type: str

Output

Important

returns: status (“SUCCESS” or “FAILED”)

return type: str

Example

>>> client.v4ga_cloud_migration.cancel(migration_id='6213cf1-252f-424b-b52d-5cdd9814956c',
>>>                                    space_id='3421cf1-252f-424b-b52d-5cdd981495fe')
delete(migration_id, space_id=None, project_id=None)[source]

Deletes a migration job. 'migration_id' and 'space_id' (or 'project_id') have to be provided.

Parameters

Important

  1. migration_id: Migration identifier

    type: str

  2. space_id: Space identifier

    type: str

  3. project_id: Project identifier

    type: str

Output

Important

returns: status (“SUCCESS” or “FAILED”)

return type: str

Example

>>> client.v4ga_cloud_migration.delete(migration_id='6213cf1-252f-424b-b52d-5cdd9814956c',
>>>                                    space_id='3421cf1-252f-424b-b52d-5cdd981495fe')
get_details(migration_id=None, space_id=None, project_id=None, limit=None)[source]

Get metadata of the migration job(s). If no migration_id is specified, metadata for all migration jobs is returned.

Parameters

Important

  1. migration_id: Migration identifier

    type: str

  2. space_id: Space identifier

    type: str

  3. project_id: Project identifier

    type: str

  4. limit: limit number of fetched records (optional)

    type: int

Output

Important

returns: migration(s) metadata

return type: dict

dict (if migration_id is not None) or {“resources”: [dict]} (if migration_id is None)

Example

>>> migration_details = client.v4ga_cloud_migration.get_details(migration_id='6213cf1-252f-424b-b52d-5cdd9814956c',
>>>                                                             space_id='3421cf1-252f-424b-b52d-5cdd981495fe')
list(space_id=None, project_id=None, limit=None)[source]

List the migration jobs. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all migration jobs in a table format.

return type: None

Example

>>> client.v4ga_cloud_migration.list()
start(meta_props)[source]

Migration APIs for v4 GA Cloud to migrate assets from v3 or v4 beta. These are applicable only during the migration period. Refer to the documentation at 'https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/wml-ai.html' for details on migration.

You will need to have a space or project created before you start migration of assets to that space or project. You should have read access to the instance from which migration is required, and 'editor' or 'admin' access on the target space/project.

SKIP_MIGRATED_ASSETS meta prop: If this is True (the default), and the target assets still exist, and there are completed jobs (that were not deleted), then any assets that were already migrated are skipped and their details are returned in the skipped collection in the response. If this is False, duplicate assets may be created in the target space or project if an asset was already migrated.

Parameters

Important

  1. meta_props: meta data. To see available meta names use client.v4ga_cloud_migration.ConfigurationMetaNames.get()

    type: str or dict

Output

Important

returns: Initial state of migration

return type: dict

Example

>>> metadata = {
>>>    client.v4ga_cloud_migration.MigrationMetaNames.DESCRIPTION: "Migration of assets from v3 to v4ga",
>>>    client.v4ga_cloud_migration.MigrationMetaNames.OLD_INSTANCE_ID: "df40cf1-252f-424b-b52d-5cdd98143aec",
>>>    client.v4ga_cloud_migration.MigrationMetaNames.SPACE_ID: "3fc54cf1-252f-424b-b52d-5cdd9814987f",
>>>    client.v4ga_cloud_migration.MigrationMetaNames.FUNCTION_IDS: ["all"],
>>>    client.v4ga_cloud_migration.MigrationMetaNames.MODEL_IDS: ["afaecb4-254f-689f-4548-9b4298243291"],
>>>    client.v4ga_cloud_migration.MigrationMetaNames.MAPPING: {"dfaecf1-252f-424b-b52d-5cdd98143481": "4fbc211-252f-424b-b52d-5cdd98df310a"},
>>>    client.v4ga_cloud_migration.MigrationMetaNames.SKIP_MIGRATED_ASSETS: True
>>> }
>>> details = client.v4ga_cloud_migration.start(meta_props=metadata)
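The job runs asynchronously, so a typical follow-up is to poll get_details() until the job finishes. A minimal sketch; the exact locations of the id and state fields in the response dict are assumptions, so inspect the returned details for the actual structure:

>>> import time
>>> migration_id = details['metadata']['id']  # assumed location of the job id
>>> while True:
>>>     details = client.v4ga_cloud_migration.get_details(migration_id=migration_id,
>>>                                                       space_id=space_id)
>>>     state = details['entity']['status']['state']  # assumed location of the state
>>>     if state in ('completed', 'failed', 'canceled'):
>>>         break
>>>     time.sleep(30)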
class metanames.Migrationv4GACloudMetaNames[source]

Set of MetaNames for v4ga Cloud migration.

Available MetaNames:

MetaName | Type | Required | Example value | Schema
DESCRIPTION | str | N | Testing migration | string
OLD_INSTANCE_ID | str | Y | df40cf1-252f-424b-b52d-5cdd98143aec | string
SPACE_ID | str | N | 3fc54cf1-252f-424b-b52d-5cdd9814987f | string
PROJECT_ID | str | N | 4fc54cf1-252f-424b-b52d-5cdd9814987f | string
MODEL_IDS | list | N | ['afaecb4-254f-689f-4548-9b4298243291'] | ['string']
FUNCTION_IDS | list | N | ['all'] | ['string']
EXPERIMENT_IDS | list | N | ['ba2ecb4-4542-689a-2548-ab4232b43291'] | ['string']
PIPELINE_IDS | list | N | ['4fabcb4-654f-489b-9548-9b4298243292'] | ['string']
SKIP_MIGRATED_ASSETS | bool | N | False |
MAPPING | dict | N | {'dfaecf1-252f-424b-b52d-5cdd98143481': '4fbc211-252f-424b-b52d-5cdd98df310a'} |

connections

class client.Connections(client)[source]

Store and manage your Connections.

ConfigurationMetaNames = <ibm_watson_machine_learning.metanames.ConnectionMetaNames object>

MetaNames for Connection creation.

create(meta_props)[source]

Creates a connection. Examples of input to the PROPERTIES field:

  1. MySQL

    client.connections.ConfigurationMetaNames.PROPERTIES: {
        "database": "database",
        "password": "password",
        "port": "3306",
        "host": "host url",
        "ssl": "false",
        "username": "username"
    }

  2. Google BigQuery

    1. Method 1: Use a service account JSON. The generated service account JSON can be provided as input as-is. Provide actual values in the JSON; the example only indicates the fields. Refer to the Google BigQuery documentation for how to generate a service account JSON.

      client.connections.ConfigurationMetaNames.PROPERTIES: {
          "type": "service_account",
          "project_id": "project_id",
          "private_key_id": "private_key_id",
          "private_key": "private key contents",
          "client_email": "client_email",
          "client_id": "client_id",
          "auth_uri": "https://accounts.google.com/o/oauth2/auth",
          "token_uri": "https://oauth2.googleapis.com/token",
          "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
          "client_x509_cert_url": "client_x509_cert_url"
      }

    2. Method 2: Use the OAuth method. Refer to the Google BigQuery documentation for how to generate an OAuth token.

      client.connections.ConfigurationMetaNames.PROPERTIES: {
          "access_token": "access token generated for big query",
          "refresh_token": "refresh token",
          "project_id": "project_id",
          "client_secret": "This is your gmail account password",
          "client_id": "client_id"
      }

  3. MS SQL

    client.connections.ConfigurationMetaNames.PROPERTIES: {
        "database": "database",
        "password": "password",
        "port": "1433",
        "host": "host",
        "username": "username"
    }

  4. Teradata

    client.connections.ConfigurationMetaNames.PROPERTIES: {
        "database": "database",
        "password": "password",
        "port": "1433",
        "host": "host",
        "username": "username"
    }

Parameters

Important

  1. meta_props: meta data of the connection configuration. To see available meta names use:

    >>> client.connections.ConfigurationMetaNames.get()
    

    type: dict

Output

Important

returns: metadata of the stored connection

return type: dict

Example
>>> sqlserver_data_source_type_id = client.connections.get_datasource_type_uid_by_name('sqlserver')
>>> connections_details = client.connections.create({
>>> client.connections.ConfigurationMetaNames.NAME: "sqlserver connection",
>>> client.connections.ConfigurationMetaNames.DESCRIPTION: "connection description",
>>> client.connections.ConfigurationMetaNames.DATASOURCE_TYPE: sqlserver_data_source_type_id,
>>> client.connections.ConfigurationMetaNames.PROPERTIES: { "database": "database",
>>>                                                         "password": "password",
>>>                                                         "port": "1433",
>>>                                                         "host": "host",
>>>                                                         "username": "username"}
>>> })
delete(connection_id)[source]

Delete a stored Connection.

Parameters

Important

  1. connection_id: Unique id of the connection to be deleted.

    type: str

Output

Important

returns: status (“SUCCESS” or “FAILED”)

return type: str

Example

>>> client.connections.delete(connection_id)
get_datasource_type_uid_by_name(name)[source]

Get the stored datasource type id for the given datasource type name.

Parameters

Important

  1. name: name of datasource type

    type: str

Output

Important

returns: id of the given datasource type name

return type: str

Example

>>> client.connections.get_datasource_type_uid_by_name('cloudobjectstorage')
get_details(connection_id)[source]

Get connection details for the given unique Connection id.

Parameters

Important

  1. connection_id: Unique id of Connection

    type: str

Output

Important

returns: Metadata of the stored Connection

return type: dict

Example

>>> connection_details = client.connections.get_details(connection_id)
static get_uid(connection_details)[source]

Get Unique Id of stored connection.

Parameters

Important

  1. connection_details: Metadata of the stored connection

    type: dict

Output

Important

returns: Unique Id of stored connection

return type: str

Example

>>> connection_uid = client.connections.get_uid(connection_details)
list()[source]

List all stored connections.

Output

Important

This method only prints the list of all connections in a table format.

return type: None

Example

>>> client.connections.list()
list_datasource_types()[source]

List stored datasource types assets.

Output

Important

This method only prints the list of datasource types in a table format.

return type: None

Example

>>> client.connections.list_datasource_types()
class metanames.ConnectionMetaNames[source]

Set of MetaNames for Connection Specs.

Available MetaNames:

MetaName | Type | Required | Example value
NAME | str | Y | my_space
DESCRIPTION | str | N | my_description
DATASOURCE_TYPE | str | Y | 1e3363a5-7ccf-4fff-8022-4850a8024b68
PROPERTIES | dict | Y | {'database': 'BLUDB', 'host': 'dashdb-txn-sbox-yp-dal09-04.services.dal.bluemix.net', 'password': 'a1b2c3d4#', 'username': 'usr21370'}

data_assets

class client.Assets(client)[source]

Store and manage your data assets.

ConfigurationMetaNames = <ibm_watson_machine_learning.metanames.AssetsMetaNames object>

MetaNames for Data Assets creation.

create(name, file_path)[source]

Creates a data asset and uploads content to it.

Parameters

Important

  1. name: Name to be given to the data asset

    type: str

  2. file_path: Path to the content file to be uploaded

    type: str

Output

Important

returns: metadata of the stored data asset

return type: dict

Example
>>> asset_details = client.data_assets.create(name="sample_asset",file_path="/path/to/file")
delete(asset_uid)[source]

Delete a stored data asset.

Parameters

Important

  1. asset_uid: Unique Id of data asset

    type: str

Output

Important

returns: status (“SUCCESS” or “FAILED”)

return type: str

Example

>>> client.data_assets.delete(asset_uid)
download(asset_uid, filename)[source]

Download the content of a data asset.

Parameters

Important

  1. asset_uid: The Unique Id of the data asset to be downloaded

    type: str

  2. filename: filename to be used for the downloaded file

    type: str

Output

returns: Path to the downloaded asset content

return type: str

Example

>>> client.data_assets.download(asset_uid,"sample_asset.csv")
get_details(asset_uid)[source]

Get data asset details.

Parameters

Important

  1. asset_uid: Unique Id of data asset

    type: str

Output

Important

returns: metadata of the stored data asset

return type: dict

Example

>>> asset_details = client.data_assets.get_details(asset_uid)
static get_href(asset_details)[source]

Get url of stored data asset.

Parameters

Important

  1. asset_details: stored data asset details

    type: dict

Output

Important

returns: href of stored data asset

return type: str

Example

>>> asset_details = client.data_assets.get_details(asset_uid)
>>> asset_href = client.data_assets.get_href(asset_details)
static get_id(asset_details)[source]

Get Unique Id of stored data asset.

Parameters

Important

  1. asset_details: Metadata of the stored data asset

    type: dict

Output

Important

returns: Unique Id of stored data asset

return type: str

Example

>>> asset_id = client.data_assets.get_id(asset_details)
static get_uid(asset_details)[source]

Get Unique Id of stored data asset. Deprecated!! Use get_id(details) instead

Parameters

Important

  1. asset_details: Metadata of the stored data asset

    type: dict

Output

Important

returns: Unique Id of stored asset

return type: str

Example

>>> asset_uid = client.data_assets.get_uid(asset_details)
list(limit=None)[source]

List stored data assets. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all data assets in a table format.

return type: None

Example

>>> client.data_assets.list()
store(meta_props)[source]

Creates a data asset and uploads content to it.

Parameters

Important

  1. meta_props: metadata of the data asset configuration. To see available meta names use:

    >>> client.data_assets.ConfigurationMetaNames.get()
    

    type: dict

    Example

    Example of data asset creation for files:

    >>> metadata = {
    >>>  client.data_assets.ConfigurationMetaNames.NAME: 'my data assets',
    >>>  client.data_assets.ConfigurationMetaNames.DESCRIPTION: 'sample description',
    >>>  client.data_assets.ConfigurationMetaNames.DATA_CONTENT_NAME: 'sample.csv'
    >>> }
    >>> asset_details = client.data_assets.store(meta_props=metadata)
    

    Example of data asset creation using connection:

    >>> metadata = {
    >>>  client.data_assets.ConfigurationMetaNames.NAME: 'my data assets',
    >>>  client.data_assets.ConfigurationMetaNames.DESCRIPTION: 'sample description',
    >>>  client.data_assets.ConfigurationMetaNames.CONNECTION_ID: '39eaa1ee-9aa4-4651-b8fe-95d3ddae',
    >>>  client.data_assets.ConfigurationMetaNames.DATA_CONTENT_NAME: 't1/sample.csv'
    >>> }
    >>> asset_details = client.data_assets.store(meta_props=metadata)
    

    Example for data asset creation with database sources type connection:

    >>> metadata = {
    >>>  client.data_assets.ConfigurationMetaNames.NAME: 'my data assets',
    >>>  client.data_assets.ConfigurationMetaNames.DESCRIPTION: 'sample description',
    >>>  client.data_assets.ConfigurationMetaNames.CONNECTION_ID: '23eaf1ee-96a4-4651-b8fe-95d3dadfe',
    >>>  client.data_assets.ConfigurationMetaNames.DATA_CONTENT_NAME: 't1'
    >>> }
    >>> asset_details = client.data_assets.store(meta_props=metadata)
    
class metanames.AssetsMetaNames[source]

Set of MetaNames for Data Asset Specs.

Available MetaNames:

MetaName | Type | Required | Example value
NAME | str | Y | my_data_asset
DATA_CONTENT_NAME | str | Y | /test/sample.csv
CONNECTION_ID | str | N | 39eaa1ee-9aa4-4651-b8fe-95d3ddae
DESCRIPTION | str | N | my_description

deployments

class client.Deployments(client)[source]

Deploy and score published artifacts (models and functions).

create(artifact_uid=None, meta_props=None, rev_id=None, **kwargs)[source]

Create a deployment from an artifact. An artifact is a model or function that can be deployed.

Parameters

Important

  1. artifact_uid: Published artifact UID (model or function uid)

    type: str

  2. meta_props: metaprops. To see the available list of metanames use:

    >>> client.deployments.ConfigurationMetaNames.get()
    

    type: dict

Output

Important

returns: metadata of the created deployment

return type: dict

Example
>>> meta_props = {
>>> client.deployments.ConfigurationMetaNames.NAME: "SAMPLE DEPLOYMENT NAME",
>>> client.deployments.ConfigurationMetaNames.ONLINE: {},
>>> client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: { "id": "e7ed1d6c-2e89-42d7-aed5-8sb972c1d2b"}
>>> }
>>> deployment_details = client.deployments.create(artifact_uid, meta_props)
create_job(deployment_id, meta_props)[source]

Create an asynchronous deployment job.

Parameters

Important

  1. deployment_id: Unique Id of Deployment

    type: str

  2. meta_props: metaprops. To see the available list of metanames use:

>>> client.deployments.ScoringMetaNames.get() or client.deployments.DecisionOptimizationMetaNames.get()
    

    type: dict

Output

Important

returns: metadata of the created async deployment job

return type: dict

Note

  • The valid payloads for scoring input are either a list of values, or pandas or numpy dataframes.

Example

>>> scoring_payload = {wml_client.deployments.ScoringMetaNames.INPUT_DATA: [{'fields': ['GENDER','AGE','MARITAL_STATUS','PROFESSION'], 'values': [['M',23,'Single','Student'],['M',55,'Single','Executive']]}]}
>>> async_job = client.deployments.create_job(deployment_id, scoring_payload)
delete(deployment_uid)[source]

Delete deployment.

Parameters

Important

  1. deployment_uid: Unique Id of Deployment

    type: str

Output

Important

returns: status (“SUCCESS” or “FAILED”)

return type: str

Example

>>> client.deployments.delete(deployment_uid)
delete_job(job_uid, hard_delete=False)[source]

Cancels a deployment job that is currently running. This method can also be used to delete metadata details of completed or canceled jobs when the hard_delete parameter is set to True.

Parameters

Important

  1. job_uid: Unique Id of deployment job which should be canceled

    type: str

  2. hard_delete: specify True or False.

    True - deletes the completed or canceled job. False - cancels the currently running deployment job. Default value is False.

    type: Boolean

Output

Important

returns: status (“SUCCESS” or “FAILED”)

return type: str

Example

>>> client.deployments.delete_job(job_uid)
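To also remove the metadata of a completed or canceled job, pass the hard_delete flag described above:

>>> client.deployments.delete_job(job_uid, hard_delete=True)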
download(virtual_deployment_uid, filename=None)[source]

Downloads the file deployment with the specified deployment Id. The only format currently supported is Core ML.

Parameters
  • virtual_deployment_uid ({str_type}) – Unique Id of virtual deployment

  • filename ({str_type}) – filename of downloaded archive (optional)

Returns

path to downloaded file

Return type

{str_type}

get_details(deployment_uid=None, limit=None)[source]

Get information about your deployment(s). If deployment_uid is not passed, all deployment details are fetched.

Parameters

Important

  1. deployment_uid: Unique Id of Deployment (optional)

    type: str

  2. limit: limit number of fetched records (optional)

    type: int

Output

Important

returns: metadata of deployment(s)

return type: dict

dict (if deployment_uid is not None) or {“resources”: [dict]} (if deployment_uid is None)

Note

If deployment_uid is not specified, all deployments metadata is fetched

Example

>>> deployment_details = client.deployments.get_details(deployment_uid)
>>> deployment_details = client.deployments.get_details(deployment_uid=deployment_uid)
>>> deployments_details = client.deployments.get_details()
static get_download_url(deployment_details)[source]

Get deployment_download_url from deployment details.

Parameters

deployment_details (dict) – Created deployment details

Returns

deployment download URL that is used to get file deployment (for example: Core ML)

Return type

{str_type}

A way you might use me is:

>>> deployment_url = client.deployments.get_download_url(deployment)
static get_href(deployment_details)[source]

Get deployment_href from deployment details.

Parameters

Important

  1. deployment_details: Metadata of the deployment

    type: dict

Output

Important

returns: deployment href that is used to manage the deployment

return type: str

Example

>>> deployment_href = client.deployments.get_href(deployment)
static get_id(deployment_details)[source]

Get deployment id from deployment details.

Parameters

Important

  1. deployment_details: Metadata of the deployment

    type: dict

Output

Important

returns: deployment ID that is used to manage the deployment

return type: str

Example

>>> deployment_id = client.deployments.get_id(deployment)
get_job_details(job_uid=None, limit=None)[source]

Get information about your deployment job(s). If deployment job_uid is not passed, all deployment jobs details are fetched.

Parameters

Important

  1. job_uid: Unique Job ID (optional)

    type: str

  2. limit: limit number of fetched records (optional)

    type: int

Output

Important

returns: metadata of deployment job(s)

return type: dict

dict (if job_uid is not None) or {“resources”: [dict]} (if job_uid is None)

Note

If job_uid is not specified, all deployment jobs metadata associated with the deployment Id is fetched

Example

>>> deployment_details = client.deployments.get_job_details()
>>> deployments_details = client.deployments.get_job_details(job_uid=job_uid)
get_job_href(job_details)[source]

Get the href of the deployment job.

Parameters

Important

  1. job_details: metadata of the deployment job

    type: dict

Output

Important

returns: href of the deployment job

return type: str

Example

>>> job_details = client.deployments.get_job_details(job_uid=job_uid)
>>> job_status = client.deployments.get_job_href(job_details)
get_job_status(job_id)[source]

Get the status of the deployment job.

Parameters

Important

  1. job_id: Unique Id of the deployment job

    type: str

Output

Important

returns: status of the deployment job

return type: dict

Example

>>> job_status = client.deployments.get_job_status(job_uid)
get_job_uid(job_details)[source]

Get the Unique Id of the deployment job.

Parameters

Important

  1. job_details: metadata of the deployment job

    type: dict

Output

Important

returns: Unique Id of the deployment job

return type: str

Example

>>> job_details = client.deployments.get_job_details(job_uid=job_uid)
>>> job_status = client.deployments.get_job_uid(job_details)
static get_scoring_href(deployment_details)[source]

Get scoring url from deployment details.

Parameters

Important

  1. deployment_details: Metadata of the deployment

    type: dict

Output

Important

returns: scoring endpoint url that is used for making scoring requests

return type: str

Example

>>> scoring_href = client.deployments.get_scoring_href(deployment)
static get_uid(deployment_details)[source]

Get deployment_uid from deployment details. Deprecated!! Use get_id(deployment_details) instead

Parameters

Important

  1. deployment_details: Metadata of the deployment

    type: dict

Output

Important

returns: deployment UID that is used to manage the deployment

return type: str

Example

>>> deployment_uid = client.deployments.get_uid(deployment)
list(limit=None)[source]

List deployments. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all deployments in a table format.

return type: None

Example

>>> client.deployments.list()
list_jobs(limit=None)[source]

List the async deployment jobs. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all async jobs in a table format.

return type: None

Note

  • This method lists only async deployment jobs created for a WML deployment.

Example

>>> client.deployments.list_jobs()
score(deployment_id, meta_props)[source]

Make scoring requests against deployed artifact.

Parameters

Important

  1. deployment_id: Unique Id of the deployment to be scored

    type: str

  2. meta_props: Meta props for scoring. To view the list of ScoringMetaNames, use:

    >>> client.deployments.ScoringMetaNames.show()
    

    type: dict

  3. transaction_id: transaction id to be passed with records during payload logging (optional)

    type: str

Output

Important

returns: scoring result containing prediction and probability

return type: dict

Note

  • client.deployments.ScoringMetaNames.INPUT_DATA is the only metaname valid for sync scoring.

  • The valid payloads for scoring input are either a list of values, or pandas or numpy dataframes.

Example

>>> scoring_payload = {wml_client.deployments.ScoringMetaNames.INPUT_DATA:
>>>        [{'fields':
>>>            ['GENDER','AGE','MARITAL_STATUS','PROFESSION'],
>>>            'values': [
>>>                ['M',23,'Single','Student'],
>>>                ['M',55,'Single','Executive']
>>>                ]
>>>         }
>>>       ]}
>>> predictions = client.deployments.score(deployment_id, scoring_payload)
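Since pandas dataframes are valid payloads (see the note above), a sketch of the same request built from a DataFrame; this assumes the client accepts a DataFrame directly in the 'values' entry and derives the fields from its columns:

>>> import pandas as pd
>>> df = pd.DataFrame([['M', 23, 'Single', 'Student'],
>>>                    ['M', 55, 'Single', 'Executive']],
>>>                   columns=['GENDER', 'AGE', 'MARITAL_STATUS', 'PROFESSION'])
>>> scoring_payload = {client.deployments.ScoringMetaNames.INPUT_DATA: [{'values': df}]}
>>> predictions = client.deployments.score(deployment_id, scoring_payload)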
update(deployment_uid, changes)[source]

Updates existing deployment metadata. If ASSET is patched, the 'id' field is mandatory and a deployment is started with the provided asset id/rev. The deployment id remains the same.

Parameters

Important

  1. deployment_uid: Unique Id of deployment which should be updated

    type: str

  2. changes: elements which should be changed, where keys are ConfigurationMetaNames

    type: dict

Output

Important

returns: metadata of updated deployment

return type: dict

Example

>>> metadata = {
>>> client.deployments.ConfigurationMetaNames.NAME:"updated_Deployment",
>>> client.deployments.ConfigurationMetaNames.ASSET: { "id": "ca0cd864-4582-4732-b365-3165598dc945", "rev":"2" }
>>> }
>>>
>>> deployment_details = client.deployments.update(deployment_uid, changes=metadata)
class metanames.DeploymentNewMetaNames[source]

Set of MetaNames for Deployments Specs.

Available MetaNames:

MetaName | Type | Required | Example value | Schema
TAGS | list | N | ['string1', 'string2'] | ['string']
NAME | str | N | my_deployment |
DESCRIPTION | str | N | my_deployment |
CUSTOM | dict | N | {} |
ASSET | dict | N | {'id': '4cedab6d-e8e4-4214-b81a-2ddb122db2ab', 'rev': '1'} |
HARDWARE_SPEC | dict | N | {'id': '3342-1ce536-20dc-4444-aac7-7284cf3befc'} |
HYBRID_PIPELINE_HARDWARE_SPECS | list | N | [{'node_runtime_id': 'autoai.kb', 'hardware_spec': {'id': '3342-1ce536-20dc-4444-aac7-7284cf3befc', 'num_nodes': '2'}}] |
ONLINE | dict | N | {} |
BATCH | dict | N | {} |
R_SHINY | dict | N | {'authentication': 'anyone_with_url'} |
VIRTUAL | dict | N | {} |
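For instance, a sketch of batch deployment metadata built from the BATCH and HARDWARE_SPEC metanames above (the hardware spec id is illustrative; batch deployments are then scored through create_job() rather than score()):

>>> meta_props = {
>>>    client.deployments.ConfigurationMetaNames.NAME: "SAMPLE BATCH DEPLOYMENT",
>>>    client.deployments.ConfigurationMetaNames.BATCH: {},
>>>    client.deployments.ConfigurationMetaNames.HARDWARE_SPEC: {'id': '3342-1ce536-20dc-4444-aac7-7284cf3befc'}
>>> }
>>> deployment_details = client.deployments.create(artifact_uid, meta_props)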

class metanames.ScoringMetaNames[source]

Set of MetaNames for Scoring.

Available MetaNames:

MetaName | Type | Required | Example value | Schema
NAME | str | N | jobs test |
INPUT_DATA | list | N | [{'fields': ['name', 'age', 'occupation'], 'values': [['john', 23, 'student']]}] | [{'name(optional)': 'string', 'id(optional)': 'string', 'fields(optional)': 'array[string]', 'values': 'array[array[string]]'}]
INPUT_DATA_REFERENCES | list | N | | [{'id(optional)': 'string', 'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'endpoint_url(required)': 'string', 'access_key_id(required)': 'string', 'secret_access_key(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}, 'schema(optional)': {'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}}]
OUTPUT_DATA_REFERENCE | dict | N | | {'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'endpoint_url(required)': 'string', 'access_key_id(required)': 'string', 'secret_access_key(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}, 'schema(optional)': {'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}}
EVALUATIONS_SPEC | list | N | [{'id': 'string', 'input_target': 'string', 'metrics_names': ['auroc', 'accuracy']}] | [{'id(optional)': 'string', 'input_target(optional)': 'string', 'metrics_names(optional)': 'array[string]'}]
ENVIRONMENT_VARIABLES | dict | N | {'my_env_var1': 'env_var_value1', 'my_env_var2': 'env_var_value2'} |

class metanames.DecisionOptimizationMetaNames[source]

Set of MetaNames for Decision Optimization.

Available MetaNames:

MetaName | Type | Required | Example value | Schema
INPUT_DATA | list | N | [{'fields': ['name', 'age', 'occupation'], 'values': [['john', 23, 'student']]}] | [{'name(optional)': 'string', 'id(optional)': 'string', 'fields(optional)': 'array[string]', 'values': 'array[array[string]]'}]
INPUT_DATA_REFERENCES | list | N | [{'fields': ['name', 'age', 'occupation'], 'values': [['john', 23, 'student']]}] | [{'name(optional)': 'string', 'id(optional)': 'string', 'fields(optional)': 'array[string]', 'values': 'array[array[string]]'}]
OUTPUT_DATA | list | N | | [{'name(optional)': 'string'}]
OUTPUT_DATA_REFERENCES | list | N | | {'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'endpoint_url(required)': 'string', 'access_key_id(required)': 'string', 'secret_access_key(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}, 'schema(optional)': {'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}}
SOLVE_PARAMETERS | dict | N | |
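A sketch of a Decision Optimization job payload built from these metanames and submitted with create_job() (the file names and data values are illustrative; the field layout follows the schemas above):

>>> solve_payload = {
>>>    client.deployments.DecisionOptimizationMetaNames.INPUT_DATA: [{
>>>        'id': 'diet_food.csv',
>>>        'fields': ['name', 'unit_cost'],
>>>        'values': [['Roasted Chicken', 0.84]]
>>>    }],
>>>    client.deployments.DecisionOptimizationMetaNames.OUTPUT_DATA: [{'name': 'solution.csv'}]
>>> }
>>> job_details = client.deployments.create_job(deployment_id, solve_payload)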

hardware_specifications

class client.HwSpec(client)[source]

Store and manage your hardware specs.

ConfigurationMetaNames = <ibm_watson_machine_learning.metanames.HwSpecMetaNames object>

MetaNames for Hardware Specification.

get_details(hw_spec_uid)[source]

Get hardware specification details.

Parameters

Important

  1. hw_spec_uid: Unique id of the hardware spec

    type: str

Output

Important

returns: Metadata of the hardware specifications

return type: dict

Example

>>> hw_spec_details = client.hardware_specifications.get_details(hw_spec_uid)
static get_href(hw_spec_details)[source]

Get url of hardware specifications.

Parameters

Important

  1. hw_spec_details: hardware specifications details

    type: dict

Output

Important

returns: href of hardware specifications

return type: str

Example

>>> hw_spec_details = client.hardware_specifications.get_details(hw_spec_uid)
>>> hw_spec_href = client.hardware_specifications.get_href(hw_spec_details)
static get_id(hw_spec_details)[source]

Get ID of hardware specifications asset.

Parameters

Important

  1. hw_spec_details: Metadata of the hardware specifications

    type: dict

Output

Important

returns: Unique Id of hardware specifications

return type: str

Example

>>> asset_uid = client.hardware_specifications.get_id(hw_spec_details)
get_id_by_name(hw_spec_name)[source]

Get Unique Id of hardware specification for the given name.

Parameters

Important

  1. hw_spec_name: name of the hardware spec

    type: str

Output

Important

returns: Unique Id of hardware specification

return type: str

Example

>>> asset_uid = client.hardware_specifications.get_id_by_name(hw_spec_name)
static get_uid(hw_spec_details)[source]

Get UID of hardware specifications asset. Deprecated!! Use get_id(hw_spec_details) instead

Parameters

Important

  1. hw_spec_details: Metadata of the hardware specifications

    type: dict

Output

Important

returns: Unique Id of hardware specifications

return type: str

Example

>>> asset_uid = client.hardware_specifications.get_uid(hw_spec_details)
get_uid_by_name(hw_spec_name)[source]

Get Unique Id of hardware specification for the given name. Deprecated!! Use get_id_by_name(self, hw_spec_name) instead

Parameters

Important

  1. hw_spec_name: name of the hardware spec

    type: str

Output

Important

returns: Unique Id of hardware specification

return type: str

Example

>>> asset_uid = client.hardware_specifications.get_uid_by_name(hw_spec_name)
list(name=None)[source]

List hardware specifications.

Parameters

Important

  1. name: Name of the hardware spec (optional)

    type: str

Output

Important

This method only prints the list of all hardware specifications in a table format.

return type: None

Example

>>> client.hardware_specifications.list()
class metanames.HwSpecMetaNames[source]

Set of MetaNames for Hardware Specifications.

Available MetaNames:

MetaName | Type | Required | Example value
NAME | str | Y | Python 3.6 with pre-installed ML package
DESCRIPTION | str | N | my_description
HARDWARE_CONFIGURATION | dict | N | {}

model_definitions

class client.ModelDefinition(client)[source]

Store and manage your model_definitions.

ConfigurationMetaNames = <ibm_watson_machine_learning.metanames.ModelDefinitionMetaNames object>

MetaNames for model_definition creation.

create_revision(model_definition_uid)[source]

Creates a revision for the given model_definition. Revisions are immutable once created. The metadata and attachment of the model_definition are taken and a revision is created from them.

Parameters

model_definition_uid ({str_type}) – model_definition ID. Mandatory

Returns

stored model_definition revisions metadata

Return type

dict

>>> model_definition_revision = client.model_definitions.create_revision(model_definition_id)
delete(model_definition_uid)[source]

Delete a stored model_definition.

Parameters

Important

  1. model_definition_uid: Unique Id of stored Model definition

    type: str

Output

Important

returns: status (“SUCCESS” or “FAILED”)

return type: str

Example

>>> client.model_definitions.delete(model_definition_uid)
download(model_definition_uid, filename, rev_id=None)[source]

Download the content of a script asset.

Parameters

Important

  1. model_definition_uid: The Unique Id of the model_definition asset to be downloaded

    type: str

  2. filename: filename to be used for the downloaded file

    type: str

  3. rev_id: Revision id

    type: str

Output

returns: Path to the downloaded asset content

return type: str

Example

>>> client.model_definitions.download(model_definition_uid, "model_definition_file.zip")
get_details(model_definition_uid)[source]

Get metadata of stored model_definition.

Parameters

Important

  1. model_definition_uid: Unique Id of model_definition

    type: str

Output

Important

returns: metadata of model definition

return type: dict

Example

>>> model_definition_details = client.model_definitions.get_details(model_definition_uid)
get_href(model_definition_details)[source]

Get href of stored model_definition.

Parameters

model_definition_details (dict) – stored model_definition details

Returns

href of stored model_definition

Return type

{str_type}

Example:

>>> model_definition_href = client.model_definitions.get_href(model_definition_details)
get_id(model_definition_details)[source]

Get Unique Id of stored model_definition asset.

Parameters

Important

  1. model_definition_details: Metadata of the stored model_definition asset

    type: dict

Output

Important

returns: Unique Id of stored model_definition asset

return type: str

Example

>>> asset_id = client.model_definitions.get_id(model_definition_details)
get_revision_details(model_definition_uid, rev_uid=None)[source]

Get metadata of model_definition_uid revision.

Parameters
  • model_definition_uid ({str_type}) – model_definition ID. Mandatory

  • rev_uid (int) – Revision ID. If this parameter is not provided, the latest revision is returned if it exists; otherwise an error is raised

Returns

stored model definitions metadata

Return type

dict

A way you might use me is:

>>> script_details = client.model_definitions.get_revision_details(model_definition_uid, rev_uid)
get_uid(model_definition_details)[source]

Get uid of stored model_definition. Deprecated!! Use get_id(model_definition_details) instead

Parameters

model_definition_details (dict) – stored model_definition details

Returns

uid of stored model_definition

Return type

{str_type}

A way you might use me is:

>>> model_definition_uid = client.model_definitions.get_uid(model_definition_details)
list(limit=None)[source]

List stored model_definition assets. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all model_definition assets in a table format.

return type: None

Example

>>> client.model_definitions.list()
list_revisions(model_definition_uid, limit=None)[source]

List revisions of the given model_definition asset. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. model_definition_uid: Unique id of model_definition

    type: str

  2. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all model_definition revision in a table format.

return type: None

Example

>>> client.model_definitions.list_revisions(model_definition_uid)
store(model_definition, meta_props)[source]

Create a model_definition.

Parameters

Important

  1. meta_props: meta data of the model_definition configuration. To see available meta names use:

    >>> client.model_definitions.ConfigurationMetaNames.get()
    

    type: dict

  2. model_definition: Path to the content file to be uploaded

    type: str

Output

Important

returns: Metadata of the model_definition created

return type: dict

Example

>>> client.model_definitions.store(model_definition, meta_props)
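A fuller sketch with meta_props assembled from the ModelDefinitionMetaNames listed below (the values mirror the table's examples; the archive path is hypothetical):

>>> meta_props = {
>>>    client.model_definitions.ConfigurationMetaNames.NAME: "my_model_definition",
>>>    client.model_definitions.ConfigurationMetaNames.DESCRIPTION: "my model_definition",
>>>    client.model_definitions.ConfigurationMetaNames.VERSION: "1.0",
>>>    client.model_definitions.ConfigurationMetaNames.PLATFORM: {"name": "python", "versions": ["3.5"]},
>>>    client.model_definitions.ConfigurationMetaNames.COMMAND: "python3 convolutional_network.py"
>>> }
>>> model_definition_details = client.model_definitions.store("/path/to/model_definition.tar.gz", meta_props)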
update(model_definition_id, meta_props=None, file_path=None)[source]

Update model_definition with either metadata or attachment or both.

Parameters

  • model_definition_id (str) – model_definition ID

  • meta_props (dict) – metadata of the model_definition to be updated (optional)

  • file_path (str) – Path to the content file to be uploaded (optional)

Returns

updated metadata of model_definition

Return type

dict

A way you might use me is:

>>> model_definition_details = client.model_definitions.update(model_definition_id, meta_props, file_path)

class metanames.ModelDefinitionMetaNames[source]

Set of MetaNames for Model Definition.

Available MetaNames:

MetaName | Type | Required | Example value | Schema
NAME | str | Y | my_model_definition |
DESCRIPTION | str | N | my model_definition |
PLATFORM | dict | Y | {'name': 'python', 'versions': ['3.5']} | {'name(required)': 'string', 'versions(required)': ['versions']}
VERSION | str | Y | 1.0 |
COMMAND | str | N | python3 convolutional_network.py |
CUSTOM | dict | N | {'field1': 'value1'} |
SPACE_UID | str | N | 3c1ce536-20dc-426e-aac7-7284cf3befc6 |

package_extensions

class client.PkgExtn(client)[source]

Store and manage your software Packages Extension specs.

ConfigurationMetaNames = <ibm_watson_machine_learning.metanames.PkgExtnMetaNames object>

MetaNames for Package Extensions creation.

delete(pkg_extn_uid)[source]

Delete a package extension.

Parameters

Important

  1. pkg_extn_uid: Unique Id of package extension

    type: str

Output

Important

returns: status (“SUCCESS” or “FAILED”)

return type: str

Example

>>> client.package_extensions.delete(pkg_extn_uid)
download(pkg_extn_id, filename)[source]

Download a package extension.

Parameters

Important

  1. pkg_extn_id: The Unique Id of the package extension to be downloaded

    type: str

  2. filename: filename to be used for the downloaded file

    type: str

Output

returns: Path to the downloaded package extension content

return type: str

Example

>>> client.package_extensions.download(pkg_extn_id, "sample_conda.yml/custom_library.zip")
get_details(pkg_extn_uid)[source]

Get package extensions details.

Parameters

Important

  1. pkg_extn_uid: Unique Id of the package extension

    type: str

Output

Important

returns: Metadata of the package extension

return type: dict

Example

>>> pkg_extn_details = client.package_extensions.get_details(pkg_extn_uid)
static get_href(pkg_extn_details)[source]

Get url of stored package extensions.

Parameters

Important

  1. pkg_extn_details: package extension details

    type: dict

Output

Important

returns: href of package extensions details

return type: str

Example

>>> pkg_extn_details = client.package_extensions.get_details(pkg_extn_uid)
>>> pkg_extn_href = client.package_extensions.get_href(pkg_extn_details)
static get_id(pkg_extn_details)[source]

Get Unique Id of package extensions.

Parameters

Important

  1. pkg_extn_details: Metadata of the package extensions

    type: dict

Output

Important

returns: Unique Id of package extension

return type: str

Example

>>> asset_id = client.package_extensions.get_id(pkg_extn_details)
get_id_by_name(pkg_extn_name)[source]

Get Unique Id of a package extension for the given name.

Parameters

Important

  1. pkg_extn_name: name of the package extension

    type: str

Output

Important

returns: Unique Id of package extension

return type: str

Example

>>> asset_id = client.package_extensions.get_id_by_name(pkg_extn_name)
static get_uid(pkg_extn_details)[source]

Get Unique Id of package extensions. Deprecated!! use get_id(pkg_extn_details) instead

Parameters

Important

  1. pkg_extn_details: Metadata of the package extensions

    type: dict

Output

Important

returns: Unique Id of package extension

return type: str

Example

>>> asset_uid = client.package_extensions.get_uid(pkg_extn_details)
get_uid_by_name(pkg_extn_name)[source]

Get UID of package extensions. Deprecated!! Use get_id_by_name(pkg_extn_name) instead

Parameters

Important

  1. pkg_extn_name: name of the package extension

    type: str

Output

Important

returns: Unique Id of package extension

return type: str

Example

>>> asset_uid = client.package_extensions.get_uid_by_name(pkg_extn_name)
list()[source]

List package extensions.

Output

Important

This method only prints the list of all package extensions in a table format.

return type: None

Example

>>> client.package_extensions.list()
store(meta_props, file_path)[source]

Create a package extension.

Parameters

Important

  1. meta_props: metadata of the package extension. To see available meta names use:

    >>> client.package_extensions.ConfigurationMetaNames.get()
    

    type: dict

  2. file_path: path to the file to be uploaded as the package extension content

    type: str

Output

Important

returns: metadata of the package extensions

return type: dict

Example

>>> meta_props = {
>>>    client.package_extensions.ConfigurationMetaNames.NAME: "skl_pipeline_heart_problem_prediction",
>>>    client.package_extensions.ConfigurationMetaNames.DESCRIPTION: "description scikit-learn_0.20",
>>>    client.package_extensions.ConfigurationMetaNames.TYPE: "conda_yml",
>>> }
>>> pkg_extn_details = client.package_extensions.store(meta_props=meta_props, file_path="/path/to/file")
class metanames.PkgExtnMetaNames[source]

Set of MetaNames for Package Extensions Specs.

Available MetaNames:

MetaName | Type | Required | Example value
NAME | str | Y | Python 3.6 with pre-installed ML package
DESCRIPTION | str | N | my_description
TYPE | str | Y | conda_yml/custom_library

repository - Use this for working with models, functions, experiments and pipelines

class client.Repository(client)[source]

Store and manage your models, functions, spaces, pipelines and experiments using Watson Machine Learning Repository.

Important

  1. To view ModelMetaNames, use:

    >>> client.repository.ModelMetaNames.show()
    
  2. To view ExperimentMetaNames, use:

    >>> client.repository.ExperimentMetaNames.show()
    
  3. To view FunctionMetaNames, use:

    >>> client.repository.FunctionMetaNames.show()
    
  4. To view PipelineMetaNames, use:

    >>> client.repository.PipelineMetaNames.show()
    
clone(artifact_id, space_id=None, action='copy', rev_id=None)[source]

Creates a new resource (model, runtime, library, experiment, function, or pipeline) identical to the given artifact, either in the same space or in a new space. All dependent assets are cloned too.

Parameters

Important

  1. artifact_id: Guid of the artifact to be cloned

    type: str

  2. space_id: Guid of the space to which the model needs to be cloned. (optional)

    type: str

  3. action: Action specifying “copy” or “move”. (optional)

    type: str

  4. rev_id: Revision ID of the artifact. (optional)

    type: str

Output

Important

returns: Metadata of the model cloned.

return type: dict

Example
>>> client.repository.clone(artifact_id=artifact_id,space_id=space_uid,action="copy")

Note

  • If revision id is not specified, all revisions of the artifact are cloned

  • Default value of the parameter action is copy

  • Space guid is mandatory for move action

create_experiment_revision(experiment_uid)[source]

Create a new version for an experiment.

Parameters

experiment_uid ({str_type}) – Unique ID of the experiment.

Returns

experiment version details

Return type

dict

Example:

>>> stored_experiment_revision_details = client.repository.create_experiment_revision(experiment_uid)

create_function_revision(function_uid)[source]

Create a new version for a function.

Parameters

function_uid ({str_type}) – Unique ID of the function.

Returns

function version details

Return type

dict

Example:

>>> stored_function_revision_details = client.repository.create_function_revision(function_uid)

create_member(space_uid, meta_props)[source]

Create a member within a space.

Parameters

Important

  1. meta_props: meta data of the member configuration. To see available meta names use:

    >>> client.spaces.MemberMetaNames.get()
    

    type: dict

Output

Important

returns: metadata of the stored member

return type: dict

Note

  • client.spaces.MemberMetaNames.ROLE can be any one of the following “viewer, editor, admin”

  • client.spaces.MemberMetaNames.IDENTITY_TYPE can be any one of the following “user,service”

  • client.spaces.MemberMetaNames.IDENTITY can be either service-ID or IAM-userID

Example

>>> metadata = {
>>>  client.spaces.MemberMetaNames.ROLE:"Admin",
>>>  client.spaces.MemberMetaNames.IDENTITY:"iam-ServiceId-5a216e59-6592-43b9-8669-625d341aca71",
>>>  client.spaces.MemberMetaNames.IDENTITY_TYPE:"service"
>>> }
>>> members_details = client.repository.create_member(space_uid=space_id, meta_props=metadata)
create_model_revision(model_uid)[source]

Create a new version for a model.

Parameters

model_uid ({str_type}) – model ID

Returns

model version details

Return type

dict

Example:

>>> stored_model_revision_details = client.repository.create_model_revision(model_uid="MODELID")

create_pipeline_revision(pipeline_uid)[source]

Create a new version for a pipeline.

Parameters

pipeline_uid ({str_type}) – Unique ID of the Pipeline

Returns

pipeline version details

Return type

dict

Example:

>>> stored_pipeline_revision_details = client.repository.create_pipeline_revision(pipeline_uid)

create_revision(artifact_uid)[source]

Create revision for passed artifact_uid.

Parameters

artifact_uid ({str_type}) – unique id of stored model, experiment, function or pipelines

Returns

artifact new revision metadata

Return type

dict

A way you might use me is:

>>> details = client.repository.create_revision(artifact_uid)
delete(artifact_uid)[source]

Delete model, experiment, pipeline, space, runtime, library or function from repository.

Parameters

Important

  1. artifact_uid: Unique id of stored model, experiment, function, pipeline, space, library or runtime

    type: str

Output

Important

returns: status (“SUCCESS” or “FAILED”)

return type: str

Example

>>> client.repository.delete(artifact_uid)
download(artifact_uid, filename='downloaded_artifact.tar.gz', rev_uid=None, format=None)[source]

Downloads configuration file for artifact with specified uid.

Parameters

Important

  1. artifact_uid: Unique Id of model, function, runtime or library

    type: str

  2. filename: Name of the file to which the artifact content has to be downloaded

    default value: downloaded_artifact.tar.gz

    type: str

Output

Important

returns: Path to the downloaded artifact content

return type: str

Note

If filename is not specified, the default filename is “downloaded_artifact.tar.gz”.

Example

>>> client.repository.download(model_uid, 'my_model.tar.gz')
get_details(artifact_uid=None)[source]

Get metadata of stored artifacts. If artifact_uid is not specified returns all models, experiments, functions, pipelines, spaces, libraries and runtimes metadata.

Parameters

Important

  1. artifact_uid: Unique Id of stored model, experiment, function, pipeline, space, library or runtime (optional)

    type: str

Output

Important

returns: stored artifact(s) metadata

return type: dict

dict (if artifact_uid is not None) or {“resources”: [dict]} (if artifact_uid is None)

Note

If artifact_uid is not specified, all models, experiments, functions, pipelines, spaces, libraries and runtimes metadata is fetched

Example

>>> details = client.repository.get_details(artifact_uid)
>>> details = client.repository.get_details()
get_experiment_details(experiment_uid=None, limit=None)[source]

Get metadata of experiment(s). If no experiment_uid is specified, all experiments metadata is returned.

Parameters

Important

  1. experiment_uid: Unique Id of experiment (optional)

    type: str

  2. limit: limit number of fetched records (optional)

    type: int

Output

Important

returns: experiment(s) metadata

return type: dict

dict (if experiment_uid is not None) or {“resources”: [dict]} (if experiment_uid is None)

Note

If experiment_uid is not specified, all experiments metadata is fetched

Example

>>> experiment_details = client.repository.get_experiment_details(experiment_uid)
static get_experiment_href(experiment_details)[source]

Get href of stored experiment.

Parameters

Important

  1. experiment_details: Metadata of the stored experiment

    type: dict

Output

Important

returns: href of stored experiment

return type: str

Example
>>> experiment_details = client.repository.get_experiment_details(experiment_uid)
>>> experiment_href = client.repository.get_experiment_href(experiment_details)
static get_experiment_id(experiment_details)[source]

Get Unique Id of stored experiment.

Parameters

Important

  1. experiment_details: Metadata of the stored experiment

    type: dict

Output

Important

returns: Unique Id of stored experiment

return type: str

Example
>>> experiment_details = client.repository.get_experiment_details(experiment_uid)
>>> experiment_uid = client.repository.get_experiment_id(experiment_details)
get_experiment_revision_details(experiment_uid, rev_id)[source]

Get metadata of experiment revision.

Parameters

Important

  1. experiment_uid: Unique Id of experiment

    type: str

  2. rev_id: Unique id of experiment revision

    type: str

Output

Important

returns: experiment revision metadata

return type: dict

Example
>>> experiment_rev_details = client.repository.get_experiment_revision_details(experiment_uid, rev_id)
static get_experiment_uid(experiment_details)[source]

Get Unique Id of stored experiment.

Parameters

Important

  1. experiment_details: Metadata of the stored experiment

    type: dict

Output

Important

returns: Unique Id of stored experiment

return type: str

Example
>>> experiment_details = client.repository.get_experiment_details(experiment_uid)
>>> experiment_uid = client.repository.get_experiment_uid(experiment_details)
get_function_details(function_uid=None, limit=None)[source]

Get metadata of function(s). If no function_uid is specified, all functions metadata is returned.

Parameters

Important

  1. function_uid: Unique Id of function (optional)

    type: str

  2. limit: limit number of fetched records (optional)

    type: int

Output

Important

returns: function(s) metadata

return type: dict

dict (if function_uid is not None) or {“resources”: [dict]} (if function_uid is None)

Note

If function_uid is not specified, all functions metadata is fetched

Example

>>> function_details = client.repository.get_function_details(function_uid)
>>> function_details = client.repository.get_function_details()
static get_function_href(function_details)[source]

Get href of stored function.

Parameters

Important

  1. function_details: Metadata of the stored function

    type: dict

Output

Important

returns: href of stored function

return type: str

Example

>>> function_details = client.repository.get_function_details(function_uid)
>>> function_url = client.repository.get_function_href(function_details)
static get_function_id(function_details)[source]

Get Id of stored function.

Parameters

Important

  1. function_details: Metadata of the stored function

    type: dict

Output

Important

returns: Id of stored function

return type: str

Example

>>> function_details = client.repository.get_function_details(function_uid)
>>> function_id = client.repository.get_function_id(function_details)
get_function_revision_details(function_uid, rev_id)[source]

Get metadata of function revision.

Parameters

Important

  1. function_uid: Unique Id of function

    type: str

  2. rev_id: Unique Id of function revision

    type: str

Output

Important

returns: function revision metadata

return type: dict

Example

>>> function_rev_details = client.repository.get_function_revision_details(function_uid, rev_id)
static get_function_uid(function_details)[source]

Get Unique Id of stored function. Deprecated!! Use get_function_id(function_details) instead

Parameters

Important

  1. function_details: Metadata of the stored function

    type: dict

Output

Important

returns: Unique Id of stored function

return type: str

Example

>>> function_details = client.repository.get_function_details(function_uid)
>>> function_uid = client.repository.get_function_uid(function_details)
static get_member_href(member_details)[source]

Get member_href from member details.

Parameters

Important

  1. member_details: Metadata of the stored member

    type: dict

Output

Important

returns: member href

return type: str

Example

>>> member_details = client.repository.get_members_details(space_uid, member_id)
>>> member_href = client.repository.get_member_href(member_details)
static get_member_uid(member_details)[source]

Get member_uid from member details.

Parameters

Important

  1. member_details: Metadata of the created member

    type: dict

Output

Important

returns: unique id of member

return type: str

Example

>>> member_details = client.repository.get_members_details(space_uid, member_id)
>>> member_id = client.repository.get_member_uid(member_details)
get_members_details(space_uid, member_id=None, limit=None)[source]

Get metadata of members associated with a space. If member_id is not specified, all the members metadata is returned.

Parameters

Important

  1. space_uid: Unique id of the space

    type: str

  2. member_id: Unique id of the member (optional)

    type: str

  3. limit: limit number of fetched records (optional)

    type: int

Output

Important

returns: metadata of member(s) of a space

return type: dict

dict (if member_id is not None) or {“resources”: [dict]} (if member_id is None)

Note

If member id is not specified, all members metadata is fetched

Example

>>> member_details = client.repository.get_members_details(space_uid, member_id)
get_model_details(model_uid=None, limit=None)[source]

Get metadata of stored model. If model_uid is not specified returns all models metadata.

Parameters

Important

  1. model_uid: Unique Id of Model (optional)

    type: str

  2. limit: limit number of fetched records (optional)

    type: int

Output

Important

returns: metadata of model(s)

return type: dict

dict (if model_uid is not None) or {“resources”: [dict]} (if model_uid is None)

Note

If model_uid is not specified, all models metadata is fetched

Example

>>> model_details = client.repository.get_model_details(model_uid)
>>> models_details = client.repository.get_model_details()
static get_model_href(model_details)[source]

Get href of stored model.

Parameters

Important

  1. model_details: Metadata of the stored model

    type: dict

Output

Important

returns: href of stored model

return type: str

Example

>>> model_details = client.repository.get_model_details(model_uid)
>>> model_href = client.repository.get_model_href(model_details)
static get_model_id(model_details)[source]

Get Unique Id of stored model.

Parameters

Important

  1. model_details: Metadata of the stored model

    type: dict

Output

Important

returns: Unique Id of stored model

return type: str

Example

>>> model_details = client.repository.get_model_details(model_uid)
>>> model_uid = client.repository.get_model_id(model_details)
get_model_revision_details(model_uid, rev_uid)[source]

Get metadata of model revision.

Parameters

Important

  1. model_uid: Unique Id of model

    type: str

  2. rev_uid: Unique id of model revision

    type: str

Output

Important

returns: model revision metadata

return type: dict

Example
>>> model_rev_details = client.repository.get_model_revision_details(model_uid, rev_uid)
static get_model_uid(model_details)[source]

Get Unique Id of stored model.

Parameters

Important

  1. model_details: Metadata of the stored model

    type: dict

Output

Important

returns: Unique Id of stored model

return type: str

Example

>>> model_details = client.repository.get_model_details(model_uid)
>>> model_uid = client.repository.get_model_uid(model_details)
get_pipeline_details(pipeline_uid=None, limit=None)[source]

Get metadata of stored pipelines. If pipeline_uid is not specified returns all pipelines metadata.

Parameters

Important

  1. pipeline_uid: Unique id of Pipeline(optional)

    type: str

  2. limit: limit number of fetched records (optional)

    type: int

Output

Important

returns: metadata of pipeline(s)

return type: dict

dict (if pipeline_uid is not None) or {“resources”: [dict]} (if pipeline_uid is None)

Note

If pipeline_uid is not specified, all pipelines metadata is fetched

Example

>>> pipeline_details = client.repository.get_pipeline_details(pipeline_uid)
>>> pipeline_details = client.repository.get_pipeline_details()
static get_pipeline_href(pipeline_details)[source]

Get pipeline_href from pipeline details.

Parameters

Important

  1. pipeline_details: Metadata of the stored pipeline

    type: dict

Output

Important

returns: pipeline href

return type: str

Example

>>> pipeline_details = client.repository.get_pipeline_details(pipeline_uid)
>>> pipeline_href = client.repository.get_pipeline_href(pipeline_details)
static get_pipeline_id(pipeline_details)[source]

Get pipeline_uid from pipeline details.

Parameters

Important

  1. pipeline_details: Metadata of the stored pipeline

    type: dict

Output

Important

returns: Unique Id of pipeline

return type: str

Example

>>> pipeline_details = client.repository.get_pipeline_details(pipeline_uid)
>>> pipeline_uid = client.repository.get_pipeline_id(pipeline_details)
get_pipeline_revision_details(pipeline_uid, rev_id)[source]

Get metadata of stored pipeline revision.

Parameters

Important

  1. pipeline_uid: Unique id of Pipeline

    type: str

  2. rev_id: Unique id of the Pipeline revision

    type: str

Output

Important

returns: metadata of the pipeline revision

return type: dict

Example

>>> pipeline_rev_details = client.repository.get_pipeline_revision_details(pipeline_uid, rev_id)
static get_pipeline_uid(pipeline_details)[source]

Get pipeline_uid from pipeline details.

Parameters

Important

  1. pipeline_details: Metadata of the stored pipeline

    type: dict

Output

Important

returns: Unique Id of pipeline

return type: str

Example

>>> pipeline_details = client.repository.get_pipeline_details(pipeline_uid)
>>> pipeline_uid = client.repository.get_pipeline_uid(pipeline_details)
get_space_details(space_uid=None, limit=None)[source]

Get metadata of stored space. If space_uid is not specified returns all model spaces metadata.

Parameters

Important

  1. space_uid: Unique id of Space (optional)

    type: str

  2. limit: limit number of fetched records (optional)

    type: int

Output

Important

returns: metadata of space(s)

return type: dict

dict (if space_uid is not None) or {“resources”: [dict]} (if space_uid is None)

Note

If space_uid is not specified, all spaces metadata is fetched

Example

>>> space_details = client.repository.get_space_details(space_uid)
>>> space_details = client.repository.get_space_details()
static get_space_href(space_details)[source]

Get space_href from space details.

Parameters

Important

  1. space_details: Metadata of the stored space

    type: dict

Output

Important

returns: space href

return type: str

Example

>>> space_details = client.repository.get_space_details(space_uid)
>>> space_href = client.repository.get_space_href(space_details)
static get_space_uid(space_details)[source]

Get space_uid from space details.

Parameters

Important

  1. space_details: Metadata of the stored space

    type: dict

Output

Important

returns: Unique Id of space

return type: str

Example

>>> space_details = client.repository.get_space_details(space_uid)
>>> space_uid = client.repository.get_space_uid(space_details)
list()[source]

List stored models, pipelines, runtimes, libraries, functions, spaces and experiments. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all models, pipelines, runtimes, libraries, functions, spaces and experiments in a table format.

return type: None

Example

>>> client.repository.list()
list_experiments(limit=None)[source]

List stored experiments. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all experiments in a table format.

return type: None

Example

>>> client.repository.list_experiments()
list_experiments_revisions(experiment_uid, limit=None)[source]

List stored experiment revisions. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. experiment_uid: Unique Id of the experiment

    type: str

Important

  1. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all revisions of given experiment ID in a table format.

return type: None

Example

>>> client.repository.list_experiments_revisions(experiment_uid)
list_functions(limit=None)[source]

List stored functions. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all functions in a table format.

return type: None

Example

>>> client.repository.list_functions()
list_functions_revisions(function_uid, limit=None)[source]

List stored function revisions. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. function_uid: Unique Id of the function

    type: str

Important

  1. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all revisions of given function ID in a table format.

return type: None

Example

>>> client.repository.list_functions_revisions(function_uid)
list_members(space_uid, limit=None)[source]

List stored members of a space. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. space_uid: Unique id of the space

    type: str

  2. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all members associated with a space in a table format.

return type: None

Example

>>> client.repository.list_members(space_uid)
list_models(limit=None)[source]

List stored models. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all models in a table format.

return type: None

Example

>>> client.repository.list_models()
list_models_revisions(model_uid, limit=None)[source]

List stored model revisions. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. model_uid: Unique Id of the model

    type: str

Important

  1. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all revisions of given model ID in a table format.

return type: None

Example

>>> client.repository.list_models_revisions(model_uid)
list_pipelines(limit=None)[source]

List stored pipelines. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all pipelines in a table format.

return type: None

Example

>>> client.repository.list_pipelines()
list_pipelines_revisions(pipeline_uid, limit=None)[source]

List stored pipeline revisions. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. pipeline_uid: Unique Id of the pipeline

    type: str

Important

  1. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all revisions of given pipeline ID in a table format.

return type: None

Example

>>> client.repository.list_pipelines_revisions(pipeline_uid)
list_spaces(limit=None)[source]

List stored spaces. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all spaces in a table format.

return type: None

Example

>>> client.repository.list_spaces()
load(artifact_uid)[source]

Load model from repository to object in local environment.

Parameters

Important

  1. artifact_uid: Unique Id of model

    type: str

Output

Important

returns: model object

return type: object

Example

>>> model_obj = client.repository.load(model_uid)
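If the loaded artifact is, for example, a scikit-learn model, the returned object can be used directly. A sketch (X_test is a hypothetical feature array):

>>> model_obj = client.repository.load(model_uid)
>>> predictions = model_obj.predict(X_test)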
store_experiment(meta_props)[source]

Create an experiment.

Parameters

Important

  1. meta_props: meta data of the experiment configuration. To see available meta names use:

    >>> client.experiments.ConfigurationMetaNames.get()
    

    type: dict

Output

Important

returns: Metadata of the experiment created

return type: dict

Example

>>> metadata = {
>>>  client.experiments.ConfigurationMetaNames.NAME: 'my_experiment',
>>>  client.experiments.ConfigurationMetaNames.EVALUATION_METRICS: ['accuracy'],
>>>  client.experiments.ConfigurationMetaNames.TRAINING_REFERENCES: [
>>>      {
>>>        'pipeline': {'href': pipeline_href_1}
>>>
>>>      },
>>>      {
>>>        'pipeline': {'href':pipeline_href_2}
>>>      },
>>>   ]
>>> }
>>> experiment_details = client.repository.store_experiment(meta_props=metadata)
>>> experiment_href = client.repository.get_experiment_href(experiment_details)
store_function(function, meta_props)[source]

Create a function.

Parameters

Important

  1. meta_props: meta data or name of the function. To see available meta names use:

    >>> client.repository.FunctionMetaNames.show()
    

    type: dict

  2. function: path to file with archived function content or function (as described above)

    • The ‘function’ parameter may be one of the following:

    • filepath to a gz file

    • reference to a ‘score’ function, which is the function to be deployed

    • generator function, which takes no arguments or only arguments with primitive python default values, and returns a ‘score’ function as its result

    type: str or function

Output

Important

returns: Metadata of the function created.

return type: dict

Example

The simplest use (with a score function) is:

>>> meta_props = {
>>>    client.repository.FunctionMetaNames.NAME: "function",
>>>    client.repository.FunctionMetaNames.DESCRIPTION: "This is ai function",
>>>    client.repository.FunctionMetaNames.SOFTWARE_SPEC_UID: "53dc4cf1-252f-424b-b52d-5cdd9814987f"}
>>> def score(payload):
>>>      values = [[row[0]*row[1]] for row in payload['values']]
>>>      return {'fields': ['multiplication'], 'values': values}
>>> stored_function_details = client.repository.store_function(score, meta_props)

Another, more interesting example uses a generator function. In this case it is possible to pass variables: the values bound to the generator's default arguments (here wml_creds and x) are captured when the function is stored, so the returned score function can use them at scoring time:

>>> wml_creds = {...}
>>> def gen_function(wml_credentials=wml_creds, x=2):
...     def f(payload):
...         values = [[row[0]*row[1]*x] for row in payload['values']]
...         return {'fields': ['multiplication'], 'values': values}
...     return f
>>> stored_function_details = client.repository.store_function(gen_function, meta_props)
store_model(model, meta_props=None, training_data=None, training_target=None, pipeline=None, feature_names=None, label_column_names=None, subtrainingId=None)[source]

Create a model.

Parameters

Important

  1. model:

    Can be one of following:

    • The trained model object:

      • scikit-learn

      • xgboost

      • spark (PipelineModel)

    • path to saved model in format:

      • keras (.tgz)

      • pmml (.xml)

      • scikit-learn (.tar.gz)

      • tensorflow (.tar.gz)

      • spss (.str)

    • directory containing model file(s):

      • scikit-learn

      • xgboost

      • tensorflow

    • unique id of trained model

  2. training_data: Spark DataFrame supported for spark models. Pandas dataframe, numpy.ndarray or array supported for scikit-learn models

    type: spark dataframe, pandas dataframe, numpy.ndarray or array

  3. meta_props: meta data of the models configuration. To see available meta names use:

    >>> client.repository.ModelMetaNames.get()
    

    type: dict

  4. training_target: array with labels required for scikit-learn models

    type: array

  5. pipeline: pipeline required for spark mllib models

    type: object

  6. feature_names: Feature names for the training data in case of Scikit-Learn/XGBoost models. This is applicable only when the training data is not of type pandas.DataFrame.

    type: numpy.ndarray or list

  7. label_column_names: Label column names of the trained Scikit-Learn/XGBoost models.

    type: numpy.ndarray or list

Output

Important

returns: Metadata of the model created

return type: dict

Note

  • For a keras model, model content is expected to contain a .h5 file and an archived version of it.

  • feature_names is an optional argument containing the feature names for the training data in case of Scikit-Learn/XGBoost models. Valid types are numpy.ndarray and list. This is applicable only when the training data is not of type pandas.DataFrame.

  • If the training data is of type pandas.DataFrame and feature_names are provided, feature_names are ignored.

  • For a single input data schema, the value can be a single dictionary (deprecated; use a list even for a single schema) or a list with one element. You can provide multiple schemas as dictionaries inside a list.

Example

>>> stored_model_details = client.repository.store_model(model, name)

In more complicated cases you should create proper metadata, similar to this one:

>>> sw_spec_id = client.software_specifications.get_id_by_name('scikit-learn_0.22-py3.6')
>>> sw_spec_id
>>> metadata = {
>>>        client.repository.ModelMetaNames.NAME: 'customer satisfaction prediction model',
>>>        client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_id,
>>>        client.repository.ModelMetaNames.TYPE: 'scikit-learn_0.22'
>>>}

If you want to provide the input data schema of the model, you can provide it as part of the meta props:

>>> sw_spec_id = client.software_specifications.get_id_by_name('spss-modeler_18.1')
>>> sw_spec_id
>>> metadata = {
>>>        client.repository.ModelMetaNames.NAME: 'customer satisfaction prediction model',
>>>        client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_id,
>>>        client.repository.ModelMetaNames.TYPE: 'spss-modeler_18.1',
>>>        client.repository.ModelMetaNames.INPUT_DATA_SCHEMA: [{'id': 'test',
>>>                                                             'type': 'list',
>>>                                                             'fields': [{'name': 'age', 'type': 'float'},
>>>                                                                        {'name': 'sex', 'type': 'float'},
>>>                                                                         {'name': 'fbs', 'type': 'float'},
>>>                                                                         {'name': 'restbp', 'type': 'float'}]
>>>                                                               },
>>>                                                               {'id': 'test2',
>>>                                                                'type': 'list',
>>>                                                                'fields': [{'name': 'age', 'type': 'float'},
>>>                                                                           {'name': 'sex', 'type': 'float'},
>>>                                                                           {'name': 'fbs', 'type': 'float'},
>>>                                                                           {'name': 'restbp', 'type': 'float'}]
>>>                                                               }]
>>>             }

A way you might use me with local tar.gz containing model:

>>> stored_model_details = client.repository.store_model(path_to_tar_gz, meta_props=metadata, training_data=None)

A way you might use me with local directory containing model file(s):

>>> stored_model_details = client.repository.store_model(path_to_model_directory, meta_props=metadata, training_data=None)

A way you might use me with trained model guid:

>>> stored_model_details = client.repository.store_model(trained_model_guid, meta_props=metadata, training_data=None)
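Putting the pieces together, a minimal end-to-end sketch for a scikit-learn model might look like the following (the dataset and the software specification name are illustrative assumptions):

>>> from sklearn.datasets import load_iris
>>> from sklearn.linear_model import LogisticRegression
>>> X, y = load_iris(return_X_y=True)
>>> model = LogisticRegression(max_iter=200).fit(X, y)
>>> sw_spec_id = client.software_specifications.get_id_by_name('scikit-learn_0.22-py3.6')
>>> metadata = {
>>>     client.repository.ModelMetaNames.NAME: 'iris classifier',
>>>     client.repository.ModelMetaNames.TYPE: 'scikit-learn_0.22',
>>>     client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_id
>>> }
>>> stored_model_details = client.repository.store_model(model, meta_props=metadata, training_data=X, training_target=y)
>>> model_uid = client.repository.get_model_id(stored_model_details)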
store_pipeline(meta_props)[source]

Create a pipeline.

Parameters

Important

  1. meta_props: meta data of the pipeline configuration. To see available meta names use:

    >>> client.pipelines.ConfigurationMetaNames.get()
    

    type: dict

Output

Important

returns: Metadata of the pipeline created

return type: dict

Example

>>> metadata = {
>>>  client.pipelines.ConfigurationMetaNames.NAME: 'my_training_definition',
>>>  client.pipelines.ConfigurationMetaNames.DOCUMENT: {"doc_type":"pipeline","version": "2.0","primary_pipeline": "dlaas_only","pipelines": [{"id": "dlaas_only","runtime_ref": "hybrid","nodes": [{"id": "training","type": "model_node","op": "dl_train","runtime_ref": "DL","inputs": [],"outputs": [],"parameters": {"name": "tf-mnist","description": "Simple MNIST model implemented in TF","command": "python3 convolutional_network.py --trainImagesFile ${DATA_DIR}/train-images-idx3-ubyte.gz --trainLabelsFile ${DATA_DIR}/train-labels-idx1-ubyte.gz --testImagesFile ${DATA_DIR}/t10k-images-idx3-ubyte.gz --testLabelsFile ${DATA_DIR}/t10k-labels-idx1-ubyte.gz --learningRate 0.001 --trainingIters 6000","compute": {"name": "k80","nodes": 1},"training_lib_href":"/v4/libraries/64758251-bt01-4aa5-a7ay-72639e2ff4d2/content"},"target_bucket": "wml-dev-results"}]}]}}
>>> pipeline_details = client.repository.store_pipeline(meta_props=metadata)
>>> pipeline_href = client.repository.get_pipeline_href(pipeline_details)
store_space(meta_props)[source]

Create a space.

Parameters

Important

  1. meta_props: meta data of the space configuration. To see available meta names use:

    >>> client.spaces.ConfigurationMetaNames.get()
    

    type: dict

Output

Important

returns: Metadata of the space created

return type: dict

Example

>>> metadata = {
>>>  client.spaces.ConfigurationMetaNames.NAME: 'my_space'
>>> }
>>> space_details = client.repository.store_space(meta_props=metadata)
>>> space_href = client.repository.get_space_href(space_details)
update_experiment(experiment_uid, changes)[source]

Updates existing experiment metadata.

Parameters

Important

  1. experiment_uid: Unique Id of the experiment whose definition should be updated

    type: str

  2. changes: elements which should be changed, where keys are ConfigurationMetaNames

    type: dict

Output

Important

returns: metadata of updated experiment

return type: dict

Example

>>> metadata = {
>>> client.repository.ExperimentMetaNames.NAME:"updated_exp"
>>> }
>>> exp_details = client.repository.update_experiment(experiment_uid, changes=metadata)
update_function(function_uid, changes, update_function=None)[source]

Updates existing function metadata.

Parameters

Important

  1. function_uid: Unique Id of the function which is to be updated

    type: str

  2. changes: elements which should be changed, where keys are ConfigurationMetaNames

    type: dict

  3. update_function: path to file with archived function content or function which should be changed for specific function_uid. This parameter is valid only for CP4D 3.0.0.

    type: str or function

Output

Important

returns: metadata of updated function

return type: dict

Example

>>> metadata = {
>>> client.repository.FunctionMetaNames.NAME:"updated_function"
>>> }
>>>
>>> function_details = client.repository.update_function(function_uid, changes=metadata)
update_model(model_uid, updated_meta_props=None, update_model=None)[source]

Updates existing model metadata.

Parameters

Important

  1. model_uid: Unique id of the model whose definition should be updated

    type: str

  2. updated_meta_props: elements which should be changed, where keys are ConfigurationMetaNames

    type: dict

  3. update_model: archived model content file or path to directory containing archived model file which should be changed for specific model_uid. This parameter is valid only for CP4D 3.0.0.

    type: object or archived model content file

Output

Important

returns: metadata of updated model

return type: dict

Example 1

>>> metadata = {
>>> client.repository.ModelMetaNames.NAME:"updated_model"
>>> }
>>> model_details = client.repository.update_model(model_uid, updated_meta_props=metadata)

Example 2

>>> metadata = {
>>> client.repository.ModelMetaNames.NAME:"updated_model"
>>> }
>>> model_details = client.repository.update_model(model_uid, updated_meta_props=metadata, update_model="newmodel_content.tar.gz")
update_pipeline(pipeline_uid, changes)[source]

Updates existing pipeline metadata.

Parameters

Important

  1. pipeline_uid: Unique Id of the pipeline whose definition should be updated

    type: str

  2. changes: elements which should be changed, where keys are ConfigurationMetaNames

    type: dict

Output

Important

returns: metadata of updated pipeline

return type: dict

Example

>>> metadata = {
>>> client.repository.PipelineMetanames.NAME:"updated_pipeline"
>>> }
>>> pipeline_details = client.repository.update_pipeline(pipeline_uid, changes=metadata)
update_space(space_uid, changes)[source]

Updates existing space metadata.

Parameters

Important

  1. space_uid: Unique Id of the space whose definition should be updated

    type: str

  2. changes: elements which should be changed, where keys are ConfigurationMetaNames

    type: dict

Output

Important

returns: metadata of updated space

return type: dict

Example

>>> metadata = {
>>> client.repository.SpacesMetaNames.NAME:"updated_space"
>>> }
>>> space_details = client.repository.update_space(space_uid, changes=metadata)
class metanames.ModelMetaNames[source]

Set of MetaNames for models.

Available MetaNames:

  • NAME (str, required). Example value: my_model

  • DESCRIPTION (str, optional). Example value: my_description

  • INPUT_DATA_SCHEMA (list, optional). Example value: {'id': '1', 'type': 'struct', 'fields': [{'name': 'x', 'type': 'double', 'nullable': False, 'metadata': {}}, {'name': 'y', 'type': 'double', 'nullable': False, 'metadata': {}}]}. Schema: {'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}

  • TRAINING_DATA_REFERENCES (list, optional). Example value: []. Schema: [{'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'endpoint_url(required)': 'string', 'access_key_id(required)': 'string', 'secret_access_key(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}, 'schema(optional)': {'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}}]

  • OUTPUT_DATA_SCHEMA (dict, optional). Example value: {'id': '1', 'type': 'struct', 'fields': [{'name': 'x', 'type': 'double', 'nullable': False, 'metadata': {}}, {'name': 'y', 'type': 'double', 'nullable': False, 'metadata': {}}]}. Schema: {'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}

  • LABEL_FIELD (str, optional). Example value: PRODUCT_LINE

  • TRANSFORMED_LABEL_FIELD (str, optional). Example value: PRODUCT_LINE_IX

  • TAGS (list, optional). Example value: ['string', 'string']. Schema: ['string', 'string']

  • SIZE (dict, optional). Example value: {'in_memory': 0, 'content': 0}. Schema: {'in_memory(optional)': 'string', 'content(optional)': 'string'}

  • SPACE_UID (str, optional). Example value: 53628d69-ced9-4f43-a8cd-9954344039a8

  • PIPELINE_UID (str, optional). Example value: 53628d69-ced9-4f43-a8cd-9954344039a8

  • RUNTIME_UID (str, optional). Example value: 53628d69-ced9-4f43-a8cd-9954344039a8

  • TYPE (str, required). Example value: mllib_2.1

  • CUSTOM (dict, optional). Example value: {}

  • DOMAIN (str, optional). Example value: Watson Machine Learning

  • HYPER_PARAMETERS (dict, optional)

  • METRICS (list, optional)

  • IMPORT (dict, optional). Example value: {'connection': {'endpoint_url': 'https://s3-api.us-geo.objectstorage.softlayer.net', 'access_key_id': '***', 'secret_access_key': '***'}, 'location': {'bucket': 'train-data', 'path': 'training_path'}, 'type': 's3'}. Schema: {'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'endpoint_url(required)': 'string', 'access_key_id(required)': 'string', 'secret_access_key(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}}

  • TRAINING_LIB_UID (str, optional). Example value: 53628d69-ced9-4f43-a8cd-9954344039a8

  • MODEL_DEFINITION_UID (str, optional). Example value: 53628d6_cdee13-35d3-s8989343

  • SOFTWARE_SPEC_UID (str, optional). Example value: 53628d69-ced9-4f43-a8cd-9954344039a8

  • TF_MODEL_PARAMS (dict, optional). Example value: {'save_format': 'None', 'signatures': 'struct', 'options': 'None', 'custom_objects': 'string'}
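These names can also be inspected at runtime:

>>> client.repository.ModelMetaNames.get()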

class metanames.ExperimentMetaNames[source]

Set of MetaNames for experiments.

Available MetaNames:

  • NAME (str, required). Example value: Hand-written Digit Recognition

  • DESCRIPTION (str, optional). Example value: Hand-written Digit Recognition training

  • TAGS (list, optional). Example value: [{'value': 'dsx-project.<project-guid>', 'description': 'DSX project guid'}]. Schema: [{'value(required)': 'string', 'description(optional)': 'string'}]

  • EVALUATION_METHOD (str, optional). Example value: multiclass

  • EVALUATION_METRICS (list, optional). Example value: [{'name': 'accuracy', 'maximize': False}]. Schema: [{'name(required)': 'string', 'maximize(optional)': 'boolean'}]

  • TRAINING_REFERENCES (list, required). Example value: [{'pipeline': {'href': '/v4/pipelines/6d758251-bb01-4aa5-a7a3-72339e2ff4d8'}}]. Schema: [{'pipeline(optional)': {'href(required)': 'string', 'data_bindings(optional)': [{'data_reference(required)': 'string', 'node_id(required)': 'string'}], 'nodes_parameters(optional)': [{'node_id(required)': 'string', 'parameters(required)': 'dict'}]}, 'training_lib(optional)': {'href(required)': 'string', 'compute(optional)': {'name(required)': 'string', 'nodes(optional)': 'number'}, 'runtime(optional)': {'href(required)': 'string'}, 'command(optional)': 'string', 'parameters(optional)': 'dict'}}]

  • SPACE_UID (str, optional). Example value: 3c1ce536-20dc-426e-aac7-7284cf3befc6

  • LABEL_COLUMN (str, optional). Example value: label

  • CUSTOM (dict, optional). Example value: {'field1': 'value1'}

class metanames.FunctionMetaNames[source]

Set of MetaNames for AI functions.

Available MetaNames:

  • NAME (str, required). Example value: ai_function

  • DESCRIPTION (str, optional). Example value: This is ai function

  • RUNTIME_UID (str, optional). Example value: 53628d69-ced9-4f43-a8cd-9954344039a8

  • SOFTWARE_SPEC_UID (str, optional). Example value: 53628d69-ced9-4f43-a8cd-9954344039a8

  • INPUT_DATA_SCHEMAS (list, optional). Example value: [{'id': '1', 'type': 'struct', 'fields': [{'name': 'x', 'type': 'double', 'nullable': False, 'metadata': {}}, {'name': 'y', 'type': 'double', 'nullable': False, 'metadata': {}}]}]. Schema: [{'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}]

  • OUTPUT_DATA_SCHEMAS (list, optional). Example value: [{'id': '1', 'type': 'struct', 'fields': [{'name': 'multiplication', 'type': 'double', 'nullable': False, 'metadata': {}}]}]. Schema: [{'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}]

  • TAGS (list, optional). Example value: [{'value': 'ProjectA', 'description': 'Functions created for ProjectA'}]. Schema: [{'value(required)': 'string', 'description(optional)': 'string'}]

  • TYPE (str, optional). Example value: python

  • CUSTOM (dict, optional). Example value: {}

  • SAMPLE_SCORING_INPUT (list, optional). Example value: {'input_data': [{'fields': ['name', 'age', 'occupation'], 'values': [['john', 23, 'student'], ['paul', 33, 'engineer']]}]}. Schema: {'id(optional)': 'string', 'fields(optional)': 'array', 'values(optional)': 'array'}

  • SPACE_UID (str, optional). Example value: 3628d69-ced9-4f43-a8cd-9954344039a8

class metanames.PipelineMetanames[source]

Set of MetaNames for pipelines.

Available MetaNames:

  • NAME (str, required). Example value: Hand-written Digit Recognition

  • DESCRIPTION (str, optional). Example value: Hand-written Digit Recognition training

  • SPACE_UID (str, optional). Example value: 3c1ce536-20dc-426e-aac7-7284cf3befc6

  • TAGS (list, optional). Example value: [{'value': 'dsx-project.<project-guid>', 'description': 'DSX project guid'}]. Schema: [{'value(required)': 'string', 'description(optional)': 'string'}]

  • DOCUMENT (dict, optional). Example value: {'doc_type': 'pipeline', 'version': '2.0', 'primary_pipeline': 'dlaas_only', 'pipelines': [{'id': 'dlaas_only', 'runtime_ref': 'hybrid', 'nodes': [{'id': 'training', 'type': 'model_node', 'op': 'dl_train', 'runtime_ref': 'DL', 'inputs': [], 'outputs': [], 'parameters': {'name': 'tf-mnist', 'description': 'Simple MNIST model implemented in TF', 'command': 'python3 convolutional_network.py --trainImagesFile ${DATA_DIR}/train-images-idx3-ubyte.gz --trainLabelsFile ${DATA_DIR}/train-labels-idx1-ubyte.gz --testImagesFile ${DATA_DIR}/t10k-images-idx3-ubyte.gz --testLabelsFile ${DATA_DIR}/t10k-labels-idx1-ubyte.gz --learningRate 0.001 --trainingIters 6000', 'compute': {'name': 'k80', 'nodes': 1}, 'training_lib_href': '/v4/libraries/64758251-bt01-4aa5-a7ay-72639e2ff4d2/content'}, 'target_bucket': 'wml-dev-results'}]}]}. Schema: {'doc_type(required)': 'string', 'version(required)': 'string', 'primary_pipeline(required)': 'string', 'pipelines(required)': [{'id(required)': 'string', 'runtime_ref(required)': 'string', 'nodes(required)': [{'id': 'string', 'type': 'string', 'inputs': 'list', 'outputs': 'list', 'parameters': {'training_lib_href': 'string'}}]}]}

  • CUSTOM (dict, optional). Example value: {'field1': 'value1'}

  • IMPORT (dict, optional). Example value: {'connection': {'endpoint_url': 'https://s3-api.us-geo.objectstorage.softlayer.net', 'access_key_id': '***', 'secret_access_key': '***'}, 'location': {'bucket': 'train-data', 'path': 'training_path'}, 'type': 's3'}. Schema: {'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'endpoint_url(required)': 'string', 'access_key_id(required)': 'string', 'secret_access_key(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}}

  • RUNTIMES (list, optional). Example value: [{'id': 'id', 'name': 'tensorflow', 'version': '1.13-py3'}]

  • COMMAND (str, optional). Example value: convolutional_network.py --trainImagesFile train-images-idx3-ubyte.gz --trainLabelsFile train-labels-idx1-ubyte.gz --testImagesFile t10k-images-idx3-ubyte.gz --testLabelsFile t10k-labels-idx1-ubyte.gz --learningRate 0.001 --trainingIters 6000

  • LIBRARY_UID (str, optional). Example value: fb9752c9-301a-415d-814f-cf658d7b856a

  • COMPUTE (dict, optional). Example value: {'name': 'k80', 'nodes': 1}

script

class client.Script(client)[source]

Store and manage your scripts assets.

ConfigurationMetaNames = <ibm_watson_machine_learning.metanames.ScriptMetaNames object>

MetaNames for script Assets creation.

create_revision(script_uid)[source]

Creates a revision for the given script. Revisions are immutable once created. The metadata and attachment of the script specified by script_uid are used to create the revision.

Parameters

script_uid (str) – Script ID. Mandatory

Returns

stored script revisions metadata

Return type

dict

>>> script_revision = client.scripts.create_revision(script_uid)
delete(asset_uid)[source]

Delete a stored script asset.

Parameters

Important

  1. asset_uid: Unique Id of script asset

    type: str

Output

Important

returns: status (“SUCCESS” or “FAILED”)

return type: str

Example

>>> client.script.delete(asset_uid)
download(asset_uid, filename, rev_uid=None)[source]

Download the content of a script asset.

Parameters

Important

  1. asset_uid: The Unique Id of the script asset to be downloaded

    type: str

  2. filename: filename to be used for the downloaded file

    type: str

  3. rev_uid: Revision id

    type: str

Output

returns: Path to the downloaded asset content

return type: str

Example

>>> client.script.download(asset_uid,"script_file.zip")
get_details(script_uid)[source]

Get script asset details.

Parameters

Important

  1. script_uid: Unique id of script

    type: str

Output

Important

returns: Metadata of the stored script asset

return type: dict

Example

>>> script_details = client.scripts.get_details(script_uid)
static get_href(asset_details)[source]

Get url of stored scripts asset.

Parameters

Important

  1. asset_details: stored script details

    type: dict

Output

Important

returns: href of stored script asset

return type: str

Example

>>> asset_details = client.script.get_details(asset_uid)
>>> asset_href = client.script.get_href(asset_details)
static get_id(asset_details)[source]

Get Unique Id of stored script asset.

Parameters

Important

  1. asset_details: Metadata of the stored script asset

    type: dict

Output

Important

returns: Unique Id of stored script asset

return type: str

Example

>>> asset_uid = client.script.get_id(asset_details)
get_revision_details(script_uid=None, rev_uid=None)[source]

Get metadata of script_uid revision.

Parameters
  • script_uid (str) – Script ID. Mandatory

  • rev_uid (int) – Revision ID. If this parameter is not provided, the latest revision is returned if it exists; otherwise an error is raised

Returns

stored script(s) metadata

Return type

dict

A way you might use me is:

>>> script_details = client.scripts.get_revision_details(script_uid, rev_uid)
static get_uid(asset_details)[source]

Get Unique Id of stored script asset. This method is deprecated. Use ‘get_id(asset_details)’ instead

Parameters

Important

  1. asset_details: Metadata of the stored script asset

    type: dict

Output

Important

returns: Unique Id of stored script asset

return type: str

Example

>>> asset_uid = client.script.get_uid(asset_details)
list(limit=None)[source]

List stored scripts. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all scripts in a table format.

return type: None

Example

>>> client.script.list()
list_revisions(script_uid, limit=None)[source]

List all revisions for the given script uid.

Parameters
  • script_uid (str) – Stored script ID.

  • limit (int) – limit number of fetched records (optional)

Returns

stored script revisions details

Return type

table

>>> client.scripts.list_revisions(script_uid)
store(meta_props, file_path)[source]

Creates a Scripts asset and uploads content to it.

Parameters

Important

  1. meta_props: meta data of the script asset configuration

    type: dict

  2. file_path: Path to the content file to be uploaded

    type: str

Output

Important

returns: metadata of the stored Scripts asset

return type: dict

Example
>>> metadata = {
>>>        client.script.ConfigurationMetaNames.NAME: 'my first script',
>>>        client.script.ConfigurationMetaNames.DESCRIPTION: 'description of the script',
>>>        client.script.ConfigurationMetaNames.SOFTWARE_SPEC_UID: '0cdb0f1e-5376-4f4d-92dd-da3b69aa9bda'
>>>    }
>>>
>>> asset_details = client.scripts.store(meta_props=metadata,file_path="/path/to/file")
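A possible follow-up (sketch): retrieve the new asset's id and download its content (the filename is illustrative):

>>> script_uid = client.script.get_id(asset_details)
>>> client.script.download(script_uid, "my_script.py")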
update(script_uid, meta_props=None, file_path=None)[source]

Update script with either metadata or attachment or both.

Parameters

script_uid (str) – Script UID

Returns

updated metadata of script

Return type

dict

A way you might use me is:

>>> script_details = client.script.update(script_uid, meta_props, file_path)
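For example, a sketch that renames the script and replaces its attachment (the path is illustrative):

>>> metadata = {
>>>     client.script.ConfigurationMetaNames.NAME: 'updated script'
>>> }
>>> script_details = client.script.update(script_uid, meta_props=metadata, file_path="/path/to/new_file.py")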
class metanames.ScriptMetaNames[source]

Set of MetaNames for Script Specifications.

Available MetaNames:

  • NAME (str, required). Example value: Python script

  • DESCRIPTION (str, optional). Example value: my_description

  • SOFTWARE_SPEC_UID (str, required). Example value: 53628d69-ced9-4f43-a8cd-9954344039a8

service_instance

class client.ServiceInstance(client)[source]

Connect, get details and check usage of your Watson Machine Learning service instance.

get_api_key()[source]

Get api_key of Watson Machine Learning service.

Output

Important

returns: api_key

return type: str

Example

>>> api_key = client.service_instance.get_api_key()

get_details()[source]

Get information about your Watson Machine Learning instance.

Output

Important

returns: metadata of service instance

return type: dict

Example

>>> instance_details = client.service_instance.get_details()
get_instance_id()[source]

Get instance id of your Watson Machine Learning service.

Output

Important

returns: instance id

return type: str

Example

>>> instance_id = client.service_instance.get_instance_id()
get_password()[source]

Get password for your Watson Machine Learning service.

Output

Important

returns: password

return type: str

Example

>>> password = client.service_instance.get_password()
get_url()[source]

Get instance url of your Watson Machine Learning service.

Output

Important

returns: instance url

return type: str

Example

>>> url = client.service_instance.get_url()
get_username()[source]

Get username for your Watson Machine Learning service.

Output

Important

returns: username

return type: str

Example

>>> username = client.service_instance.get_username()

set

class client.Set(client)[source]

Set a space_id/project_id to be used in the subsequent actions.

default_project(project_id)[source]

Set a project ID.

Parameters

Important

  1. project_id: GUID of the project

    type: str

Output

Important

returns: “SUCCESS”

return type: str

Example

>>> client.set.default_project(project_id)
default_space(space_uid)[source]

Set a space ID.

Parameters

Important

  1. space_uid: GUID of the space to be used

    type: str

Output

Important

returns: “SUCCESS”. The space set here is used for subsequent requests.

return type: str (“SUCCESS”/“FAILURE”)

Example

>>> client.set.default_space(space_uid)
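A typical flow (sketch) is to set the default space right after creating the client so that subsequent repository calls resolve against it:

>>> client.set.default_space(space_uid)
>>> client.repository.list()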

spaces

class client.PlatformSpaces(client)[source]

Store and manage your spaces.

MemberMetaNames = <ibm_watson_machine_learning.metanames.SpacesPlatformMemberMetaNames object>

MetaNames for spaces creation.

create_member(space_id, meta_props)[source]

Create a member within a space.

Parameters

Important

  1. meta_props: meta data of the member configuration. To see available meta names use:

    >>> client.spaces.MemberMetaNames.get()
    

    type: dict

Output

Important

returns: metadata of the stored member

return type: dict

Note

  • ‘role’ can be any one of the following: “viewer”, “editor”, “admin”

  • ‘type’ can be any one of the following: “user”, “service”

  • ‘id’ can be either a service-ID or an IAM-userID

Example

>>> metadata = {
>>>  client.spaces.MemberMetaNames.MEMBERS: [{"id":"IBMid-100000DK0B", "type": "user", "role": "admin" }]
>>> }
>>> members_details = client.spaces.create_member(space_id=space_id, meta_props=metadata)
>>> metadata = {
>>>  client.spaces.MemberMetaNames.MEMBERS: [{"id":"iam-ServiceId-5a216e59-6592-43b9-8669-625d341aca71", "type": "service", "role": "admin" }]
>>> }
>>> members_details = client.spaces.create_member(space_id=space_id, meta_props=metadata)
delete(space_id)[source]

Delete a stored space.

Parameters

Important

  1. space_id: space ID

    type: str

Output

Important

returns: status (“SUCCESS” or “FAILED”)

return type: str

Example

>>> client.spaces.delete(space_id)
delete_member(space_id, member_id)[source]

Delete a member associated with a space.

Parameters

Important

  1. space_id: space UID

    type: str

  2. member_id: member UID

    type: str

Output

Important

returns: status (“SUCCESS” or “FAILED”)

return type: str

Example

>>> client.spaces.delete_member(space_id,member_id)
get_details(space_id=None, limit=None)[source]

Get metadata of stored space(s)

Parameters

Important

  1. space_id: Space ID

    type: str

  2. limit: Applicable when space_id is not provided. If space_id is provided, this will be ignored

    type: int

Output

Important

returns: metadata of stored space(s)

return type: dict

Example

>>> space_details = client.spaces.get_details(space_id)
static get_id(space_details)[source]

Get space_id from space details.

Parameters

Important

  1. space_details: Metadata of the stored space

    type: dict

Output

Important

returns: space ID

return type: str

Example

>>> space_details = client.spaces.store(meta_props)
>>> space_id = client.spaces.get_id(space_details)
get_member_details(space_id, member_id)[source]

Get metadata of member associated with a space

Parameters

Important

  1. space_id: space ID

    type: str

  2. member_id: member ID

    type: str

Output

Important

returns: metadata of member of a space

return type: dict

Example

>>> member_details = client.spaces.get_member_details(space_id, member_id)
static get_uid(space_details)[source]

Get Unique Id of the space. This method is deprecated. Use ‘get_id(space_details)’ instead

Parameters

Important

  1. space_details: Metadata of the space

    type: dict

Output

Important

returns: Unique Id of space

return type: str

Example

>>> space_details = client.spaces.store(meta_props)
>>> space_uid = client.spaces.get_uid(space_details)
list(limit=None, member=None, roles=None)[source]

List stored spaces. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. limit: limit number of fetched records

    type: int

  2. member: Filters the result list to only include spaces where the user with a matching user id is a member

    type: string

  3. roles: Filters the result list to only include spaces where the member has one of the specified roles

    type: string

Output

Important

This method only prints the list of all spaces in a table format.

return type: None

Example

>>> client.spaces.list()
list_members(space_id, limit=None, identity_type=None, role=None, state=None)[source]

List stored members of a space. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. space_id: space ID

    type: str

  2. limit: limit number of fetched records

    type: int

  3. identity_type: Find the member by type

    type: string

  4. role: Find the member by role

    type: string

  5. state: Find the member by state

    type: string

Output

Important

This method only prints the list of all members associated with a space in a table format.

return type: None

Example

>>> client.spaces.list_members(space_id)
store(meta_props, background_mode=True)[source]

Create a space. The instance associated with the space via COMPUTE will be used for billing purposes on cloud. Note that STORAGE and COMPUTE are applicable only for cloud.

Parameters

Important

  1. meta_props: meta data of the space configuration. To see available meta names use:

    >>> client.spaces.ConfigurationMetaNames.get()
    

    type: dict

  2. background_mode: Indicator if store() method will run in background (async) or foreground (sync). Default: True

    type: bool

Output

Important

returns: metadata of the stored space

return type: dict

Example

>>> metadata = {
>>>  client.spaces.ConfigurationMetaNames.NAME: 'my_space',
>>>  client.spaces.ConfigurationMetaNames.DESCRIPTION: 'spaces',
>>>  client.spaces.ConfigurationMetaNames.STORAGE: {"resource_crn": "provide crn of the COS storage"},
>>>  client.spaces.ConfigurationMetaNames.COMPUTE: {"name": "test_instance",
>>>                                                 "crn": "provide crn of the instance"}
>>> }
>>> spaces_details = client.spaces.store(meta_props=metadata)
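A possible follow-up (sketch): read the new space's id and make it the default for subsequent requests:

>>> space_id = client.spaces.get_id(spaces_details)
>>> client.set.default_space(space_id)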
update(space_id, changes)[source]

Updates existing space metadata. ‘STORAGE’ cannot be updated. STORAGE and COMPUTE are applicable only for cloud.

Parameters

Important

  1. space_id: ID of the space whose definition should be updated

    type: str

  2. changes: elements which should be changed, where keys are ConfigurationMetaNames

    type: dict

Output

Important

returns: metadata of updated space

return type: dict

Example

>>> metadata = {
>>> client.spaces.ConfigurationMetaNames.NAME:"updated_space",
>>> client.spaces.ConfigurationMetaNames.COMPUTE: {"name": "test_instance",
>>>                                                "crn": "v1:staging:public:pm-20-dev:us-south:a/09796a1b4cddfcc9f7fe17824a68a0f8:f1026e4b-77cf-4703-843d-c9984eac7272::"
>>>                                               }
>>> }
>>> space_details = client.spaces.update(space_id, changes=metadata)
update_member(space_id, member_id, changes)[source]

Updates existing member metadata.

Parameters

Important

  1. space_id: ID of space

    type: str

  2. member_id: ID of member that needs to be updated

    type: str

  3. changes: elements which should be changed, where keys are ConfigurationMetaNames

    type: dict

Output

Important

returns: metadata of updated member

return type: dict

Example

>>> metadata = {
>>>  client.spaces.MemberMetaNames.MEMBER: {"role": "editor"}
>>> }
>>> member_details = client.spaces.update_member(space_id, member_id, changes=metadata)
class metanames.SpacesPlatformMetaNames[source]

Set of MetaNames for Platform Spaces Specs.

Available MetaNames:

  • NAME (str, required). Example value: my_space

  • DESCRIPTION (str, optional). Example value: my_description

  • STORAGE (dict, optional). Example value: {'type': 'bmcos_object_storage', 'resource_crn': '', 'delegated(optional)': 'false'}

  • COMPUTE (dict, optional). Example value: {'name': 'name', 'crn': 'crn of the instance'}

class metanames.SpacesPlatformMemberMetaNames[source]

Set of MetaNames for Platform Spaces Member Specs.

Available MetaNames:

  • MEMBERS (list, optional). Example value: [{'id': 'iam-id1', 'role': 'editor', 'type': 'user', 'state': 'active'}, {'id': 'iam-id2', 'role': 'viewer', 'type': 'user', 'state': 'active'}]. Schema: [{'id(required)': 'string', 'role(required)': 'string', 'type(required)': 'string', 'state(optional)': 'string'}]

  • MEMBER (dict, optional). Example value: {'id': 'iam-id1', 'role': 'editor', 'type': 'user', 'state': 'active'}

software_specifications

class client.SwSpec(client)[source]

Store and manage your software specs.

ConfigurationMetaNames = <ibm_watson_machine_learning.metanames.SwSpecMetaNames object>

MetaNames for Software Specification creation.

add_package_extension(sw_spec_uid, pkg_extn_id)[source]

Add a package extension to software specifications existing metadata.

Parameters

Important

  1. sw_spec_uid: Unique Id of software specification which should be updated

    type: str

  2. pkg_extn_id: Unique Id of the package extension which needs to be added to the software specification

    type: str

Example

>>> client.software_specifications.add_package_extension(sw_spec_uid, pkg_extn_id)
delete(sw_spec_uid)[source]

Delete a software specification.

Parameters

Important

  1. sw_spec_uid: Unique Id of software specification

    type: str

Output

Important

returns: status (“SUCCESS” or “FAILED”)

return type: str

Example

>>> client.software_specifications.delete(sw_spec_uid)
delete_package_extension(sw_spec_uid, pkg_extn_id)[source]

Delete a package extension from software specifications existing metadata.

Parameters

Important

  1. sw_spec_uid: Unique Id of software specification which should be updated

    type: str

  2. pkg_extn_id: Unique Id of the package extension which needs to be deleted from the software specification

    type: str

Example

>>> client.software_specifications.delete_package_extension(sw_spec_uid, pkg_extn_id)

get_details(sw_spec_uid=None)[source]

Get software specification details.

Parameters

Important

  1. sw_spec_uid: Unique Id of the software specification (optional)

    type: str

Output

Important

returns: metadata of the stored software specification(s)

return type: dict

Example

>>> sw_spec_details = client.software_specifications.get_details(sw_spec_uid)
static get_href(sw_spec_details)[source]

Get url of software specification.

Parameters

Important

  1. sw_spec_details: software specification details

    type: dict

Output

Important

returns: href of software specification

return type: str

Example

>>> sw_spec_details = client.software_specifications.get_details(sw_spec_uid)
>>> sw_spec_href = client.software_specifications.get_href(sw_spec_details)
static get_id(sw_spec_details)[source]

Get Unique Id of software specification.

Parameters

Important

  1. sw_spec_details: Metadata of the software specification

    type: dict

Output

Important

returns: Unique Id of software specification

return type: str

Example

>>> asset_uid = client.software_specifications.get_id(sw_spec_details)
get_id_by_name(sw_spec_name)[source]

Get Unique Id of software specification.

Parameters

Important

  1. sw_spec_name: Name of the software specification

    type: str

Output

Important

returns: Unique Id of software specification

return type: str

Example

>>> asset_uid = client.software_specifications.get_id_by_name(sw_spec_name)
static get_uid(sw_spec_details)[source]

Get Unique Id of software specification. Deprecated!! Use get_id(sw_spec_details) instead

Parameters

Important

  1. sw_spec_details: Metadata of the software specification

    type: dict

Output

Important

returns: Unique Id of software specification

return type: str

Example

>>> asset_uid = client.software_specifications.get_uid(sw_spec_details)
get_uid_by_name(sw_spec_name)[source]

Get Unique Id of software specification. Deprecated!! Use get_id_by_name(self, sw_spec_name) instead

Parameters

Important

  1. sw_spec_name: Name of the software specification

    type: str

Output

Important

returns: Unique Id of software specification

return type: str

Example

>>> asset_uid = client.software_specifications.get_uid_by_name(sw_spec_name)
list()[source]

List software specifications.

Output

Important

This method only prints the list of all software specifications in a table format.

return type: None

Example

>>> client.software_specifications.list()
store(meta_props)[source]

Create a software specification.

Parameters

Important

  1. meta_props: meta data of the software specification configuration. To see available meta names use:

    >>> client.software_specifications.ConfigurationMetaNames.get()
    

    type: dict

Output

Important

returns: metadata of the stored software specification

return type: dict

Example

>>> meta_props = {
>>>    client.software_specifications.ConfigurationMetaNames.NAME: "skl_pipeline_heart_problem_prediction",
>>>    client.software_specifications.ConfigurationMetaNames.DESCRIPTION: "description scikit-learn_0.20",
>>>    client.software_specifications.ConfigurationMetaNames.PACKAGE_EXTENSIONS_UID: [],
>>>    client.software_specifications.ConfigurationMetaNames.SOFTWARE_CONFIGURATIONS: {},
>>>    client.software_specifications.ConfigurationMetaNames.BASE_SOFTWARE_SPECIFICATION_ID: "guid"
>>> }
>>> sw_spec_details = client.software_specifications.store(meta_props)
class metanames.SwSpecMetaNames[source]

Set of MetaNames for Software Specifications Specs.

Available MetaNames:

  • NAME (str, required). Example value: Python 3.6 with pre-installed ML package

  • DESCRIPTION (str, optional). Example value: my_description

  • PACKAGE_EXTENSIONS (list, optional). Example value: [{'guid': 'value'}]

  • SOFTWARE_CONFIGURATION (dict, optional). Example value: {'platform': {'name': 'python', 'version': '3.6'}}. Schema: {'platform(required)': 'string'}

  • BASE_SOFTWARE_SPECIFICATION (dict, required). Example value: {'guid': 'BASE_SOFTWARE_SPECIFICATION_ID'}

training

class client.Training(client)[source]

Train new models.

cancel(training_uid, hard_delete=False)[source]

Cancel a training which is currently running and remove it. This method can also be used to delete the metadata details of a completed or canceled training run when the hard_delete parameter is set to True.

Parameters

Important

  1. training_uid: Training UID

    type: str

  2. hard_delete: specify True or False.

    True - delete the completed or canceled training run. False - cancel the currently running training run. Default value is False.

    type: bool

Output

Important

returns: status (“SUCCESS” or “FAILED”)

return type: str

Example

>>> client.training.cancel(training_uid)
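To delete the metadata details of a completed or canceled run instead, the hard_delete flag described above can be set:

>>> client.training.cancel(training_uid, hard_delete=True)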
get_details(training_uid=None, limit=None)[source]

Get metadata of training(s). If training_uid is not specified, all trainings metadata is returned.

Parameters

Important

  1. training_uid: Unique Id of Training (optional)

    type: str

  2. limit: limit number of fetched records (optional)

    type: int

Output

Important

returns: metadata of training(s)

return type: dict. The output can be {“resources”: [dict]} or a dict

Note

If training_uid is not specified, all trainings metadata is fetched

Example

>>> training_run_details = client.training.get_details(training_uid)
>>> training_runs_details = client.training.get_details()
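
The optional limit parameter can also be passed when fetching all trainings; a minimal sketch (the value 50 is an arbitrary illustration):

>>> training_runs_details = client.training.get_details(limit=50)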
static get_href(training_details)[source]

Get training_href from training details.

Parameters

Important

  1. training_details: Metadata of the training created

    type: dict

Output

Important

returns: training href

return type: str

Example

>>> training_details = client.training.get_details(training_uid)
>>> run_url = client.training.get_href(training_details)
static get_id(training_details)[source]

Get training_id from training details.

Parameters

Important

  1. training_details: Metadata of the training created

    type: dict

Output

Important

returns: Unique id of training

return type: str

Example

>>> training_details = client.training.get_details(training_id)
>>> training_id = client.training.get_id(training_details)
get_metrics(training_uid)[source]

Get metrics.

Parameters

Important

  1. training_uid: training UID

    type: str

Output

Important

returns: Metrics of a training run

return type: list of dict

Example

>>> training_status = client.training.get_metrics(training_uid)
get_status(training_uid)[source]

Get the status of a training created.

Parameters

Important

  1. training_uid: training UID

    type: str

Output

Important

returns: training_status

return type: dict

Example

>>> training_status = client.training.get_status(training_uid)
static get_uid(training_details)[source]

Get training_uid from training details.

Parameters

Important

  1. training_details: Metadata of the training created

    type: dict

Output

Important

returns: Unique id of training

return type: str

Example

>>> training_details = client.training.get_details(training_uid)
>>> training_uid = client.training.get_uid(training_details)
list(limit=None)[source]

List stored trainings. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all trainings in a table format.

return type: None

Example

>>> client.training.list()
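
A minimal sketch of the limit parameter documented above (the value 100 is an arbitrary illustration):

>>> client.training.list(limit=100)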
list_intermediate_models(training_uid)[source]

List the intermediate_models.

Parameters

Important

  1. training_uid: Training GUID

    type: str

Output

Important

This method only prints the list of all intermediate_models associated with an AUTOAI training in a table format.

return type: None

Note

This method is not supported for IBM Cloud Pak® for Data.

Example

>>> client.training.list_intermediate_models(training_uid)
list_subtrainings(training_uid)[source]

List the sub-trainings.

Parameters

Important

  1. training_uid: Training GUID

    type: str

Output

Important

This method only prints the list of all sub-trainings associated with a training in a table format.

return type: None

Example

>>> client.training.list_subtrainings(training_uid)
monitor_logs(training_uid)[source]

Monitor the logs of a training created.

Parameters

Important

  1. training_uid: Training UID

    type: str

Output

Important

returns: None

return type: None

Note

This method prints the training logs. This method is not supported for IBM Cloud Pak® for Data.

Example

>>> client.training.monitor_logs(training_uid)
monitor_metrics(training_uid)[source]

Monitor the metrics of a training created.

Parameters

Important

  1. training_uid: Training UID

    type: str

Output

Important

returns: None

return type: None

Note

This method prints the training metrics. This method is not supported for IBM Cloud Pak® for Data.

Example

>>> client.training.monitor_metrics(training_uid)
run(meta_props, asynchronous=True)[source]

Create a new Machine Learning training.

Parameters

Important

  1. meta_props: meta data of the training configuration. To see available meta names use:

    >>> client.training.ConfigurationMetaNames.show()
    

    type: dict

  2. asynchronous:
    • True - training job is submitted and progress can be checked later.

    • False - method will wait till job completion and print training stats.

    type: bool

Output

Important

returns: Metadata of the training created

return type: dict

Examples

Example meta_props for Training run creation in IBM Cloud Pak® for Data version 3.0.1 or above:

>>> metadata = {
>>>  client.training.ConfigurationMetaNames.NAME: 'Hand-written Digit Recognition',
>>>  client.training.ConfigurationMetaNames.DESCRIPTION: 'Hand-written Digit Recognition Training',
>>>  client.training.ConfigurationMetaNames.PIPELINE: {
>>>      "id": "4cedab6d-e8e4-4214-b81a-2ddb122db2ab",
>>>      "rev": "12",
>>>      "model_type": "string",
>>>      "data_bindings": [
>>>          {
>>>              "data_reference_name": "string",
>>>              "node_id": "string"
>>>          }
>>>      ],
>>>      "nodes_parameters": [
>>>          {
>>>              "node_id": "string",
>>>              "parameters": {}
>>>          }
>>>      ],
>>>      "hardware_spec": {
>>>          "id": "4cedab6d-e8e4-4214-b81a-2ddb122db2ab",
>>>          "rev": "12",
>>>          "name": "string",
>>>          "num_nodes": "2"
>>>      }
>>>  },
>>>  client.training.ConfigurationMetaNames.TRAINING_DATA_REFERENCES: [{
>>>      'type': 's3',
>>>      'connection': {},
>>>      'location': {
>>>          'href': 'v2/assets/asset1233456'
>>>      },
>>>      'schema': '{"id": "t1", "name": "Tasks", "fields": [{"name": "duration", "type": "number"}]}'
>>>  }],
>>>  client.training.ConfigurationMetaNames.TRAINING_RESULTS_REFERENCE: {
>>>      'id': 'string',
>>>      'connection': {
>>>          'endpoint_url': 'https://s3-api.us-geo.objectstorage.service.networklayer.com',
>>>          'access_key_id': '*',
>>>          'secret_access_key': '***'
>>>      },
>>>      'location': {
>>>          'bucket': 'wml-dev-results',
>>>          'path': 'path'
>>>      },
>>>      'type': 's3'
>>>  }
>>> }

NOTE: Only one of the following values can be provided for training:

  • client.training.ConfigurationMetaNames.EXPERIMENT

  • client.training.ConfigurationMetaNames.PIPELINE

  • client.training.ConfigurationMetaNames.MODEL_DEFINITION

Example meta_prop values for training run creation in other versions:

>>> metadata = {
>>>  client.training.ConfigurationMetaNames.NAME: 'Hand-written Digit Recognition',
>>>  client.training.ConfigurationMetaNames.TRAINING_DATA_REFERENCES: [{
>>>          'connection': {
>>>              'endpoint_url': 'https://s3-api.us-geo.objectstorage.service.networklayer.com',
>>>              'access_key_id': '***',
>>>              'secret_access_key': '***'
>>>          },
>>>          'source': {
>>>              'bucket': 'wml-dev',
>>>          },
>>>          'type': 's3'
>>>      }],
>>> client.training.ConfigurationMetaNames.TRAINING_RESULTS_REFERENCE: {
>>>          'connection': {
>>>              'endpoint_url': 'https://s3-api.us-geo.objectstorage.service.networklayer.com',
>>>              'access_key_id': '***',
>>>              'secret_access_key': '***'
>>>          },
>>>          'target': {
>>>              'bucket': 'wml-dev-results',
>>>          },
>>>          'type': 's3'
>>>      },
>>> client.training.ConfigurationMetaNames.PIPELINE_UID : "/v4/pipelines/<PIPELINE-ID>"
>>> }
>>> training_details = client.training.run(meta_props=metadata)
>>> training_uid = client.training.get_uid(training_details)
class metanames.TrainingConfigurationMetaNames[source]

Set of MetaNames for trainings.

Available MetaNames:

  • TRAINING_DATA_REFERENCES (list, required). Example value: [{'connection': {'endpoint_url': 'https://s3-api.us-geo.objectstorage.softlayer.net', 'access_key_id': '***', 'secret_access_key': '***'}, 'location': {'bucket': 'train-data', 'path': 'training_path'}, 'type': 's3', 'schema': {'id': '1', 'fields': [{'name': 'x', 'type': 'double', 'nullable': 'False'}]}}]. Schema: [{'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'endpoint_url(required)': 'string', 'access_key_id(required)': 'string', 'secret_access_key(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}, 'schema(optional)': {'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}}]

  • TRAINING_RESULTS_REFERENCE (dict, required). Example value: {'connection': {'endpoint_url': 'https://s3-api.us-geo.objectstorage.softlayer.net', 'access_key_id': '***', 'secret_access_key': '***'}, 'location': {'bucket': 'test-results', 'path': 'training_path'}, 'type': 's3'}. Schema: {'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'endpoint_url(required)': 'string', 'access_key_id(required)': 'string', 'secret_access_key(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}}

  • TAGS (list, optional). Example value: [{'value': 'string', 'description': 'string'}]. Schema: [{'value(required)': 'string', 'description(optional)': 'string'}]

  • PIPELINE_UID (str, optional). Example value: 3c1ce536-20dc-426e-aac7-7284cf3befc6

  • EXPERIMENT_UID (str, optional). Example value: 3c1ce536-20dc-426e-aac7-7284cf3befc6

  • PIPELINE_DATA_BINDINGS (str, optional). Example value: [{'data_reference_name': 'string', 'node_id': 'string'}]. Schema: [{'data_reference_name(required)': 'string', 'node_id(required)': 'string'}]

  • PIPELINE_NODE_PARAMETERS (dict, optional). Example value: [{'node_id': 'string', 'parameters': {}}]. Schema: [{'node_id(required)': 'string', 'parameters(required)': 'dict'}]

  • SPACE_UID (str, optional). Example value: 3c1ce536-20dc-426e-aac7-7284cf3befc6

  • TRAINING_LIB (dict, optional). Example value: {'href': '/v4/libraries/3c1ce536-20dc-426e-aac7-7284cf3befc6', 'compute': {'name': 'k80', 'nodes': 0}, 'runtime': {'href': '/v4/runtimes/3c1ce536-20dc-426e-aac7-7284cf3befc6'}, 'command': 'python3 convolutional_network.py', 'parameters': {}}. Schema: {'href(required)': 'string', 'type(required)': 'string', 'runtime(optional)': {'href': 'string'}, 'command(optional)': 'string', 'parameters(optional)': 'dict'}

  • TRAINING_LIB_UID (str, optional). Example value: 3c1ce536-20dc-426e-aac7-7284cf3befc6

  • TRAINING_LIB_MODEL_TYPE (str, optional). Example value: 3c1ce536-20dc-426e-aac7-7284cf3befc6

  • TRAINING_LIB_RUNTIME_UID (str, optional). Example value: 3c1ce536-20dc-426e-aac7-7284cf3befc6

  • TRAINING_LIB_PARAMETERS (dict, optional). Example value: 3c1ce536-20dc-426e-aac7-7284cf3befc6

  • COMMAND (str, optional). Example value: 3c1ce536-20dc-426e-aac7-7284cf3befc6

  • COMPUTE (dict, optional). Example value: 3c1ce536-20dc-426e-aac7-7284cf3befc6

  • PIPELINE_MODEL_TYPE (str, optional). Example value: tensorflow_1.1.3-py3

API for IBM Cloud Pak® for Data and IBM Watson Machine Learning Server

To use the IBM Watson Machine Learning Python client APIs in IBM Cloud Pak® for Data or IBM Watson Machine Learning Server, users must create an instance of APIClient with authentication details.

Authentication

Authentication for IBM Cloud Pak® for Data (CP4D)

IBM Cloud Pak® for Data version 3.0.0 or 3.0.1 users can create an IBM Watson Machine Learning Python client by providing the credentials as given below:

Example of creating the client using user credentials:

from ibm_watson_machine_learning import APIClient
wml_credentials = {
                   "url": "<URL>",
                   "username": "<USERNAME>",
                   "password" : "<PASSWORD>",
                   "instance_id": "wml_local",
                   "version" : "3.0.0"

                  }

client = APIClient(wml_credentials)

Example of creating the client using token:

In IBM Cloud Pak® for Data version 3.0.0 or 3.0.1, users can authenticate with the token set in the notebook environment.

access_token = os.environ['USER_ACCESS_TOKEN']
from ibm_watson_machine_learning import APIClient

wml_credentials = {
                   "url": "https://us-south.ml.cloud.ibm.com",
                   "token": access_token,
                   "instance_id": "wml_local",
                   "version" : "3.0.0"
                  }
client = APIClient(wml_credentials)

Note

  • The version value should be set to “3.0.0” for IBM Cloud Pak® for Data version 3.0.0 users.

  • The version value should be set to “3.0.1” for IBM Cloud Pak® for Data version 3.0.1 users.

  • Setting a default space id or project id is mandatory. Refer to the client.set.default_space() and client.set.default_project() APIs in this document for more examples.
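
For instance, a minimal sketch of setting the default space or project right after creating the client (space_uid and project_id are hypothetical placeholders for your own ids):

client.set.default_space(space_uid)
# or, when working inside a project:
client.set.default_project(project_id)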

Authentication for WML Server

In IBM Watson Machine Learning Server, users can authenticate with a token set in the notebook environment or with user credentials.

Example of creating the client using user credentials:

from ibm_watson_machine_learning import APIClient

wml_credentials = {
                   "url": "<URL>",
                   "username": "<USERNAME>",
                   "password" : "<PASSWORD>",
                   "instance_id": "wml_local",
                   "version" : "1.1"

                  }

client = APIClient(wml_credentials)

Example of creating the client using token:

access_token = os.environ['USER_ACCESS_TOKEN']
from ibm_watson_machine_learning import APIClient

wml_credentials = {
                   "url": "https://us-south.ml.cloud.ibm.com",
                   "token": access_token,
                   "instance_id": "wml_local",
                   "version" : "1.1"
                  }

client = APIClient(wml_credentials)

Note

  • The version value should be set to the corresponding WML Server product version (for example, “1.1” or “2.0”).

  • Setting a default space id or project id is mandatory. Refer to the client.set.default_space() and client.set.default_project() APIs in this document for more examples.

  • The “url” field value should include the port number as well. For example: “https://wmlserver.xxx.com:31843”.

Supported machine learning frameworks

For the list of supported machine learning frameworks (models) for IBM Cloud Pak® for Data version 3.0.0, please refer to the Watson Machine Learning documentation.

For the list of supported machine learning frameworks (models) for IBM Cloud Pak® for Data version 3.0.1, please refer to the Watson Machine Learning documentation.

For the list of supported machine learning frameworks (models) for IBM Watson Machine Learning Server, please refer to the Watson Machine Learning documentation.

Samples

Refer to the sample notebooks for usage of the Watson Machine Learning Python client with CP4D 3.0.0.

connections


class metanames.ConnectionMetaNames[source]

Set of MetaNames for Connection Specs.

Available MetaNames:

  • NAME (str, required). Example value: my_space

  • DESCRIPTION (str, optional). Example value: my_description

  • DATASOURCE_TYPE (str, required). Example value: 1e3363a5-7ccf-4fff-8022-4850a8024b68

  • PROPERTIES (dict, required). Example value: {'database': 'BLUDB', 'host': 'dashdb-txn-sbox-yp-dal09-04.services.dal.bluemix.net', 'password': 'a1b2c3d4#', 'username': 'usr21370'}

data_assets

class client.Assets(client)[source]

Store and manage your data assets.

ConfigurationMetaNames = <ibm_watson_machine_learning.metanames.AssetsMetaNames object>

MetaNames for Data Assets creation.

create(name, file_path)[source]

Creates a data asset and uploads content to it.

Parameters

Important

  1. name: Name to be given to the data asset

    type: str

  2. file_path: Path to the content file to be uploaded

    type: str

Output

Important

returns: metadata of the stored data asset

return type: dict

Example
>>> asset_details = client.data_assets.create(name="sample_asset",file_path="/path/to/file")
delete(asset_uid)[source]

Delete a stored data asset.

Parameters

Important

  1. asset_uid: Unique Id of data asset

    type: str

Output

Important

returns: status (“SUCCESS” or “FAILED”)

return type: str

Example

>>> client.data_assets.delete(asset_uid)
download(asset_uid, filename)[source]

Download the content of a data asset.

Parameters

Important

  1. asset_uid: The Unique Id of the data asset to be downloaded

    type: str

  2. filename: filename to be used for the downloaded file

    type: str

Output

returns: Path to the downloaded asset content

return type: str

Example

>>> client.data_assets.download(asset_uid,"sample_asset.csv")
get_details(asset_uid)[source]

Get data asset details.

Parameters

Important

  1. asset_uid: Unique Id of the data asset

    type: str

Output

Important

returns: metadata of the stored data asset

return type: dict

Example

>>> asset_details = client.data_assets.get_details(asset_uid)
static get_href(asset_details)[source]

Get url of stored data asset.

Parameters

Important

  1. asset_details: stored data asset details

    type: dict

Output

Important

returns: href of stored data asset

return type: str

Example

>>> asset_details = client.data_assets.get_details(asset_uid)
>>> asset_href = client.data_assets.get_href(asset_details)
static get_id(asset_details)[source]

Get Unique Id of stored data asset.

Parameters

Important

  1. asset_details: Metadata of the stored data asset

    type: dict

Output

Important

returns: Unique Id of stored data asset

return type: str

Example

>>> asset_id = client.data_assets.get_id(asset_details)
static get_uid(asset_details)[source]

Get Unique Id of stored data asset. Deprecated!! Use get_id(details) instead

Parameters

Important

  1. asset_details: Metadata of the stored data asset

    type: dict

Output

Important

returns: Unique Id of stored asset

return type: str

Example

>>> asset_uid = client.data_assets.get_uid(asset_details)
list(limit=None)[source]

List stored data assets. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all data assets in a table format.

return type: None

Example

>>> client.data_assets.list()
store(meta_props)[source]

Creates a data asset and uploads content to it.

Parameters

Important

  1. meta_props: meta data of the data asset configuration. To see available meta names use:

    >>> client.data_assets.ConfigurationMetaNames.get()
    

    type: dict

    Example

    Example for data asset creation for files :

    >>> metadata = {
    >>>  client.data_assets.ConfigurationMetaNames.NAME: 'my data assets',
    >>>  client.data_assets.ConfigurationMetaNames.DESCRIPTION: 'sample description',
    >>>  client.data_assets.ConfigurationMetaNames.DATA_CONTENT_NAME: 'sample.csv'
    >>> }
    >>> asset_details = client.data_assets.store(meta_props=metadata)
    

    Example of data asset creation using connection:

    >>> metadata = {
    >>>  client.data_assets.ConfigurationMetaNames.NAME: 'my data assets',
    >>>  client.data_assets.ConfigurationMetaNames.DESCRIPTION: 'sample description',
    >>>  client.data_assets.ConfigurationMetaNames.CONNECTION_ID: '39eaa1ee-9aa4-4651-b8fe-95d3ddae',
    >>>  client.data_assets.ConfigurationMetaNames.DATA_CONTENT_NAME: 't1/sample.csv'
    >>> }
    >>> asset_details = client.data_assets.store(meta_props=metadata)
    

    Example for data asset creation with database sources type connection:

    >>> metadata = {
    >>>  client.data_assets.ConfigurationMetaNames.NAME: 'my data assets',
    >>>  client.data_assets.ConfigurationMetaNames.DESCRIPTION: 'sample description',
    >>>  client.data_assets.ConfigurationMetaNames.CONNECTION_ID: '23eaf1ee-96a4-4651-b8fe-95d3dadfe',
    >>>  client.data_assets.ConfigurationMetaNames.DATA_CONTENT_NAME: 't1'
    >>> }
    >>> asset_details = client.data_assets.store(meta_props=metadata)
    
class metanames.AssetsMetaNames[source]

Set of MetaNames for Data Asset Specs.

Available MetaNames:

  • NAME (str, required). Example value: my_data_asset

  • DATA_CONTENT_NAME (str, required). Example value: /test/sample.csv

  • CONNECTION_ID (str, optional). Example value: 39eaa1ee-9aa4-4651-b8fe-95d3ddae

  • DESCRIPTION (str, optional). Example value: my_description

deployments

class client.Deployments(client)[source]

Deploy and score published artifacts (models and functions).

create(artifact_uid=None, meta_props=None, rev_id=None, **kwargs)[source]

Create a deployment from an artifact. An artifact is a model or function that can be deployed.

Parameters

Important

  1. artifact_uid: Published artifact UID (model or function uid)

    type: str

  2. meta_props: metaprops. To see the available list of metanames use:

    >>> client.deployments.ConfigurationMetaNames.get()
    

    type: dict

Output

Important

returns: metadata of the created deployment

return type: dict

Example
>>> meta_props = {
>>> wml_client.deployments.ConfigurationMetaNames.NAME: "SAMPLE DEPLOYMENT NAME",
>>> wml_client.deployments.ConfigurationMetaNames.ONLINE: {},
>>> wml_client.deployments.ConfigurationMetaNames.HARDWARE_SPEC : { "id":  "e7ed1d6c-2e89-42d7-aed5-8sb972c1d2b"}
>>> }
>>> deployment_details = client.deployments.create(artifact_uid, meta_props)
create_job(deployment_id, meta_props)[source]

Create an asynchronous deployment job.

Parameters

Important

  1. deployment_id: Unique Id of Deployment

    type: str

  2. meta_props: metaprops. To see the available list of metanames use:

    >>> client.deployments.ScoringMetaNames.get() or client.deployments.DecisionOptimizationMetaNames.get()
    

    type: dict

Output

Important

returns: metadata of the created async deployment job

return type: dict

Note

  • The valid payloads for scoring input are either a list of values, a pandas DataFrame, or a numpy array.

Example

>>> scoring_payload = {wml_client.deployments.ScoringMetaNames.INPUT_DATA: [{'fields': ['GENDER','AGE','MARITAL_STATUS','PROFESSION'], 'values': [['M',23,'Single','Student'],['M',55,'Single','Executive']]}]}
>>> async_job = client.deployments.create_job(deployment_id, scoring_payload)
delete(deployment_uid)[source]

Delete deployment.

Parameters

Important

  1. deployment_uid: Unique Id of Deployment

    type: str

Output

Important

returns: status (“SUCCESS” or “FAILED”)

return type: str

Example

>>> client.deployments.delete(deployment_uid)
delete_job(job_uid, hard_delete=False)[source]

Cancels a deployment job that is currently running. This method can also be used to delete metadata details of completed or canceled jobs when the hard_delete parameter is set to True.

Parameters

Important

  1. job_uid: Unique Id of deployment job which should be canceled

    type: str

  2. hard_delete: specify True or False.

    True - To delete the completed or canceled job. False - To cancel the currently running deployment job. Default value is False.

    type: Boolean

Output

Important

returns: status (“SUCCESS” or “FAILED”)

return type: str

Example

>>> client.deployments.delete_job(job_uid)
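
To delete the metadata of a completed or canceled job, the same call can be made with hard_delete set to True; a minimal sketch based on the parameter documented above:

>>> client.deployments.delete_job(job_uid, hard_delete=True)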
download(virtual_deployment_uid, filename=None)[source]

Downloads the file deployment of the specified deployment Id. Currently, the supported format is Core ML.

Parameters
  • virtual_deployment_uid ({str_type}) – Unique Id of virtual deployment

  • filename ({str_type}) – filename of downloaded archive (optional)

Returns

path to downloaded file

Return type

{str_type}
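
No example accompanies this method, so here is a minimal sketch based on the signature above (the filename is a hypothetical placeholder for the downloaded archive):

>>> downloaded_path = client.deployments.download(virtual_deployment_uid, filename='deployment_content.zip')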

get_details(deployment_uid=None, limit=None)[source]

Get information about your deployment(s). If deployment_uid is not passed, all deployment details are fetched.

Parameters

Important

  1. deployment_uid: Unique Id of Deployment (optional)

    type: str

  2. limit: limit number of fetched records (optional)

    type: int

Output

Important

returns: metadata of deployment(s)

return type: dict

dict (if deployment_uid is not None) or {“resources”: [dict]} (if deployment_uid is None)

Note

If deployment_uid is not specified, all deployments metadata is fetched

Example

>>> deployment_details = client.deployments.get_details(deployment_uid)
>>> deployment_details = client.deployments.get_details(deployment_uid=deployment_uid)
>>> deployments_details = client.deployments.get_details()
static get_download_url(deployment_details)[source]

Get deployment_download_url from deployment details.

Parameters

deployment_details (dict) – Created deployment details

Returns

deployment download URL that is used to get file deployment (for example: Core ML)

Return type

{str_type}

A way you might use me is:

>>> deployment_url = client.deployments.get_download_url(deployment)
static get_href(deployment_details)[source]

Get deployment_href from deployment details.

Parameters

Important

  1. deployment_details: Metadata of the deployment

    type: dict

Output

Important

returns: deployment href that is used to manage the deployment

return type: str

Example

>>> deployment_href = client.deployments.get_href(deployment)
static get_id(deployment_details)[source]

Get deployment id from deployment details.

Parameters

Important

  1. deployment_details: Metadata of the deployment

    type: dict

Output

Important

returns: deployment ID that is used to manage the deployment

return type: str

Example

>>> deployment_id = client.deployments.get_id(deployment)
get_job_details(job_uid=None, limit=None)[source]

Get information about your deployment job(s). If deployment job_uid is not passed, all deployment jobs details are fetched.

Parameters

Important

  1. job_uid: Unique Job ID (optional)

    type: str

  2. limit: limit number of fetched records (optional)

    type: int

Output

Important

returns: metadata of deployment job(s)

return type: dict

dict (if job_uid is not None) or {“resources”: [dict]} (if job_uid is None)

Note

If job_uid is not specified, all deployment jobs metadata associated with the deployment Id is fetched

Example

>>> deployment_details = client.deployments.get_job_details()
>>> deployments_details = client.deployments.get_job_details(job_uid=job_uid)
get_job_href(job_details)[source]

Get the href of the deployment job.

Parameters

Important

  1. job_details: metadata of the deployment job

    type: dict

Output

Important

returns: href of the deployment job

return type: str

Example

>>> job_details = client.deployments.get_job_details(job_uid=job_uid)
>>> job_href = client.deployments.get_job_href(job_details)
get_job_status(job_id)[source]

Get the status of the deployment job.

Parameters

Important

  1. job_id: Unique Id of the deployment job

    type: str

Output

Important

returns: status of the deployment job

return type: dict

Example

>>> job_status = client.deployments.get_job_status(job_uid)
get_job_uid(job_details)[source]

Get the Unique Id of the deployment job.

Parameters

Important

  1. job_details: metadata of the deployment job

    type: dict

Output

Important

returns: Unique Id of the deployment job

return type: str

Example

>>> job_details = client.deployments.get_job_details(job_uid=job_uid)
>>> job_uid = client.deployments.get_job_uid(job_details)
static get_scoring_href(deployment_details)[source]

Get scoring url from deployment details.

Parameters

Important

  1. deployment_details: Metadata of the deployment

    type: dict

Output

Important

returns: scoring endpoint url that is used for making scoring requests

return type: str

Example

>>> scoring_href = client.deployments.get_scoring_href(deployment)
static get_uid(deployment_details)[source]

Get deployment_uid from deployment details. Deprecated!! Use get_id(deployment_details) instead

Parameters

Important

  1. deployment_details: Metadata of the deployment

    type: dict

Output

Important

returns: deployment UID that is used to manage the deployment

return type: str

Example

>>> deployment_uid = client.deployments.get_uid(deployment)
list(limit=None)[source]

List deployments. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all deployments in a table format.

return type: None

Example

>>> client.deployments.list()
list_jobs(limit=None)[source]

List the async deployment jobs. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all async jobs in a table format.

return type: None

Note

  • This method lists only the async deployment jobs created for a WML deployment.

Example

>>> client.deployments.list_jobs()
score(deployment_id, meta_props)[source]

Make scoring requests against a deployed artifact.

Parameters

Important

  1. deployment_id: Unique Id of the deployment to be scored

    type: str

  2. meta_props: Meta props for scoring

    To view the list of ScoringMetaNames, use:

    >>> client.deployments.ScoringMetaNames.show()
    

    type: dict

  3. transaction_id: transaction id to be passed with records during payload logging (optional)

    type: str

Output

Important

returns: scoring result containing prediction and probability

return type: dict

Note

  • client.deployments.ScoringMetaNames.INPUT_DATA is the only metaname valid for sync scoring.

  • The valid payloads for scoring input are either a list of values, a pandas DataFrame, or a numpy array.

Example

>>> scoring_payload = {wml_client.deployments.ScoringMetaNames.INPUT_DATA:
>>>        [{'fields':
>>>            ['GENDER','AGE','MARITAL_STATUS','PROFESSION'],
>>>            'values': [
>>>                ['M',23,'Single','Student'],
>>>                ['M',55,'Single','Executive']
>>>                ]
>>>         }
>>>       ]}
>>> predictions = client.deployments.score(deployment_id, scoring_payload)
update(deployment_uid, changes)[source]

Updates existing deployment metadata. If ASSET is patched, the ‘id’ field is mandatory and a deployment with the provided asset id/rev is started. The deployment id remains the same.

Parameters

Important

  1. deployment_uid: Unique Id of deployment which should be updated

    type: str

  2. changes: elements which should be changed, where keys are ConfigurationMetaNames

    type: dict

Output

Important

returns: metadata of updated deployment

return type: dict

Example

>>> metadata = {
>>> client.deployments.ConfigurationMetaNames.NAME:"updated_Deployment",
>>> client.deployments.ConfigurationMetaNames.ASSET: { "id": "ca0cd864-4582-4732-b365-3165598dc945", "rev":"2" }
>>> }
>>>
>>> deployment_details = client.deployments.update(deployment_uid, changes=metadata)
class metanames.DeploymentMetaNames[source]

Set of MetaNames for Deployments Specs.

Available MetaNames:

  • NAME (str, optional). Example value: my_deployment

  • TAGS (list, optional). Example value: [{'value': 'dsx-project.<project-guid>', 'description': 'DSX project guid'}]. Schema: [{'value(required)': 'string', 'description(optional)': 'string'}]

  • DESCRIPTION (str, optional). Example value: my_deployment

  • CUSTOM (dict, optional). Example value: {}

  • AUTO_REDEPLOY (bool, optional). Example value: False

  • SPACE_UID (str, optional). Example value: 3c1ce536-20dc-426e-aac7-7284cf3befc6

  • COMPUTE (dict, optional). Example value: None

  • ONLINE (dict, optional). Example value: {}

  • BATCH (dict, optional). Example value: {}

  • VIRTUAL (dict, optional). Example value: {}

  • ASSET (dict, optional). Example value: {}

  • R_SHINY (dict, optional). Example value: {}

  • HYBRID_PIPELINE_HARDWARE_SPECS (list, optional). Example value: [{'id': '3342-1ce536-20dc-4444-aac7-7284cf3befc'}]

  • HARDWARE_SPEC (dict, optional). Example value: {'id': '3342-1ce536-20dc-4444-aac7-7284cf3befc'}

class metanames.ScoringMetaNames[source]

Set of MetaNames for Scoring.

Available MetaNames:

  • NAME (str, optional). Example value: jobs test

  • INPUT_DATA (list, optional). Example value: [{'fields': ['name', 'age', 'occupation'], 'values': [['john', 23, 'student']]}]. Schema: [{'name(optional)': 'string', 'id(optional)': 'string', 'fields(optional)': 'array[string]', 'values': 'array[array[string]]'}]

  • INPUT_DATA_REFERENCES (list, optional). Schema: [{'id(optional)': 'string', 'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'endpoint_url(required)': 'string', 'access_key_id(required)': 'string', 'secret_access_key(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}, 'schema(optional)': {'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}}]

  • OUTPUT_DATA_REFERENCE (dict, optional). Schema: {'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'endpoint_url(required)': 'string', 'access_key_id(required)': 'string', 'secret_access_key(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}, 'schema(optional)': {'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}}

  • EVALUATIONS_SPEC (list, optional). Example value: [{'id': 'string', 'input_target': 'string', 'metrics_names': ['auroc', 'accuracy']}]. Schema: [{'id(optional)': 'string', 'input_target(optional)': 'string', 'metrics_names(optional)': 'array[string]'}]

  • ENVIRONMENT_VARIABLES (dict, optional). Example value: {'my_env_var1': 'env_var_value1', 'my_env_var2': 'env_var_value2'}

class metanames.DecisionOptimizationMetaNames[source]

Set of MetaNames for Decision Optimization.

Available MetaNames:

  • INPUT_DATA (list, optional). Example value: [{'fields': ['name', 'age', 'occupation'], 'values': [['john', 23, 'student']]}]. Schema: [{'name(optional)': 'string', 'id(optional)': 'string', 'fields(optional)': 'array[string]', 'values': 'array[array[string]]'}]

  • INPUT_DATA_REFERENCES (list, optional). Example value: [{'fields': ['name', 'age', 'occupation'], 'values': [['john', 23, 'student']]}]. Schema: [{'name(optional)': 'string', 'id(optional)': 'string', 'fields(optional)': 'array[string]', 'values': 'array[array[string]]'}]

  • OUTPUT_DATA (list, optional). Schema: [{'name(optional)': 'string'}]

  • OUTPUT_DATA_REFERENCES (list, optional). Schema: {'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'endpoint_url(required)': 'string', 'access_key_id(required)': 'string', 'secret_access_key(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}, 'schema(optional)': {'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}}

  • SOLVE_PARAMETERS (dict, optional).

hardware_specifications

class client.HwSpec(client)[source]

Store and manage your hardware specs.

ConfigurationMetaNames = <ibm_watson_machine_learning.metanames.HwSpecMetaNames object>

MetaNames for Hardware Specification.

get_details(hw_spec_uid)[source]

Get hardware specification details.

Parameters

Important

  1. hw_spec_uid: Unique Id of the hardware spec

    type: str

Output

Important

returns: Metadata of the hardware specification

return type: dict

Example

>>> hw_spec_details = client.hardware_specifications.get_details(hw_spec_uid)
static get_href(hw_spec_details)[source]

Get url of hardware specifications.

Parameters

Important

  1. hw_spec_details: hardware specifications details

    type: dict

Output

Important

returns: href of hardware specifications

return type: str

Example

>>> hw_spec_details = client.hardware_specifications.get_details(hw_spec_uid)
>>> hw_spec_href = client.hardware_specifications.get_href(hw_spec_details)
static get_id(hw_spec_details)[source]

Get ID of hardware specifications asset.

Parameters

Important

  1. hw_spec_details: Metadata of the hardware specification

    type: dict

Output

Important

returns: Unique Id of hardware specifications

return type: str

Example

>>> asset_uid = client.hardware_specifications.get_id(hw_spec_details)
get_id_by_name(hw_spec_name)[source]

Get Unique Id of hardware specification for the given name.

Parameters

Important

  1. hw_spec_name: name of the hardware spec

    type: str

Output

Important

returns: Unique Id of hardware specification

return type: str

Example

>>> asset_uid = client.hardware_specifications.get_id_by_name(hw_spec_name)
static get_uid(hw_spec_details)[source]

Get UID of hardware specifications asset. Deprecated!! Use get_id(hw_spec_details) instead

Parameters

Important

  1. hw_spec_details: Metadata of the hardware specification

    type: dict

Output

Important

returns: Unique Id of hardware specifications

return type: str

Example

>>> asset_uid = client.hardware_specifications.get_uid(hw_spec_details)
get_uid_by_name(hw_spec_name)[source]

Get Unique Id of hardware specification for the given name. Deprecated!! Use get_id_by_name(self, hw_spec_name) instead

Parameters

Important

  1. hw_spec_name: name of the hardware spec

    type: str

Output

Important

returns: Unique Id of hardware specification

return type: str

Example

>>> asset_uid = client.hardware_specifications.get_uid_by_name(hw_spec_name)
list(name=None)[source]

List hardware specifications.

Parameters

Important

  1. name: Name of the hardware specification (optional)

    type: str

Output

Important

This method only prints the list of all hardware specifications in a table format.

return type: None

Example

>>> client.hardware_specifications.list()
class metanames.HwSpecMetaNames[source]

Set of MetaNames for Hardware Specifications.

Available MetaNames:

  • NAME (str, required). Example value: Python 3.6 with pre-installed ML package

  • DESCRIPTION (str, optional). Example value: my_description

  • HARDWARE_CONFIGURATION (dict, optional). Example value: {}

model_definitions

class client.ModelDefinition(client)[source]

Store and manage your model_definitions.

ConfigurationMetaNames = <ibm_watson_machine_learning.metanames.ModelDefinitionMetaNames object>

MetaNames for model_definition creation.

create_revision(model_definition_uid)[source]

Creates a revision for the given model_definition. Revisions are immutable once created. The metadata and attachment of the model_definition are taken and a revision is created from them.

Parameters

model_definition_uid ({str_type}) – model_definition ID. Mandatory

Returns

stored model_definition revisions metadata

Return type

dict

>>> model_definition_revision = client.model_definitions.create_revision(model_definition_id)
delete(model_definition_uid)[source]

Delete a stored model_definition.

Parameters

Important

  1. model_definition_uid: Unique Id of stored Model definition

    type: str

Output

Important

returns: status (“SUCCESS” or “FAILED”)

return type: str

Example

>>> client.model_definitions.delete(model_definition_uid)
download(model_definition_uid, filename, rev_id=None)[source]

Download the content of a script asset.

Parameters

Important

  1. model_definition_uid: The Unique Id of the model_definition asset to be downloaded

    type: str

  2. filename: filename to be used for the downloaded file

    type: str

  3. rev_id: Revision id

    type: str

Output

returns: Path to the downloaded asset content

return type: str

Example

>>> client.model_definitions.download(model_definition_uid,"script_file.zip")
get_details(model_definition_uid)[source]

Get metadata of stored model_definition.

Parameters

Important

  1. model_definition_uid: Unique Id of model_definition

    type: str

Output

Important

returns: metadata of model definition

return type: dict

Example

>>> model_definition_details = client.model_definitions.get_details(model_definition_uid)
get_href(model_definition_details)[source]

Get href of stored model_definition.

Parameters

model_definition_details (dict) – stored model_definition details

Returns

href of stored model_definition

Return type

{str_type}

Example:

>>> model_definition_href = client.model_definitions.get_href(model_definition_details)
get_id(model_definition_details)[source]

Get Unique Id of stored model_definition asset.

Parameters

Important

  1. model_definition_details: Metadata of the stored model_definition asset

    type: dict

Output

Important

returns: Unique Id of stored model_definition asset

return type: str

Example

>>> model_definition_id = client.model_definitions.get_id(model_definition_details)
get_revision_details(model_definition_uid, rev_uid=None)[source]

Get metadata of model_definition_uid revision.

Parameters
  • model_definition_uid ({str_type}) – model_definition ID. Mandatory

  • rev_uid (int) – Revision ID. If this parameter is not provided, the latest revision is returned if it exists; otherwise an error is raised

Returns

stored model definitions metadata

Return type

dict

A way you might use me is:

>>> script_details = client.model_definitions.get_revision_details(model_definition_uid, rev_uid)
get_uid(model_definition_details)[source]

Get uid of stored model_definition. Deprecated!! Use get_id(model_definition_details) instead

Parameters

model_definition_details (dict) – stored model_definition details

Returns

uid of stored model_definition

Return type

{str_type}

A way you might use me is:

>>> model_definition_uid = client.model_definitions.get_uid(model_definition_details)
list(limit=None)[source]

List stored model_definition assets. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all model_definition assets in a table format.

return type: None

Example

>>> client.model_definitions.list()
list_revisions(model_definition_uid, limit=None)[source]

List stored model_definition revisions. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. model_definition_uid: Unique id of model_definition

    type: str

  2. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all model_definition revisions in a table format.

return type: None

Example

>>> client.model_definitions.list_revisions(model_definition_uid)
store(model_definition, meta_props)[source]

Create a model_definition.

Parameters

Important

  1. meta_props: meta data of the model_definition configuration. To see available meta names use:

    >>> client.model_definitions.ConfigurationMetaNames.get()
    

    type: dict

  2. model_definition: Path to the content file to be uploaded

    type: str

Output

Important

returns: Metadata of the model_definition created

return type: dict

Example

>>> model_definition_details = client.model_definitions.store(model_definition, meta_props)
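
A fuller sketch of the meta_props argument, assembled from the MetaNames marked required in the ModelDefinitionMetaNames listing below (all values are illustrative placeholders):

>>> meta_props = {
>>>     client.model_definitions.ConfigurationMetaNames.NAME: "my_model_definition",
>>>     client.model_definitions.ConfigurationMetaNames.PLATFORM: {"name": "python", "versions": ["3.5"]},
>>>     client.model_definitions.ConfigurationMetaNames.VERSION: "1.0"
>>> }
>>> model_definition_details = client.model_definitions.store(model_definition, meta_props)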
update(model_definition_id, meta_props=None, file_path=None)[source]

Update model_definition with either metadata or attachment or both.

Parameters
  • model_definition_id (str) – model_definition ID

  • meta_props (dict) – meta data of the model_definition to be updated (optional)

  • file_path (str) – Path to the content file to be uploaded (optional)

Returns

updated metadata of model_definition

Return type

dict

A way you might use me is:

>>> model_definition_details = client.model_definitions.update(model_definition_id, meta_props, file_path)

class metanames.ModelDefinitionMetaNames[source]

Set of MetaNames for Model Definition.

Available MetaNames:

  • NAME (str, required). Example value: my_model_definition

  • DESCRIPTION (str, optional). Example value: my model_definition

  • PLATFORM (dict, required). Example value: {'name': 'python', 'versions': ['3.5']}. Schema: {'name(required)': 'string', 'versions(required)': ['versions']}

  • VERSION (str, required). Example value: 1.0

  • COMMAND (str, optional). Example value: python3 convolutional_network.py

  • CUSTOM (dict, optional). Example value: {'field1': 'value1'}

  • SPACE_UID (str, optional). Example value: 3c1ce536-20dc-426e-aac7-7284cf3befc6

package_extensions

class client.PkgExtn(client)[source]

Store and manage your software package extension specs.

ConfigurationMetaNames = <ibm_watson_machine_learning.metanames.PkgExtnMetaNames object>

MetaNames for Package Extensions creation.

delete(pkg_extn_uid)[source]

Delete a package extension.

Parameters

Important

  1. pkg_extn_uid: Unique Id of package extension

    type: str

Output

Important

returns: status (“SUCCESS” or “FAILED”)

return type: str

Example

>>> client.package_extensions.delete(pkg_extn_uid)
download(pkg_extn_id, filename)[source]

Download a package extension.

Parameters

Important

  1. pkg_extn_id: The Unique Id of the package extension to be downloaded

    type: str

  2. filename: filename to be used for the downloaded file

    type: str

Output

returns: Path to the downloaded package extension content

return type: str

Example

>>> client.package_extensions.download(pkg_extn_id,"sample_conda.yml")
get_details(pkg_extn_uid)[source]

Get package extensions details.

Parameters

Important

  1. pkg_extn_uid: Unique Id of the package extension

    type: str

Output

Important

returns: Metadata of the package extension

return type: dict

Example

>>> pkg_extn_details = client.package_extensions.get_details(pkg_extn_uid)
static get_href(pkg_extn_details)[source]

Get url of stored package extensions.

Parameters

Important

  1. pkg_extn_details: package extension details

    type: dict

Output

Important

returns: href of package extensions details

return type: str

Example

>>> pkg_extn_details = client.package_extensions.get_details(pkg_extn_uid)
>>> pkg_extn_href = client.package_extensions.get_href(pkg_extn_details)
static get_id(pkg_extn_details)[source]

Get Unique Id of package extensions.

Parameters

Important

  1. pkg_extn_details: Metadata of the package extension

    type: dict

Output

Important

returns: Unique Id of package extension

return type: str

Example

>>> asset_id = client.package_extensions.get_id(pkg_extn_details)
get_id_by_name(pkg_extn_name)[source]

Get Unique Id of a package extension for the given name.

Parameters

Important

  1. pkg_extn_name: Name of the package extension

    type: str

Output

Important

returns: Unique Id of package extension

return type: str

Example

>>> asset_id = client.package_extensions.get_id_by_name(pkg_extn_name)
static get_uid(pkg_extn_details)[source]

Get Unique Id of package extensions. Deprecated!! use get_id(pkg_extn_details) instead

Parameters

Important

  1. pkg_extn_details: Metadata of the package extension

    type: dict

Output

Important

returns: Unique Id of package extension

return type: str

Example

>>> asset_uid = client.package_extensions.get_uid(pkg_extn_details)
get_uid_by_name(pkg_extn_name)[source]

Get UID of package extensions. Deprecated!! Use get_id_by_name(pkg_extn_name) instead

Parameters

Important

  1. pkg_extn_name: Name of the package extension

    type: str

Output

Important

returns: Unique Id of package extension

return type: str

Example

>>> asset_uid = client.package_extensions.get_uid_by_name(pkg_extn_name)
list()[source]

List package extensions.

Output

Important

This method only prints the list of all package extensions in a table format.

return type: None

Example

>>> client.package_extensions.list()
store(meta_props, file_path)[source]

Create a package extension.

Parameters

Important

  1. meta_props: meta data of the package extension. To see available meta names use:

    >>> client.package_extensions.ConfigurationMetaNames.get()
    

    type: dict

  2. file_path: path to the content file to be uploaded

    type: str

Output

Important

returns: metadata of the package extensions

return type: dict

Example

>>> meta_props = {
>>>    client.package_extensions.ConfigurationMetaNames.NAME: "skl_pipeline_heart_problem_prediction",
>>>    client.package_extensions.ConfigurationMetaNames.DESCRIPTION: "description scikit-learn_0.20",
>>>    client.package_extensions.ConfigurationMetaNames.TYPE: "conda_yml",
>>> }
>>> pkg_extn_details = client.package_extensions.store(meta_props=meta_props,file_path="/path/to/file")
class metanames.PkgExtnMetaNames[source]

Set of MetaNames for Package Extensions Specs.

Available MetaNames:

  • NAME (str, required). Example value: Python 3.6 with pre-installed ML package

  • DESCRIPTION (str, optional). Example value: my_description

  • TYPE (str, required). Example value: conda_yml/custom_library

repository - Use this for working with models, functions, experiments and pipelines

class client.Repository(client)[source]

Store and manage your models, functions, spaces, pipelines and experiments using Watson Machine Learning Repository.

Important

  1. To view ModelMetaNames, use:

    >>> client.repository.ModelMetaNames.show()
    
  2. To view ExperimentMetaNames, use:

    >>> client.repository.ExperimentMetaNames.show()
    
  3. To view FunctionMetaNames, use:

    >>> client.repository.FunctionMetaNames.show()
    
  4. To view PipelineMetaNames, use:

    >>> client.repository.PipelineMetaNames.show()
    
clone(artifact_id, space_id=None, action='copy', rev_id=None)[source]

Creates a new resource (models, runtimes, libraries, experiments, functions, pipelines) identical to the given artifact, either in the same space or in a new space. All dependent assets will be cloned too.

Parameters

Important

  1. artifact_id: Guid of the artifact to be cloned

    type: str

  2. space_id: Guid of the space to which the model needs to be cloned. (optional)

    type: str

  3. action: Action specifying “copy” or “move”. (optional)

    type: str

  4. rev_id: Revision ID of the artifact. (optional)

    type: str

Output

Important

returns: Metadata of the model cloned.

return type: dict

Example
>>> client.repository.clone(artifact_id=artifact_id,space_id=space_uid,action="copy")

Note

  • If revision id is not specified, all revisions of the artifact are cloned

  • Default value of the parameter action is copy

  • Space guid is mandatory for move action
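
Since the space guid is mandatory for the move action, a move call might look like this sketch (the identifiers are placeholders):

>>> client.repository.clone(artifact_id=artifact_id, space_id=space_uid, action="move")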

create_experiment_revision(experiment_uid)[source]

Create a new version for an experiment.

Parameters

experiment_uid ({str_type}) – Unique ID of the experiment.

Returns

experiment version details

Return type

dict

Example:

>>> stored_experiment_revision_details = client.repository.create_experiment_revision(experiment_uid)

create_function_revision(function_uid)[source]

Create a new version for a function.

Parameters

function_uid ({str_type}) – Unique ID of the function.

Returns

function version details

Return type

dict

Example:

>>> stored_function_revision_details = client.repository.create_function_revision(function_uid)

create_member(space_uid, meta_props)[source]

Create a member within a space.

Parameters

Important

  1. meta_props: meta data of the member configuration. To see available meta names use:

    >>> client.spaces.MemberMetaNames.get()
    

    type: dict

Output

Important

returns: metadata of the stored member

return type: dict

Note

  • client.spaces.MemberMetaNames.ROLE can be any one of the following: “viewer”, “editor”, “admin”

  • client.spaces.MemberMetaNames.IDENTITY_TYPE can be any one of the following: “user”, “service”

  • client.spaces.MemberMetaNames.IDENTITY can be either a service-ID or an IAM-userID

Example

>>> metadata = {
>>>  client.spaces.MemberMetaNames.ROLE:"Admin",
>>>  client.spaces.MemberMetaNames.IDENTITY:"iam-ServiceId-5a216e59-6592-43b9-8669-625d341aca71",
>>>  client.spaces.MemberMetaNames.IDENTITY_TYPE:"service"
>>> }
>>> members_details = client.repository.create_member(space_uid=space_id, meta_props=metadata)
create_model_revision(model_uid)[source]

Create a new version for a model.

Parameters

model_uid ({str_type}) – model ID

Returns

model version details

Return type

dict

Example:

>>> stored_model_revision_details = client.repository.create_model_revision(model_uid="MODELID")

create_pipeline_revision(pipeline_uid)[source]

Create a new version for a pipeline.

Parameters

pipeline_uid ({str_type}) – Unique ID of the Pipeline

Returns

pipeline version details

Return type

dict

Example:

>>> stored_pipeline_revision_details = client.repository.create_pipeline_revision(pipeline_uid)

create_revision(artifact_uid)[source]

Create revision for passed artifact_uid.

Parameters

artifact_uid ({str_type}) – unique id of stored model, experiment, function or pipelines

Returns

artifact new revision metadata

Return type

dict

A way you might use me is:

>>> details = client.repository.create_revision(artifact_uid)
delete(artifact_uid)[source]

Delete model, experiment, pipeline, space, runtime, library or function from repository.

Parameters

Important

  1. artifact_uid: Unique id of stored model, experiment, function, pipeline, space, library or runtime

    type: str

Output

Important

returns: status (“SUCCESS” or “FAILED”)

return type: str

Example

>>> client.repository.delete(artifact_uid)
download(artifact_uid, filename='downloaded_artifact.tar.gz', rev_uid=None, format=None)[source]

Downloads the configuration file for the artifact with the specified uid.

Parameters

Important

  1. artifact_uid: Unique Id of model, function, runtime or library

    type: str

  2. filename: Name of the file to which the artifact content has to be downloaded

    default value: downloaded_artifact.tar.gz

    type: str

Output

Important

returns: Path to the downloaded artifact content

return type: str

Note

If filename is not specified, the default filename is “downloaded_artifact.tar.gz”.

Example

>>> client.repository.download(model_uid, 'my_model.tar.gz')
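
If filename is omitted, the content is saved under the documented default name; a minimal sketch relying on that default:

>>> path = client.repository.download(model_uid)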
get_details(artifact_uid=None)[source]

Get metadata of stored artifacts. If artifact_uid is not specified, metadata of all models, experiments, functions, pipelines, spaces, libraries and runtimes is returned.

Parameters

Important

  1. artifact_uid: Unique Id of stored model, experiment, function, pipeline, space, library or runtime (optional)

    type: str

Output

Important

returns: stored artifact(s) metadata

return type: dict

dict (if artifact_uid is not None) or {“resources”: [dict]} (if artifact_uid is None)

Note

If artifact_uid is not specified, all models, experiments, functions, pipelines, spaces, libraries and runtimes metadata is fetched

Example

>>> details = client.repository.get_details(artifact_uid)
>>> details = client.repository.get_details()
get_experiment_details(experiment_uid=None, limit=None)[source]

Get metadata of experiment(s). If no experiment_uid is specified, all experiments metadata is returned.

Parameters

Important

  1. experiment_uid: Unique Id of experiment (optional)

    type: str

  2. limit: limit number of fetched records (optional)

    type: int

Output

Important

returns: experiment(s) metadata

return type: dict

dict (if experiment_uid is not None) or {“resources”: [dict]} (if experiment_uid is None)

Note

If experiment_uid is not specified, all experiments metadata is fetched

Example

>>> experiment_details = client.repository.get_experiment_details(experiment_uid)
static get_experiment_href(experiment_details)[source]

Get href of stored experiment.

Parameters

Important

  1. experiment_details: Metadata of the stored experiment

    type: dict

Output

Important

returns: href of stored experiment

return type: str

Example
>>> experiment_details = client.repository.get_experiment_details(experiment_uid)
>>> experiment_href = client.repository.get_experiment_href(experiment_details)
static get_experiment_id(experiment_details)[source]

Get Unique Id of stored experiment.

Parameters

Important

  1. experiment_details: Metadata of the stored experiment

    type: dict

Output

Important

returns: Unique Id of stored experiment

return type: str

Example
>>> experiment_details = client.repository.get_experiment_details(experiment_uid)
>>> experiment_uid = client.repository.get_experiment_id(experiment_details)
get_experiment_revision_details(experiment_uid, rev_id)[source]

Get metadata of experiment revision.

Parameters

Important

  1. experiment_uid: Unique Id of experiment

    type: str

  2. rev_id: Unique id of experiment revision

    type: str

Output

Important

returns: experiment revision metadata

return type: dict

Example
>>> experiment_rev_details = client.repository.get_experiment_revision_details(experiment_uid, rev_id)
static get_experiment_uid(experiment_details)[source]

Get Unique Id of stored experiment.

Parameters

Important

  1. experiment_details: Metadata of the stored experiment

    type: dict

Output

Important

returns: Unique Id of stored experiment

return type: str

Example
>>> experiment_details = client.repository.get_experiment_details(experiment_uid)
>>> experiment_uid = client.repository.get_experiment_uid(experiment_details)
get_function_details(function_uid=None, limit=None)[source]

Get metadata of function(s). If no function_uid is specified, all functions metadata is returned.

Parameters

Important

  1. function_uid: Unique Id of function (optional)

    type: str

  2. limit: limit number of fetched records (optional)

    type: int

Output

Important

returns: function(s) metadata

return type: dict

dict (if function_uid is not None) or {“resources”: [dict]} (if function_uid is None)

Note

If function_uid is not specified, all functions metadata is fetched

Example

>>> function_details = client.repository.get_function_details(function_uid)
>>> function_details = client.repository.get_function_details()
static get_function_href(function_details)[source]

Get href of stored function.

Parameters

Important

  1. function_details: Metadata of the stored function

    type: dict

Output

Important

returns: href of stored function

return type: str

Example

>>> function_details = client.repository.get_function_details(function_uid)
>>> function_url = client.repository.get_function_href(function_details)
static get_function_id(function_details)[source]

Get Id of stored function.

Parameters

Important

  1. function_details: Metadata of the stored function

    type: dict

Output

Important

returns: Id of stored function

return type: str

Example

>>> function_details = client.repository.get_function_details(function_uid)
>>> function_id = client.repository.get_function_id(function_details)
get_function_revision_details(function_uid, rev_id)[source]

Get metadata of function revision.

Parameters

Important

  1. function_uid: Unique Id of function

    type: str

  2. rev_id: Unique Id of function revision

    type: str

Output

Important

returns: function revision metadata

return type: dict

Example

>>> function_rev_details = client.repository.get_function_revision_details(function_uid, rev_id)
static get_function_uid(function_details)[source]

Get Unique Id of stored function. Deprecated!! Use get_function_id(function_details) instead

Parameters

Important

  1. function_details: Metadata of the stored function

    type: dict

Output

Important

returns: Unique Id of stored function

return type: str

Example

>>> function_details = client.repository.get_function_details(function_uid)
>>> function_uid = client.repository.get_function_uid(function_details)
static get_member_href(member_details)[source]

Get member_href from member details.

Parameters

Important

  1. member_details: Metadata of the stored member

    type: dict

Output

Important

returns: member href

return type: str

Example

>>> member_details = client.repository.get_members_details(space_uid, member_id)
>>> member_href = client.repository.get_member_href(member_details)
static get_member_uid(member_details)[source]

Get member_uid from member details.

Parameters

Important

  1. member_details: Metadata of the created member

    type: dict

Output

Important

returns: unique id of member

return type: str

Example

>>> member_details = client.repository.get_members_details(space_uid, member_id)
>>> member_id = client.repository.get_member_uid(member_details)
get_members_details(space_uid, member_id=None, limit=None)[source]

Get metadata of members associated with a space. If member_id is not specified, metadata of all members is returned.

Parameters

Important

  1. space_uid: Unique id of the space

    type: str

  2. member_id: Unique id of the member (optional)

    type: str

  3. limit: limit number of fetched records (optional)

    type: int

Output

Important

returns: metadata of member(s) of a space

return type: dict (if member_id is not None) or {“resources”: [dict]} (if member_id is None)

Note

If member id is not specified, all members metadata is fetched

Example

>>> member_details = client.repository.get_members_details(space_uid, member_id)
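
To fetch metadata of all members of the space, omit member_id (this uses the documented default of member_id=None):

>>> members_details = client.repository.get_members_details(space_uid)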
get_model_details(model_uid=None, limit=None)[source]

Get metadata of stored model. If model_uid is not specified, metadata of all models is returned.

Parameters

Important

  1. model_uid: Unique Id of Model (optional)

    type: str

  2. limit: limit number of fetched records (optional)

    type: int

Output

Important

returns: metadata of model(s)

return type: dict (if model_uid is not None) or {“resources”: [dict]} (if model_uid is None)

Note

If model_uid is not specified, all models metadata is fetched

Example

>>> model_details = client.repository.get_model_details(model_uid)
>>> models_details = client.repository.get_model_details()
static get_model_href(model_details)[source]

Get href of stored model.

Parameters

Important

  1. model_details: Metadata of the stored model

    type: dict

Output

Important

returns: href of stored model

return type: str

Example

>>> model_details = client.repository.get_model_details(model_uid)
>>> model_href = client.repository.get_model_href(model_details)
static get_model_id(model_details)[source]

Get Unique Id of stored model.

Parameters

Important

  1. model_details: Metadata of the stored model

    type: dict

Output

Important

returns: Unique Id of stored model

return type: str

Example

>>> model_details = client.repository.get_model_details(model_uid)
>>> model_uid = client.repository.get_model_id(model_details)
get_model_revision_details(model_uid, rev_uid)[source]

Get metadata of model revision.

Parameters

Important

  1. model_uid: Unique Id of model

    type: str

  2. rev_uid: Unique id of model revision

    type: str

Output

Important

returns: model revision metadata

return type: dict

Example
>>> model_rev_details = client.repository.get_model_revision_details(model_uid, rev_uid)
static get_model_uid(model_details)[source]

Get Unique Id of stored model.

Parameters

Important

  1. model_details: Metadata of the stored model

    type: dict

Output

Important

returns: Unique Id of stored model

return type: str

Example

>>> model_details = client.repository.get_model_details(model_uid)
>>> model_uid = client.repository.get_model_uid(model_details)
get_pipeline_details(pipeline_uid=None, limit=None)[source]

Get metadata of stored pipelines. If pipeline_uid is not specified, metadata of all pipelines is returned.

Parameters

Important

  1. pipeline_uid: Unique id of Pipeline (optional)

    type: str

  2. limit: limit number of fetched records (optional)

    type: int

Output

Important

returns: metadata of pipeline(s)

return type: dict (if pipeline_uid is not None) or {“resources”: [dict]} (if pipeline_uid is None)

Note

If pipeline_uid is not specified, all pipelines metadata is fetched

Example

>>> pipeline_details = client.repository.get_pipeline_details(pipeline_uid)
>>> pipeline_details = client.repository.get_pipeline_details()
static get_pipeline_href(pipeline_details)[source]

Get pipeline_href from pipeline details.

Parameters

Important

  1. pipeline_details: Metadata of the stored pipeline

    type: dict

Output

Important

returns: pipeline href

return type: str

Example

>>> pipeline_details = client.repository.get_pipeline_details(pipeline_uid)
>>> pipeline_href = client.repository.get_pipeline_href(pipeline_details)
static get_pipeline_id(pipeline_details)[source]

Get pipeline_uid from pipeline details.

Parameters

Important

  1. pipeline_details: Metadata of the stored pipeline

    type: dict

Output

Important

returns: Unique Id of pipeline

return type: str

Example

>>> pipeline_details = client.repository.get_pipeline_details(pipeline_uid)
>>> pipeline_uid = client.repository.get_pipeline_id(pipeline_details)
get_pipeline_revision_details(pipeline_uid, rev_id)[source]

Get metadata of stored pipeline revision.

Parameters

Important

  1. pipeline_uid: Unique id of Pipeline

    type: str

  2. rev_id: Unique id of Pipeline revision

    type: str

Output

Important

returns: metadata of the pipeline revision

return type: dict

Example

>>> pipeline_rev_details = client.repository.get_pipeline_revision_details(pipeline_uid, rev_id)
static get_pipeline_uid(pipeline_details)[source]

Get pipeline_uid from pipeline details.

Parameters

Important

  1. pipeline_details: Metadata of the stored pipeline

    type: dict

Output

Important

returns: Unique Id of pipeline

return type: str

Example

>>> pipeline_details = client.repository.get_pipeline_details(pipeline_uid)
>>> pipeline_uid = client.repository.get_pipeline_uid(pipeline_details)
get_space_details(space_uid=None, limit=None)[source]

Get metadata of stored space. If space_uid is not specified, metadata of all spaces is returned.

Parameters

Important

  1. space_uid: Unique id of Space (optional)

    type: str

  2. limit: limit number of fetched records (optional)

    type: int

Output

Important

returns: metadata of space(s)

return type: dict (if space_uid is not None) or {“resources”: [dict]} (if space_uid is None)

Note

If space_uid is not specified, all spaces metadata is fetched

Example

>>> space_details = client.repository.get_space_details(space_uid)
>>> space_details = client.repository.get_space_details()
static get_space_href(space_details)[source]

Get space_href from space details.

Parameters

Important

  1. space_details: Metadata of the stored space

    type: dict

Output

Important

returns: space href

return type: str

Example

>>> space_details = client.repository.get_space_details(space_uid)
>>> space_href = client.repository.get_space_href(space_details)
static get_space_uid(space_details)[source]

Get space_uid from space details.

Parameters

Important

  1. space_details: Metadata of the stored space

    type: dict

Output

Important

returns: Unique Id of space

return type: str

Example

>>> space_details = client.repository.get_space_details(space_uid)
>>> space_uid = client.repository.get_space_uid(space_details)
list()[source]

List stored models, pipelines, runtimes, libraries, functions, spaces and experiments. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all models, pipelines, runtimes, libraries, functions, spaces and experiments in a table format.

return type: None

Example

>>> client.repository.list()
list_experiments(limit=None)[source]

List stored experiments. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all experiments in a table format.

return type: None

Example

>>> client.repository.list_experiments()
list_experiments_revisions(experiment_uid, limit=None)[source]

List stored experiment revisions. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. experiment_uid: Unique Id of the experiment

    type: str

  2. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all revisions of given experiment ID in a table format.

return type: None

Example

>>> client.repository.list_experiments_revisions(experiment_uid)
list_functions(limit=None)[source]

List stored functions. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all functions in a table format.

return type: None

Example

>>> client.repository.list_functions()
list_functions_revisions(function_uid, limit=None)[source]

List stored function revisions. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. function_uid: Unique Id of the function

    type: str

  2. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all revisions of given function ID in a table format.

return type: None

Example

>>> client.repository.list_functions_revisions(function_uid)
list_members(space_uid, limit=None)[source]

List stored members of a space. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. space_uid: Unique id of the space

    type: str

  2. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all members associated with a space in a table format.

return type: None

Example

>>> client.repository.list_members(space_uid)
list_models(limit=None)[source]

List stored models. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all models in a table format.

return type: None

Example

>>> client.repository.list_models()
list_models_revisions(model_uid, limit=None)[source]

List stored model revisions. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. model_uid: Unique Id of the model

    type: str

  2. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all revisions of given model ID in a table format.

return type: None

Example

>>> client.repository.list_models_revisions(model_uid)
list_pipelines(limit=None)[source]

List stored pipelines. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all pipelines in a table format.

return type: None

Example

>>> client.repository.list_pipelines()
list_pipelines_revisions(pipeline_uid, limit=None)[source]

List stored pipeline revisions. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. pipeline_uid: Unique Id of the pipeline

    type: str

  2. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all revisions of given pipeline ID in a table format.

return type: None

Example

>>> client.repository.list_pipelines_revisions(pipeline_uid)
list_spaces(limit=None)[source]

List stored spaces. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all spaces in a table format.

return type: None

Example

>>> client.repository.list_spaces()
load(artifact_uid)[source]

Load a model from the repository into an object in the local environment.

Parameters

Important

  1. artifact_uid: Unique Id of model

    type: str

Output

Important

returns: model object

return type: object

Example

>>> model_obj = client.repository.load(model_uid)
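
The returned object is the model itself, so it can be used locally; for example, a model stored from scikit-learn could be scored directly (X_test here is a hypothetical feature array from your own code):

>>> predictions = model_obj.predict(X_test)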
store_experiment(meta_props)[source]

Create an experiment.

Parameters

Important

  1. meta_props: meta data of the experiment configuration. To see available meta names use:

    >>> client.experiments.ConfigurationMetaNames.get()
    

    type: dict

Output

Important

returns: Metadata of the experiment created

return type: dict

Example

>>> metadata = {
>>>  client.experiments.ConfigurationMetaNames.NAME: 'my_experiment',
>>>  client.experiments.ConfigurationMetaNames.EVALUATION_METRICS: ['accuracy'],
>>>  client.experiments.ConfigurationMetaNames.TRAINING_REFERENCES: [
>>>      {
>>>        'pipeline': {'href': pipeline_href_1}
>>>
>>>      },
>>>      {
>>>        'pipeline': {'href':pipeline_href_2}
>>>      },
>>>   ]
>>> }
>>> experiment_details = client.repository.store_experiment(meta_props=metadata)
>>> experiment_href = client.repository.get_experiment_href(experiment_details)
store_function(function, meta_props)[source]

Create a function.

Parameters

Important

  1. meta_props: meta data or name of the function. To see available meta names use:

    >>> client.repository.FunctionMetaNames.show()
    

    type: dict

  2. function: path to a file with archived function content, or the function itself. One of the following may be used as ‘function’:

    • a filepath to a gz file

    • a ‘score’ function reference, where the referenced function is the one that will be deployed

    • a generator function that takes no arguments (or only arguments which all have primitive Python default values) and returns a ‘score’ function

    type: str or function

Output

Important

returns: Metadata of the function created.

return type: dict

Example

The simplest use is with a score function:

>>> meta_props = {
>>>    client.repository.FunctionMetaNames.NAME: "function",
>>>    client.repository.FunctionMetaNames.DESCRIPTION: "This is ai function",
>>>    client.repository.FunctionMetaNames.SOFTWARE_SPEC_UID: "53dc4cf1-252f-424b-b52d-5cdd9814987f"}
>>> def score(payload):
>>>      values = [[row[0]*row[1]] for row in payload['values']]
>>>      return {'fields': ['multiplication'], 'values': values}
>>> stored_function_details = client.repository.store_function(score, meta_props)
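
Because score here is a plain Python function, you can sanity-check the expected payload format locally before storing it:

>>> score({'values': [[2, 3], [4, 5]]})
{'fields': ['multiplication'], 'values': [[6], [20]]}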

A more interesting example uses a generator function, which makes it possible to pass in variables:

>>> wml_creds = {...}
>>> def gen_function(wml_credentials=wml_creds, x=2):
        def f(payload):
            values = [[row[0]*row[1]*x] for row in payload['values']]
            return {'fields': ['multiplication'], 'values': values}
        return f
>>> stored_function_details = client.repository.store_function(gen_function, meta_props)
store_model(model, meta_props=None, training_data=None, training_target=None, pipeline=None, feature_names=None, label_column_names=None, subtrainingId=None)[source]

Create a model.

Parameters

Important

  1. model:

    Can be one of following:

    • The trained model object:

      • scikit-learn

      • xgboost

      • spark (PipelineModel)

    • path to saved model in format:

      • keras (.tgz)

      • pmml (.xml)

      • scikit-learn (.tar.gz)

      • tensorflow (.tar.gz)

      • spss (.str)

    • directory containing model file(s):

      • scikit-learn

      • xgboost

      • tensorflow

    • unique id of trained model

  2. training_data: Spark DataFrame supported for spark models. Pandas dataframe, numpy.ndarray or array supported for scikit-learn models

    type: spark dataframe, pandas dataframe, numpy.ndarray or array

  3. meta_props: meta data of the model configuration. To see available meta names use:

    >>> client.repository.ModelMetaNames.get()
    

    type: dict

  4. training_target: array with labels required for scikit-learn models

    type: array

  5. pipeline: pipeline required for spark mllib models

    type: object

  6. feature_names: Feature names for the training data in case of Scikit-Learn/XGBoost models. This is applicable only in the case where the training data is not of type - pandas.DataFrame.

    type: numpy.ndarray or list

  7. label_column_names: Label column names of the trained Scikit-Learn/XGBoost models.

    type: numpy.ndarray or list

Output

Important

returns: Metadata of the model created

return type: dict

Note

  • For a keras model, model content is expected to contain a .h5 file and an archived version of it.

  • feature_names is an optional argument containing the feature names for the training data in case of Scikit-Learn/XGBoost models. Valid types are numpy.ndarray and list. This is applicable only in the case where the training data is not of type - pandas.DataFrame.

  • If the training data is of type pandas.DataFrame and feature_names are provided, feature_names are ignored.

  • For a single input data schema, the value can be a single dictionary (deprecated; use a list even for a single schema) or a list. You can provide multiple schemas as dictionaries inside a list.

Example

>>> stored_model_details = client.repository.store_model(model, name)

In more complicated cases you should create proper metadata, similar to this:

>>> sw_spec_id = client.software_specifications.get_id_by_name('scikit-learn_0.22-py3.6')
>>> sw_spec_id
>>> metadata = {
>>>        client.repository.ModelMetaNames.NAME: 'customer satisfaction prediction model',
>>>        client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_id,
>>>        client.repository.ModelMetaNames.TYPE: 'scikit-learn_0.22'
>>>}
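
With metadata like the above, a trained scikit-learn object can then be stored directly; model, X_train and y_train below are assumed to come from your own training code:

>>> stored_model_details = client.repository.store_model(model, meta_props=metadata, training_data=X_train, training_target=y_train)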

If you want to provide the input data schema of the model, you can include it as part of the metadata:

>>> sw_spec_id = client.software_specifications.get_id_by_name('spss-modeler_18.1')
>>> sw_spec_id
>>> metadata = {
>>>        client.repository.ModelMetaNames.NAME: 'customer satisfaction prediction model',
>>>        client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sw_spec_id,
>>>        client.repository.ModelMetaNames.TYPE: 'spss-modeler_18.1',
>>>        client.repository.ModelMetaNames.INPUT_DATA_SCHEMA: [{'id': 'test',
>>>                                                             'type': 'list',
>>>                                                             'fields': [{'name': 'age', 'type': 'float'},
>>>                                                                        {'name': 'sex', 'type': 'float'},
>>>                                                                         {'name': 'fbs', 'type': 'float'},
>>>                                                                         {'name': 'restbp', 'type': 'float'}]
>>>                                                               },
>>>                                                               {'id': 'test2',
>>>                                                                'type': 'list',
>>>                                                                'fields': [{'name': 'age', 'type': 'float'},
>>>                                                                           {'name': 'sex', 'type': 'float'},
>>>                                                                           {'name': 'fbs', 'type': 'float'},
>>>                                                                           {'name': 'restbp', 'type': 'float'}]
>>>                                                               }]
>>>             }

A way you might use me with local tar.gz containing model:

>>> stored_model_details = client.repository.store_model(path_to_tar_gz, meta_props=metadata, training_data=None)

A way you might use me with local directory containing model file(s):

>>> stored_model_details = client.repository.store_model(path_to_model_directory, meta_props=metadata, training_data=None)

A way you might use me with trained model guid:

>>> stored_model_details = client.repository.store_model(trained_model_guid, meta_props=metadata, training_data=None)
store_pipeline(meta_props)[source]

Create a pipeline.

Parameters

Important

  1. meta_props: meta data of the pipeline configuration. To see available meta names use:

    >>> client.pipelines.ConfigurationMetaNames.get()
    

    type: dict

Output

Important

returns: Metadata of the pipeline created

return type: dict

Example

>>> metadata = {
>>>  client.pipelines.ConfigurationMetaNames.NAME: 'my_training_definition',
>>>  client.pipelines.ConfigurationMetaNames.DOCUMENT: {"doc_type":"pipeline","version": "2.0","primary_pipeline": "dlaas_only","pipelines": [{"id": "dlaas_only","runtime_ref": "hybrid","nodes": [{"id": "training","type": "model_node","op": "dl_train","runtime_ref": "DL","inputs": [],"outputs": [],"parameters": {"name": "tf-mnist","description": "Simple MNIST model implemented in TF","command": "python3 convolutional_network.py --trainImagesFile ${DATA_DIR}/train-images-idx3-ubyte.gz --trainLabelsFile ${DATA_DIR}/train-labels-idx1-ubyte.gz --testImagesFile ${DATA_DIR}/t10k-images-idx3-ubyte.gz --testLabelsFile ${DATA_DIR}/t10k-labels-idx1-ubyte.gz --learningRate 0.001 --trainingIters 6000","compute": {"name": "k80","nodes": 1},"training_lib_href":"/v4/libraries/64758251-bt01-4aa5-a7ay-72639e2ff4d2/content"},"target_bucket": "wml-dev-results"}]}]}}
>>> pipeline_details = client.repository.store_pipeline(meta_props=metadata)
>>> pipeline_href = client.repository.get_pipeline_href(pipeline_details)
store_space(meta_props)[source]

Create a space.

Parameters

Important

  1. meta_props: meta data of the space configuration. To see available meta names use:

    >>> client.spaces.ConfigurationMetaNames.get()
    

    type: dict

Output

Important

returns: Metadata of the space created

return type: dict

Example

>>> metadata = {
>>>  client.spaces.ConfigurationMetaNames.NAME: 'my_space'
>>> }
>>> space_details = client.repository.store_space(meta_props=metadata)
>>> space_href = client.repository.get_space_href(space_details)
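
A newly stored space is typically set as the default for subsequent requests (see client.set.default_space below):

>>> space_uid = client.repository.get_space_uid(space_details)
>>> client.set.default_space(space_uid)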
update_experiment(experiment_uid, changes)[source]

Updates existing experiment metadata.

Parameters

Important

  1. experiment_uid: Unique Id of the experiment whose definition should be updated

    type: str

  2. changes: elements which should be changed, where keys are ConfigurationMetaNames

    type: dict

Output

Important

returns: metadata of updated experiment

return type: dict

Example

>>> metadata = {
>>> client.repository.ExperimentMetaNames.NAME:"updated_exp"
>>> }
>>> exp_details = client.repository.update_experiment(experiment_uid, changes=metadata)
update_function(function_uid, changes, update_function=None)[source]

Updates existing function metadata.

Parameters

Important

  1. function_uid: Unique Id of the function to be updated

    type: str

  2. changes: elements which should be changed, where keys are ConfigurationMetaNames

    type: dict

  3. update_function: path to a file with archived function content, or the function which should replace the one stored under function_uid. This parameter is valid only for CP4D 3.0.0.

    type: str or function

Output

Important

returns: metadata of updated function

return type: dict

Example

>>> metadata = {
>>> client.repository.FunctionMetaNames.NAME:"updated_function"
>>> }
>>>
>>> function_details = client.repository.update_function(function_uid, changes=metadata)
update_model(model_uid, updated_meta_props=None, update_model=None)[source]

Updates existing model metadata.

Parameters

Important

  1. model_uid: Unique id of the model whose definition should be updated

    type: str

  2. updated_meta_props: elements which should be changed, where keys are ConfigurationMetaNames

    type: dict

  3. update_model: archived model content file, or path to a directory containing the archived model file, which should replace the model stored under model_uid. This parameter is valid only for CP4D 3.0.0.

    type: object or archived model content file

Output

Important

returns: metadata of updated model

return type: dict

Example 1

>>> metadata = {
>>> client.repository.ModelMetaNames.NAME:"updated_model"
>>> }
>>> model_details = client.repository.update_model(model_uid, updated_meta_props=metadata)

Example 2

>>> metadata = {
>>> client.repository.ModelMetaNames.NAME:"updated_model"
>>> }
>>> model_details = client.repository.update_model(model_uid, updated_meta_props=metadata, update_model="newmodel_content.tar.gz")
update_pipeline(pipeline_uid, changes)[source]

Updates existing pipeline metadata.

Parameters

Important

  1. pipeline_uid: Unique Id of the pipeline whose definition should be updated

    type: str

  2. changes: elements which should be changed, where keys are ConfigurationMetaNames

    type: dict

Output

Important

returns: metadata of updated pipeline

return type: dict

Example

>>> metadata = {
>>> client.repository.PipelineMetanames.NAME:"updated_pipeline"
>>> }
>>> pipeline_details = client.repository.update_pipeline(pipeline_uid, changes=metadata)
update_space(space_uid, changes)[source]

Updates existing space metadata.

Parameters

Important

  1. space_uid: Unique Id of the space whose definition should be updated

    type: str

  2. changes: elements which should be changed, where keys are ConfigurationMetaNames

    type: dict

Output

Important

returns: metadata of updated space

return type: dict

Example

>>> metadata = {
>>> client.repository.SpacesMetaNames.NAME:"updated_space"
>>> }
>>> space_details = client.repository.update_space(space_uid, changes=metadata)
class metanames.ModelMetaNames[source]

Set of MetaNames for models.

Available MetaNames:

MetaName

Type

Required

Example value

Schema

NAME

str

Y

my_model

DESCRIPTION

str

N

my_description

INPUT_DATA_SCHEMA

list

N

{'id': '1', 'type': 'struct', 'fields': [{'name': 'x', 'type': 'double', 'nullable': False, 'metadata': {}}, {'name': 'y', 'type': 'double', 'nullable': False, 'metadata': {}}]}

{'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}

TRAINING_DATA_REFERENCES

list

N

[]

[{'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'endpoint_url(required)': 'string', 'access_key_id(required)': 'string', 'secret_access_key(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}, 'schema(optional)': {'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}}]

OUTPUT_DATA_SCHEMA

dict

N

{'id': '1', 'type': 'struct', 'fields': [{'name': 'x', 'type': 'double', 'nullable': False, 'metadata': {}}, {'name': 'y', 'type': 'double', 'nullable': False, 'metadata': {}}]}

{'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}

LABEL_FIELD

str

N

PRODUCT_LINE

TRANSFORMED_LABEL_FIELD

str

N

PRODUCT_LINE_IX

TAGS

list

N

['string', 'string']

['string', 'string']

SIZE

dict

N

{'in_memory': 0, 'content': 0}

{'in_memory(optional)': 'string', 'content(optional)': 'string'}

SPACE_UID

str

N

53628d69-ced9-4f43-a8cd-9954344039a8

PIPELINE_UID

str

N

53628d69-ced9-4f43-a8cd-9954344039a8

RUNTIME_UID

str

N

53628d69-ced9-4f43-a8cd-9954344039a8

TYPE

str

Y

mllib_2.1

CUSTOM

dict

N

{}

DOMAIN

str

N

Watson Machine Learning

HYPER_PARAMETERS

dict

N

METRICS

list

N

IMPORT

dict

N

{'connection': {'endpoint_url': 'https://s3-api.us-geo.objectstorage.softlayer.net', 'access_key_id': '***', 'secret_access_key': '***'}, 'location': {'bucket': 'train-data', 'path': 'training_path'}, 'type': 's3'}

{'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'endpoint_url(required)': 'string', 'access_key_id(required)': 'string', 'secret_access_key(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}}

TRAINING_LIB_UID

str

N

53628d69-ced9-4f43-a8cd-9954344039a8

MODEL_DEFINITION_UID

str

N

53628d6_cdee13-35d3-s8989343

SOFTWARE_SPEC_UID

str

N

53628d69-ced9-4f43-a8cd-9954344039a8

TF_MODEL_PARAMS

dict

N

{'save_format': 'None', 'signatures': 'struct', 'options': 'None', 'custom_objects': 'string'}

class metanames.ExperimentMetaNames[source]

Set of MetaNames for experiments.

Available MetaNames:

MetaName

Type

Required

Example value

Schema

NAME

str

Y

Hand-written Digit Recognition

DESCRIPTION

str

N

Hand-written Digit Recognition training

TAGS

list

N

[{'value': 'dsx-project.<project-guid>', 'description': 'DSX project guid'}]

[{'value(required)': 'string', 'description(optional)': 'string'}]

EVALUATION_METHOD

str

N

multiclass

EVALUATION_METRICS

list

N

[{'name': 'accuracy', 'maximize': False}]

[{'name(required)': 'string', 'maximize(optional)': 'boolean'}]

TRAINING_REFERENCES

list

Y

[{'pipeline': {'href': '/v4/pipelines/6d758251-bb01-4aa5-a7a3-72339e2ff4d8'}}]

[{'pipeline(optional)': {'href(required)': 'string', 'data_bindings(optional)': [{'data_reference(required)': 'string', 'node_id(required)': 'string'}], 'nodes_parameters(optional)': [{'node_id(required)': 'string', 'parameters(required)': 'dict'}]}, 'training_lib(optional)': {'href(required)': 'string', 'compute(optional)': {'name(required)': 'string', 'nodes(optional)': 'number'}, 'runtime(optional)': {'href(required)': 'string'}, 'command(optional)': 'string', 'parameters(optional)': 'dict'}}]

SPACE_UID

str

N

3c1ce536-20dc-426e-aac7-7284cf3befc6

LABEL_COLUMN

str

N

label

CUSTOM

dict

N

{'field1': 'value1'}

class metanames.FunctionMetaNames[source]

Set of MetaNames for AI functions.

Available MetaNames:

MetaName

Type

Required

Example value

Schema

NAME

str

Y

ai_function

DESCRIPTION

str

N

This is ai function

RUNTIME_UID

str

N

53628d69-ced9-4f43-a8cd-9954344039a8

SOFTWARE_SPEC_UID

str

N

53628d69-ced9-4f43-a8cd-9954344039a8

INPUT_DATA_SCHEMAS

list

N

[{'id': '1', 'type': 'struct', 'fields': [{'name': 'x', 'type': 'double', 'nullable': False, 'metadata': {}}, {'name': 'y', 'type': 'double', 'nullable': False, 'metadata': {}}]}]

[{'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}]

OUTPUT_DATA_SCHEMAS

list

N

[{'id': '1', 'type': 'struct', 'fields': [{'name': 'multiplication', 'type': 'double', 'nullable': False, 'metadata': {}}]}]

[{'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}]

TAGS

list

N

[{'value': 'ProjectA', 'description': 'Functions created for ProjectA'}]

[{'value(required)': 'string', 'description(optional)': 'string'}]

TYPE

str

N

python

CUSTOM

dict

N

{}

SAMPLE_SCORING_INPUT

list

N

{'input_data': [{'fields': ['name', 'age', 'occupation'], 'values': [['john', 23, 'student'], ['paul', 33, 'engineer']]}]}

{'id(optional)': 'string', 'fields(optional)': 'array', 'values(optional)': 'array'}

SPACE_UID

str

N

3628d69-ced9-4f43-a8cd-9954344039a8

class metanames.PipelineMetanames[source]

Set of MetaNames for pipelines.

Available MetaNames:

MetaName

Type

Required

Example value

Schema

NAME

str

Y

Hand-written Digit Recognition

DESCRIPTION

str

N

Hand-written Digit Recognition training

SPACE_UID

str

N

3c1ce536-20dc-426e-aac7-7284cf3befc6

TAGS

list

N

[{'value': 'dsx-project.<project-guid>', 'description': 'DSX project guid'}]

[{'value(required)': 'string', 'description(optional)': 'string'}]

DOCUMENT

dict

N

{'doc_type': 'pipeline', 'version': '2.0', 'primary_pipeline': 'dlaas_only', 'pipelines': [{'id': 'dlaas_only', 'runtime_ref': 'hybrid', 'nodes': [{'id': 'training', 'type': 'model_node', 'op': 'dl_train', 'runtime_ref': 'DL', 'inputs': [], 'outputs': [], 'parameters': {'name': 'tf-mnist', 'description': 'Simple MNIST model implemented in TF', 'command': 'python3 convolutional_network.py --trainImagesFile ${DATA_DIR}/train-images-idx3-ubyte.gz --trainLabelsFile ${DATA_DIR}/train-labels-idx1-ubyte.gz --testImagesFile ${DATA_DIR}/t10k-images-idx3-ubyte.gz --testLabelsFile ${DATA_DIR}/t10k-labels-idx1-ubyte.gz --learningRate 0.001 --trainingIters 6000', 'compute': {'name': 'k80', 'nodes': 1}, 'training_lib_href': '/v4/libraries/64758251-bt01-4aa5-a7ay-72639e2ff4d2/content'}, 'target_bucket': 'wml-dev-results'}]}]}

{'doc_type(required)': 'string', 'version(required)': 'string', 'primary_pipeline(required)': 'string', 'pipelines(required)': [{'id(required)': 'string', 'runtime_ref(required)': 'string', 'nodes(required)': [{'id': 'string', 'type': 'string', 'inputs': 'list', 'outputs': 'list', 'parameters': {'training_lib_href': 'string'}}]}]}

CUSTOM

dict

N

{'field1': 'value1'}

IMPORT

dict

N

{'connection': {'endpoint_url': 'https://s3-api.us-geo.objectstorage.softlayer.net', 'access_key_id': '***', 'secret_access_key': '***'}, 'location': {'bucket': 'train-data', 'path': 'training_path'}, 'type': 's3'}

{'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'endpoint_url(required)': 'string', 'access_key_id(required)': 'string', 'secret_access_key(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}}

RUNTIMES

list

N

[{'id': 'id', 'name': 'tensorflow', 'version': '1.13-py3'}]

COMMAND

str

N

convolutional_network.py --trainImagesFile train-images-idx3-ubyte.gz --trainLabelsFile train-labels-idx1-ubyte.gz --testImagesFile t10k-images-idx3-ubyte.gz --testLabelsFile t10k-labels-idx1-ubyte.gz --learningRate 0.001 --trainingIters 6000

LIBRARY_UID

str

N

fb9752c9-301a-415d-814f-cf658d7b856a

COMPUTE

dict

N

{'name': 'k80', 'nodes': 1}

script

class client.Script(client)[source]

Store and manage your scripts assets.

ConfigurationMetaNames = <ibm_watson_machine_learning.metanames.ScriptMetaNames object>

MetaNames for script Assets creation.

create_revision(script_uid)[source]

Creates a revision for the given script. Revisions are immutable once created. The metadata and attachment of the script at script_uid are taken and a revision is created from them.

Parameters

script_uid ({str_type}) – Script ID. Mandatory

Returns

stored script revisions metadata

Return type

dict

>>> script_revision = client.scripts.create_revision(script_uid)
delete(asset_uid)[source]

Delete a stored script asset.

Parameters

Important

  1. asset_uid: Unique Id of script asset

    type: str

Output

Important

returns: status (“SUCCESS” or “FAILED”)

return type: str

Example

>>> client.script.delete(asset_uid)
download(asset_uid, filename, rev_uid=None)[source]

Download the content of a script asset.

Parameters

Important

  1. asset_uid: The Unique Id of the script asset to be downloaded

    type: str

  2. filename: filename to be used for the downloaded file

    type: str

  3. rev_uid: Revision id

    type: str

Output

returns: Path to the downloaded asset content

return type: str

Example

>>> client.script.download(asset_uid,"script_file.zip")
get_details(script_uid)[source]

Get script asset details.

Parameters

Important

  1. script_uid: Unique id of script

    type: str

Output

Important

returns: Metadata of the stored script asset

return type: dict

Example

>>> script_details = client.scripts.get_details(script_uid)
static get_href(asset_details)[source]

Get url of stored scripts asset.

Parameters

Important

  1. asset_details: stored script details

    type: dict

Output

Important

returns: href of stored script asset

return type: str

Example

>>> asset_details = client.script.get_details(asset_uid)
>>> asset_href = client.script.get_href(asset_details)
static get_id(asset_details)[source]

Get Unique Id of stored script asset.

Parameters

Important

  1. asset_details: Metadata of the stored script asset

    type: dict

Output

Important

returns: Unique Id of stored script asset

return type: str

Example

>>> asset_uid = client.script.get_id(asset_details)
get_revision_details(script_uid=None, rev_uid=None)[source]

Get metadata of a script revision.

Parameters
  • script_uid ({str_type}) – Script ID. Mandatory

  • rev_uid (int) – Revision ID. If this parameter is not provided, the latest revision is returned if one exists; otherwise an error is raised

Returns

stored script(s) metadata

Return type

dict

A way you might use me is:

>>> script_details = client.scripts.get_revision_details(script_uid, rev_uid)
static get_uid(asset_details)[source]

Get Unique Id of stored script asset. This method is deprecated. Use ‘get_id(asset_details)’ instead

Parameters

Important

  1. asset_details: Metadata of the stored script asset

    type: dict

Output

Important

returns: Unique Id of stored script asset

return type: str

Example

>>> asset_uid = client.script.get_uid(asset_details)
list(limit=None)[source]

List stored scripts. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all scripts in a table format.

return type: None

Example

>>> client.script.list()
list_revisions(script_uid, limit=None)[source]

List all revisions for the given script uid.

Parameters
  • script_uid ({str_type}) – Stored script ID.

  • limit (int) – limit number of fetched records (optional)

Returns

stored script revisions details

Return type

table

>>> client.scripts.list_revisions(script_uid)
store(meta_props, file_path)[source]

Creates a Scripts asset and uploads content to it.

Parameters

Important

  1. meta_props: metadata of the Scripts asset

    type: dict

  2. file_path: Path to the content file to be uploaded

    type: str

Output

Important

returns: metadata of the stored Scripts asset

return type: dict

Example
>>> metadata = {
>>>        client.script.ConfigurationMetaNames.NAME: 'my first script',
>>>        client.script.ConfigurationMetaNames.DESCRIPTION: 'description of the script',
>>>        client.script.ConfigurationMetaNames.SOFTWARE_SPEC_UID: '0cdb0f1e-5376-4f4d-92dd-da3b69aa9bda'
>>>    }
>>>
>>> asset_details = client.scripts.store(meta_props=metadata,file_path="/path/to/file")
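
The identifier of the stored asset can then be read back from the returned details, for example to create a revision of the script:

>>> script_uid = client.scripts.get_id(asset_details)
>>> script_revision = client.scripts.create_revision(script_uid)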
update(script_uid, meta_props=None, file_path=None)[source]

Update script with either metadata or attachment or both.

Parameters

script_uid (str) – Script UID

Returns

updated metadata of script

Return type

dict

A way you might use me is:

>>> script_details = client.script.update(script_uid, meta, content_path)
class metanames.ScriptMetaNames[source]

Set of MetaNames for Script Specifications.

Available MetaNames:

MetaName

Type

Required

Example value

NAME

str

Y

Python script

DESCRIPTION

str

N

my_description

SOFTWARE_SPEC_UID

str

Y

53628d69-ced9-4f43-a8cd-9954344039a8

set

class client.Set(client)[source]

Set a space_id/project_id to be used in the subsequent actions.

default_project(project_id)[source]

Set a project ID.

Parameters

Important

  1. project_id: GUID of the project

    type: str

Output

Important

returns: “SUCCESS”

return type: str

Example

>>>  client.set.default_project(project_id)
default_space(space_uid)[source]

Set a space ID.

Parameters

Important

  1. space_uid: GUID of the space to be used:

    type: str

Output

Important

returns: status (“SUCCESS”/”FAILURE”). The space that is set here is used for subsequent requests.

return type: str

Example

>>>  client.set.default_space(space_uid)

software_specifications

class client.SwSpec(client)[source]

Store and manage your software specs.

ConfigurationMetaNames = <ibm_watson_machine_learning.metanames.SwSpecMetaNames object>

MetaNames for Software Specification creation.

add_package_extension(sw_spec_uid, pkg_extn_id)[source]

Add a package extension to software specifications existing metadata.

Parameters

Important

  1. sw_spec_uid: Unique Id of software specification which should be updated

    type: str

  2. pkg_extn_id: Unique Id of the package extension which needs to be added to the software specification

    type: str

Example

>>> client.software_specifications.add_package_extension(sw_spec_uid, pkg_extn_id)
delete(sw_spec_uid)[source]

Delete a software specification.

Parameters

Important

  1. sw_spec_uid: Unique Id of software specification

    type: str

Output

Important

returns: status (“SUCCESS” or “FAILED”)

return type: str

Example

>>> client.software_specifications.delete(sw_spec_uid)
delete_package_extension(sw_spec_uid, pkg_extn_id)[source]

Delete a package extension from software specifications existing metadata.

Parameters

Important

  1. sw_spec_uid: Unique Id of software specification which should be updated

    type: str

  2. pkg_extn_id: Unique Id of the package extension which needs to be deleted from the software specification

    type: str

Example

>>> client.software_specifications.delete_package_extension(sw_spec_uid, pkg_extn_id)

get_details(sw_spec_uid=None)[source]

Get software specification details.

Parameters

Important

  1. sw_spec_uid: Unique Id of the software specification (optional)

    type: str

Output

Important

returns: metadata of the software specification(s)

return type: dict

Example

>>> sw_spec_details = client.software_specifications.get_details(sw_spec_uid)
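
To fetch details of all software specifications, omit the identifier (sw_spec_uid defaults to None):

>>> sw_specs_details = client.software_specifications.get_details()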
static get_href(sw_spec_details)[source]

Get url of software specification.

Parameters

Important

  1. sw_spec_details: software specification details

    type: dict

Output

Important

returns: href of software specification

return type: str

Example

>>> sw_spec_details = client.software_specifications.get_details(sw_spec_uid)
>>> sw_spec_href = client.software_specifications.get_href(sw_spec_details)
static get_id(sw_spec_details)[source]

Get Unique Id of software specification.

Parameters

Important

  1. sw_spec_details: Metadata of the software specification

    type: dict

Output

Important

returns: Unique Id of software specification

return type: str

Example

>>> asset_uid = client.software_specifications.get_id(sw_spec_details)
get_id_by_name(sw_spec_name)[source]

Get Unique Id of software specification.

Parameters

Important

  1. sw_spec_name: Name of the software specification

    type: str

Output

Important

returns: Unique Id of software specification

return type: str

Example

>>> asset_uid = client.software_specifications.get_id_by_name(sw_spec_name)
static get_uid(sw_spec_details)[source]

Get Unique Id of software specification. Deprecated!! Use get_id(sw_spec_details) instead

Parameters

Important

  1. sw_spec_details: Metadata of the software specification

    type: dict

Output

Important

returns: Unique Id of software specification

return type: str

Example

>>> asset_uid = client.software_specifications.get_uid(sw_spec_details)
get_uid_by_name(sw_spec_name)[source]

Get Unique Id of software specification. Deprecated!! Use get_id_by_name(sw_spec_name) instead

Parameters

Important

  1. sw_spec_name: Name of the software specification

    type: str

Output

Important

returns: Unique Id of software specification

return type: str

Example

>>> asset_uid = client.software_specifications.get_uid_by_name(sw_spec_name)
list()[source]

List software specifications.

Output

Important

This method only prints the list of all software specifications in a table format.

return type: None

Example

>>> client.software_specifications.list()
store(meta_props)[source]

Create a software specification.

Parameters

Important

  1. meta_props: meta data of the software specification configuration. To see available meta names use:

    >>> client.software_specifications.ConfigurationMetaNames.get()
    

    type: dict

Output

Important

returns: metadata of the stored software specification

return type: dict

Example

>>> meta_props = {
>>>    client.software_specifications.ConfigurationMetaNames.NAME: "skl_pipeline_heart_problem_prediction",
>>>    client.software_specifications.ConfigurationMetaNames.DESCRIPTION: "description scikit-learn_0.20",
>>>    client.software_specifications.ConfigurationMetaNames.PACKAGE_EXTENSIONS_UID: [],
>>>    client.software_specifications.ConfigurationMetaNames.SOFTWARE_CONFIGURATIONS: {},
>>>    client.software_specifications.ConfigurationMetaNames.BASE_SOFTWARE_SPECIFICATION_ID: "guid"
>>> }
>>> sw_spec_details = client.software_specifications.store(meta_props)
class metanames.SwSpecMetaNames[source]

Set of MetaNames for Software Specifications.

Available MetaNames:

MetaName

Type

Required

Example value

Schema

NAME

str

Y

Python 3.6 with pre-installed ML package

DESCRIPTION

str

N

my_description

PACKAGE_EXTENSIONS

list

N

[{'guid': 'value'}]

SOFTWARE_CONFIGURATION

dict

N

{'platform': {'name': 'python', 'version': '3.6'}}

{'platform(required)': 'string'}

BASE_SOFTWARE_SPECIFICATION

dict

Y

{'guid': 'BASE_SOFTWARE_SPECIFICATION_ID'}

training

class client.Training(client)[source]

Train new models.

cancel(training_uid, hard_delete=False)[source]

Cancel a training which is currently running and remove it. This method can also be used to delete metadata details of a completed or canceled training run when the hard_delete parameter is set to True.

Parameters

Important

  1. training_uid: Training UID

    type: str

  2. hard_delete: specify True or False.

    True - To delete the completed or canceled training runs. False - To cancel the currently running training run. Default value is False.

    type: Boolean

Output

Important

returns: status (“SUCCESS” or “FAILED”)

return type: str

Example

>>> client.training.cancel(training_uid)
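
To remove the metadata of a completed or canceled run instead of canceling a running one, set hard_delete to True:

>>> client.training.cancel(training_uid, hard_delete=True)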
get_details(training_uid=None, limit=None)[source]

Get metadata of training(s). If training_uid is not specified, metadata of all trainings is returned.

Parameters

Important

  1. training_uid: Unique Id of Training (optional)

    type: str

  2. limit: limit number of fetched records (optional)

    type: int

Output

Important

returns: metadata of training(s)

return type: dict (if training_uid is not None) or {“resources”: [dict]} (if training_uid is None)

Note

If training_uid is not specified, all trainings metadata is fetched

Example

>>> training_run_details = client.training.get_details(training_uid)
>>> training_runs_details = client.training.get_details()
static get_href(training_details)[source]

Get training_href from training details.

Parameters

Important

  1. training_details: Metadata of the training created

    type: dict

Output

Important

returns: training href

return type: str

Example

>>> training_details = client.training.get_details(training_uid)
>>> run_url = client.training.get_href(training_details)
static get_id(training_details)[source]

Get training_id from training details.

Parameters

Important

  1. training_details: Metadata of the training created

    type: dict

Output

Important

returns: Unique id of training

return type: str

Example

>>> training_details = client.training.get_details(training_id)
>>> training_id = client.training.get_id(training_details)
get_metrics(training_uid)[source]

Get metrics.

Parameters

Important

  1. training_uid: training UID

    type: str

Output

Important

returns: Metrics of a training run

return type: list of dict

Example

>>> metrics = client.training.get_metrics(training_uid)
get_status(training_uid)[source]

Get the status of a training created.

Parameters

Important

  1. training_uid: training UID

    type: str

Output

Important

returns: training_status

return type: dict

Example

>>> training_status = client.training.get_status(training_uid)
static get_uid(training_details)[source]

Get training_uid from training details.

Parameters

Important

  1. training_details: Metadata of the training created

    type: dict

Output

Important

returns: Unique id of training

return type: str

Example

>>> training_details = client.training.get_details(training_uid)
>>> training_uid = client.training.get_uid(training_details)
list(limit=None)[source]

List stored trainings. If limit is set to None, only the first 50 records are shown.

Parameters

Important

  1. limit: limit number of fetched records

    type: int

Output

Important

This method only prints the list of all trainings in a table format.

return type: None

Example

>>> client.training.list()
list_intermediate_models(training_uid)[source]

List the intermediate_models.

Parameters

Important

  1. training_uid: Training GUID

    type: str

Output

Important

This method only prints the list of all intermediate_models associated with an AutoAI training in a table format.

return type: None

Note

This method is not supported for IBM Cloud Pak® for Data.

Example

>>> client.training.list_intermediate_models(training_uid)
list_subtrainings(training_uid)[source]

List the sub-trainings.

Parameters

Important

  1. training_uid: Training GUID

    type: str

Output

Important

This method only prints the list of all sub-trainings associated with a training in a table format.

return type: None

Example

>>> client.training.list_subtrainings(training_uid)
monitor_logs(training_uid)[source]

Monitor the logs of a training created.

Parameters

Important

  1. training_uid: Training UID

    type: str

Output

Important

returns: None

return type: None

Note

This method prints the training logs. This method is not supported for IBM Cloud Pak® for Data.

Example

>>> client.training.monitor_logs(training_uid)
monitor_metrics(training_uid)[source]

Monitor the metrics of a training created.

Parameters

Important

  1. training_uid: Training UID

    type: str

Output

Important

returns: None

return type: None

Note

This method prints the training metrics. This method is not supported for IBM Cloud Pak® for Data.

Example

>>> client.training.monitor_metrics(training_uid)
run(meta_props, asynchronous=True)[source]

Create a new Machine Learning training.

Parameters

Important

  1. meta_props: meta data of the training configuration. To see available meta names use:

    >>> client.training.ConfigurationMetaNames.show()
    

    type: dict

  2. asynchronous:
    • True - training job is submitted and progress can be checked later.

    • False - method will wait till job completion and print training stats.

    type: bool

Output

Important

returns: Metadata of the training created

return type: dict

Examples

Example meta_props for Training run creation in IBM Cloud Pak® for Data version 3.0.1 or above:

>>> metadata = {
>>>     client.training.ConfigurationMetaNames.NAME: 'Hand-written Digit Recognition',
>>>     client.training.ConfigurationMetaNames.DESCRIPTION: 'Hand-written Digit Recognition Training',
>>>     client.training.ConfigurationMetaNames.PIPELINE: {
>>>         'id': '4cedab6d-e8e4-4214-b81a-2ddb122db2ab',
>>>         'rev': '12',
>>>         'model_type': 'string',
>>>         'data_bindings': [
>>>             {
>>>                 'data_reference_name': 'string',
>>>                 'node_id': 'string'
>>>             }
>>>         ],
>>>         'nodes_parameters': [
>>>             {
>>>                 'node_id': 'string',
>>>                 'parameters': {}
>>>             }
>>>         ],
>>>         'hardware_spec': {
>>>             'id': '4cedab6d-e8e4-4214-b81a-2ddb122db2ab',
>>>             'rev': '12',
>>>             'name': 'string',
>>>             'num_nodes': '2'
>>>         }
>>>     },
>>>     client.training.ConfigurationMetaNames.TRAINING_DATA_REFERENCES: [{
>>>         'type': 's3',
>>>         'connection': {},
>>>         'location': {
>>>             'href': 'v2/assets/asset1233456'
>>>         },
>>>         'schema': '{"id": "t1", "name": "Tasks", "fields": [{"name": "duration", "type": "number"}]}'
>>>     }],
>>>     client.training.ConfigurationMetaNames.TRAINING_RESULTS_REFERENCE: {
>>>         'id': 'string',
>>>         'connection': {
>>>             'endpoint_url': 'https://s3-api.us-geo.objectstorage.service.networklayer.com',
>>>             'access_key_id': '***',
>>>             'secret_access_key': '***'
>>>         },
>>>         'location': {
>>>             'bucket': 'wml-dev-results',
>>>             'path': 'path'
>>>         },
>>>         'type': 's3'
>>>     }
>>> }

NOTE: Only one of the following values can be provided for a training run:

client.training.ConfigurationMetaNames.EXPERIMENT, client.training.ConfigurationMetaNames.PIPELINE, or client.training.ConfigurationMetaNames.MODEL_DEFINITION

Example meta_prop values for training run creation in other versions:

>>> metadata = {
>>>  client.training.ConfigurationMetaNames.NAME: 'Hand-written Digit Recognition',
>>>  client.training.ConfigurationMetaNames.TRAINING_DATA_REFERENCES: [{
>>>          'connection': {
>>>              'endpoint_url': 'https://s3-api.us-geo.objectstorage.service.networklayer.com',
>>>              'access_key_id': '***',
>>>              'secret_access_key': '***'
>>>          },
>>>          'source': {
>>>              'bucket': 'wml-dev',
>>>          },
>>>          'type': 's3'
>>>      }],
>>> client.training.ConfigurationMetaNames.TRAINING_RESULTS_REFERENCE: {
>>>          'connection': {
>>>              'endpoint_url': 'https://s3-api.us-geo.objectstorage.service.networklayer.com',
>>>              'access_key_id': '***',
>>>              'secret_access_key': '***'
>>>          },
>>>          'target': {
>>>              'bucket': 'wml-dev-results',
>>>          },
>>>          'type': 's3'
>>>      },
>>> client.training.ConfigurationMetaNames.PIPELINE_UID : "/v4/pipelines/<PIPELINE-ID>"
>>> }
>>> training_details = client.training.run(meta_props=metadata)
>>> training_uid = client.training.get_uid(training_details)
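
When a run is submitted with asynchronous=True, its progress can be polled with get_status; a minimal sketch, assuming the status dict exposes a 'state' field:

>>> import time
>>> state = client.training.get_status(training_uid).get('state')
>>> while state not in ['completed', 'failed', 'canceled']:
>>>     time.sleep(30)
>>>     state = client.training.get_status(training_uid).get('state')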
class metanames.TrainingConfigurationMetaNames[source]

Set of MetaNames for trainings.

Available MetaNames:

Each entry below gives the MetaName, its type, whether it is required, an example value and, where defined, the expected schema.

TRAINING_DATA_REFERENCES (list, required)
Example value: [{'connection': {'endpoint_url': 'https://s3-api.us-geo.objectstorage.softlayer.net', 'access_key_id': '***', 'secret_access_key': '***'}, 'location': {'bucket': 'train-data', 'path': 'training_path'}, 'type': 's3', 'schema': {'id': '1', 'fields': [{'name': 'x', 'type': 'double', 'nullable': 'False'}]}}]
Schema: [{'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'endpoint_url(required)': 'string', 'access_key_id(required)': 'string', 'secret_access_key(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}, 'schema(optional)': {'id(required)': 'string', 'fields(required)': [{'name(required)': 'string', 'type(required)': 'string', 'nullable(optional)': 'string'}]}}]

TRAINING_RESULTS_REFERENCE (dict, required)
Example value: {'connection': {'endpoint_url': 'https://s3-api.us-geo.objectstorage.softlayer.net', 'access_key_id': '***', 'secret_access_key': '***'}, 'location': {'bucket': 'test-results', 'path': 'training_path'}, 'type': 's3'}
Schema: {'name(optional)': 'string', 'type(required)': 'string', 'connection(required)': {'endpoint_url(required)': 'string', 'access_key_id(required)': 'string', 'secret_access_key(required)': 'string'}, 'location(required)': {'bucket': 'string', 'path': 'string'}}

TAGS (list, optional)
Example value: [{'value': 'string', 'description': 'string'}]
Schema: [{'value(required)': 'string', 'description(optional)': 'string'}]

PIPELINE_UID (str, optional)
Example value: 3c1ce536-20dc-426e-aac7-7284cf3befc6

EXPERIMENT_UID (str, optional)
Example value: 3c1ce536-20dc-426e-aac7-7284cf3befc6

PIPELINE_DATA_BINDINGS (str, optional)
Example value: [{'data_reference_name': 'string', 'node_id': 'string'}]
Schema: [{'data_reference_name(required)': 'string', 'node_id(required)': 'string'}]

PIPELINE_NODE_PARAMETERS (dict, optional)
Example value: [{'node_id': 'string', 'parameters': {}}]
Schema: [{'node_id(required)': 'string', 'parameters(required)': 'dict'}]

SPACE_UID (str, optional)
Example value: 3c1ce536-20dc-426e-aac7-7284cf3befc6

TRAINING_LIB (dict, optional)
Example value: {'href': '/v4/libraries/3c1ce536-20dc-426e-aac7-7284cf3befc6', 'compute': {'name': 'k80', 'nodes': 0}, 'runtime': {'href': '/v4/runtimes/3c1ce536-20dc-426e-aac7-7284cf3befc6'}, 'command': 'python3 convolutional_network.py', 'parameters': {}}
Schema: {'href(required)': 'string', 'type(required)': 'string', 'runtime(optional)': {'href': 'string'}, 'command(optional)': 'string', 'parameters(optional)': 'dict'}

TRAINING_LIB_UID (str, optional)
Example value: 3c1ce536-20dc-426e-aac7-7284cf3befc6

TRAINING_LIB_MODEL_TYPE (str, optional)
Example value: 3c1ce536-20dc-426e-aac7-7284cf3befc6

TRAINING_LIB_RUNTIME_UID (str, optional)
Example value: 3c1ce536-20dc-426e-aac7-7284cf3befc6

TRAINING_LIB_PARAMETERS (dict, optional)
Example value: 3c1ce536-20dc-426e-aac7-7284cf3befc6

COMMAND (str, optional)
Example value: 3c1ce536-20dc-426e-aac7-7284cf3befc6

COMPUTE (dict, optional)
Example value: 3c1ce536-20dc-426e-aac7-7284cf3befc6

PIPELINE_MODEL_TYPE (str, optional)
Example value: tensorflow_1.1.3-py3
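
To discover these MetaNames programmatically rather than from this table, you can query the metanames object itself; a minimal sketch, assuming the get() and show() helpers that metanames classes in this client expose and an already-authenticated client:

>>> client.training.ConfigurationMetaNames.get()   # list of supported MetaName strings
>>> client.training.ConfigurationMetaNames.show()  # prints the table of MetaNames with examples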

AutoAI (BETA, IBM Cloud only)

This version of the ibm-watson-machine-learning client introduces support for AutoAI experiments. Note that AutoAI SDK functionality is currently available only as a beta on IBM Cloud.

Working with DataConnection BETA

DataConnection is the base class for working with the data storage that the AutoAI backend uses to fetch training data and store results. There are several ways to use a DataConnection object; the following is one basic scenario.

To start an AutoAI experiment, first you must specify where your training dataset is located. Currently, WML AutoAI supports Cloud Object Storage (COS) and data assets on Cloud.

Cloud DataConnection Initialization

To upload your experiment dataset, you must initialize DataConnection with your COS credentials.

from ibm_watson_machine_learning.autoai.helpers.connections import S3Connection, S3Location, DataConnection

# note: this DataConnection will be used as a reference where to find your training dataset
training_data_connection = DataConnection(
        connection=S3Connection(endpoint_url='url of the COS endpoint',
                                access_key_id='COS access key id',
                                secret_access_key='COS secret access key'
                                ),
        location=S3Location(bucket='bucket_name',   # note: COS bucket name where training dataset is located
                            path='my_path'  # note: path within bucket where your training dataset is located
                            )
    )

# note: this DataConnection will be used as a reference where to save all of the AutoAI experiment results
results_connection = DataConnection(
        connection=S3Connection(endpoint_url='url of the COS endpoint',
                                access_key_id='COS access key id',
                                secret_access_key='COS secret access key'
                                ),
        # note: bucket name and path could be different or the same as specified in the training_data_connection
        location=S3Location(bucket='bucket_name',
                            path='my_path'
                            )
    )

Upload your training dataset

An AutoAI experiment must be able to access your training data. If your training dataset is not stored yet, you can upload it by invoking the write() method of DataConnection.

training_data_connection.write(data='local_path_to_the_dataset', remote_name='training_dataset.csv')
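
If your dataset is already loaded in memory, write() can be fed the data directly; a sketch, assuming the data argument also accepts a pandas.DataFrame in addition to a local file path:

import pandas as pd

# a sketch: upload an in-memory DataFrame instead of a local file path
df = pd.read_csv('local_path_to_the_dataset')
training_data_connection.write(data=df, remote_name='training_dataset.csv')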

Download your training dataset

To download a stored dataset, use the read() method of DataConnection.

dataset = training_data_connection.read()   # note: returns a pandas DataFrame
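
The returned DataFrame can then be split locally into features and the target column; a minimal sketch, assuming the label column is named 'y' (as in the optimizer examples below):

X = dataset.drop(['y'], axis=1)   # feature columns
y = dataset['y']                  # target column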

Working with AutoAI class and optimizer (BETA)

The AutoAI experiment class is responsible for creating experiments and scheduling training. All experiment results are stored automatically in the user-specified Cloud Object Storage (COS). Then, the AutoAI SDK can fetch the results and provide them directly to the user for further usage.

To initialize an AutoAI object, provide WML credentials (with apikey and url) and one of project_id or space_id.

from ibm_watson_machine_learning.experiment import AutoAI

experiment = AutoAI(wml_credentials,
    space_id='76g53e0-0b32-4a0e-9152-3d50324855ddb')

remote_auto_pipelines = experiment.optimizer(
            name='test name',
            desc='test description',
            prediction_type=AutoAI.PredictionType.BINARY,
            prediction_column='y',
            scoring=AutoAI.Metrics.ACCURACY_SCORE,
            test_size=0.1,
            max_num_daub_ensembles=1,
            train_sample_rows_test_size=1.,
            daub_include_only_estimators = [
                 AutoAI.ClassificationAlgorithms.XGB,
                 AutoAI.ClassificationAlgorithms.LGBM
                 ],
            cognito_transform_names = [
                 AutoAI.Transformers.SUM,
                 AutoAI.Transformers.MAX
                 ]
        )

Get configuration parameters

To see what configuration parameters are used, call the get_params() method.

config_parameters = remote_auto_pipelines.get_params()
print(config_parameters)
{
    'name': 'test name',
    'desc': 'test description',
    'prediction_type': 'classification',
    'prediction_column': 'y',
    'scoring': 'accuracy',
    'test_size': 0.1,
    'max_num_daub_ensembles': 1
}

Fit AutoAI experiment

To schedule an AutoAI experiment, call the fit() method. This triggers training and an optimization process on WML. The fit() method can run synchronously (background_mode=False) or asynchronously (background_mode=True). If you do not want to wait for the fit to finish, invoke the async version, which immediately returns only the fit/run details. Otherwise, in the sync version, you will see a progress bar with information about the learning/optimization process.

fit_details = remote_auto_pipelines.fit(
        training_data_reference=[training_data_connection],
        training_results_reference=results_connection,
        background_mode=True)

# OR

remote_auto_pipelines = remote_auto_pipelines.fit(
        training_data_reference=[training_data_connection],
        training_results_reference=results_connection,
        background_mode=False)

Get run status, get run details

If you chose the asynchronous option in fit(), you can monitor the run details and status using the following two methods:

status = remote_auto_pipelines.get_run_status()
print(status)
'running'

# OR

'completed'

run_details = remote_auto_pipelines.get_run_details()
print(run_details)
{'entity': {'pipeline': {'href': '/v4/pipelines/5bfeb4c5-90df-48b8-9e03-ba232d8c0838'},
        'results_reference': {'connection': {'access_key_id': '...',
                                             'endpoint_url': '...',
                                             'secret_access_key': '...'},
                              'location': {'bucket': '...',
                                           'logs': '53c8cb7b-c8b5-44aa-8b52-6fde3c588462',
                                           'model': '53c8cb7b-c8b5-44aa-8b52-6fde3c588462/model',
                                           'path': '.',
                                           'pipeline': './33825fa2-5fca-471a-ab1a-c84820b3e34e/pipeline.json',
                                           'training': './33825fa2-5fca-471a-ab1a-c84820b3e34e',
                                           'training_status': './33825fa2-5fca-471a-ab1a-c84820b3e34e/training-status.json'},
                              'type': 's3'},
        'space': {'href': '/v4/spaces/71ab11ea-bb77-4ae6-b98a-a77f30ade09d'},
        'status': {'completed_at': '2020-02-17T10:46:32.962Z',
                   'message': {'level': 'info',
                               'text': 'Training job '
                                       '33825fa2-5fca-471a-ab1a-c84820b3e34e '
                                       'completed'},
                   'state': 'completed'},
        'training_data_references': [{'connection': {'access_key_id': '...',
                                                     'endpoint_url': '...',
                                                     'secret_access_key': '...'},
                                      'location': {'bucket': '...',
                                                   'path': '...'},
                                      'type': 's3'}]},
 'metadata': {'created_at': '2020-02-17T10:44:22.532Z',
              'guid': '33825fa2-5fca-471a-ab1a-c84820b3e34e',
              'href': '/v4/trainings/33825fa2-5fca-471a-ab1a-c84820b3e34e',
              'id': '33825fa2-5fca-471a-ab1a-c84820b3e34e',
              'modified_at': '2020-02-17T10:46:32.987Z'}}
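
For asynchronous runs, the two calls combine naturally into a polling loop; a sketch, assuming the terminal states are 'completed', 'failed' and 'canceled' (only 'running' and 'completed' appear in the output above):

import time

# poll the AutoAI run until it reaches a terminal state (sketch)
while remote_auto_pipelines.get_run_status() not in ('completed', 'failed', 'canceled'):
    time.sleep(30)
print(remote_auto_pipelines.get_run_details())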

Get data connections

data_connections = remote_auto_pipelines.get_data_connections()
# note: data_connections is a list with all training_connections that you referenced during fit() call
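
Each returned DataConnection is populated with the run context, so it can be used, for example, to re-read the referenced training data; a sketch:

# read the training dataset referenced during fit() back as a pandas DataFrame (sketch)
train_df = data_connections[0].read()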

Summary

You can get a ranking of all computed pipeline models, sorted by the scoring metric supplied at the beginning. The output is a pandas.DataFrame with pipeline names, computation timestamps, machine learning metrics, and the number of enhancements implemented in each pipeline.

results = remote_auto_pipelines.summary()
print(results)
               Number of enhancements  ...  training_f1
Pipeline Name                          ...
Pipeline_4                          3  ...     0.555556
Pipeline_3                          2  ...     0.554978
Pipeline_2                          1  ...     0.503175
Pipeline_1                          0  ...     0.529928
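
A sketch of picking a pipeline name for the calls below, assuming the ranking's first row holds the best pipeline:

# the summary DataFrame is indexed by pipeline name, best pipeline first
best_pipeline_name = results.index[0]   # e.g. 'Pipeline_4'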

Get pipeline details

To see the pipeline composition steps and nodes, use get_pipeline_details(). An empty pipeline_name returns the details of the best computed pipeline.

pipeline_params = remote_auto_pipelines.get_pipeline_details(pipeline_name='Pipeline_1')
print(pipeline_params)
{
    'composition_steps': [
        'TrainingDataset_full_199_16', 'Split_TrainingHoldout',
        'TrainingDataset_full_179_16', 'Preprocessor_default', 'DAUB'
        ],
    'pipeline_nodes': [
        'PreprocessingTransformer', 'LogisticRegressionEstimator'
        ]
}

Get pipeline

Use this method to load a specific pipeline. By default, get_pipeline() returns a Lale (https://github.com/ibm/lale) pipeline.

pipeline = remote_auto_pipelines.get_pipeline(pipeline_name='Pipeline_4')
print(type(pipeline))
<class 'lale.operators.TrainablePipeline'>

There is an additional option to load a pipeline as a scikit-learn pipeline model type.

sklearn_pipeline = remote_auto_pipelines.get_pipeline(pipeline_name='Pipeline_4', astype=AutoAI.PipelineTypes.SKLEARN)
print(type(sklearn_pipeline))
<class 'sklearn.pipeline.Pipeline'>
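
A pipeline fetched as scikit-learn can be scored locally like any other estimator; a sketch, assuming test_X_df is a pandas.DataFrame with the same feature columns as the training data:

# local scoring with the downloaded scikit-learn pipeline (sketch)
predictions = sklearn_pipeline.predict(test_X_df.values)
print(predictions)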

Working with deployments (BETA)

The following classes enable you to work with Watson Machine Learning deployments.

Web Service

Web Service is an online type of deployment. It allows you to upload and deploy your model so that it can be scored via an online web service. You need to pass the location where the training was performed (source_space_id or source_project_id). The model can be deployed to any space or project described by target_space_id or target_project_id.

WebService supports only AutoAI deployment.

from ibm_watson_machine_learning.deployment import WebService

# note: only AutoAI deployment is possible now
service = WebService(wml_credentials,
     source_space_id='76g53e0-0b32-4a0e-9152-3d50324855ddb',
     target_space_id='1234abc1234abc1234abc1234abc1234abcd')

service.create(
       experiment_run_id="...",
       model=model,
       deployment_name='My new deployment'
   )
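
Once created, the deployment can be scored directly from the WebService object; a sketch, assuming a score() method that accepts a pandas.DataFrame payload (test_X_df below is such a DataFrame):

# score the online deployment with new records (sketch)
predictions = service.score(payload=test_X_df)
print(predictions)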

Batch

Batch manages the batch type of deployment. It allows you to upload and deploy a model and to run batch deployment jobs. As with Web Service, you need to pass the location where the training was performed (source_space_id or source_project_id). The model can be deployed to any space or project described by target_space_id or target_project_id.

Batch supports only AutoAI deployment. The input data can be provided as a pandas.DataFrame, a data-asset, or a Cloud Object Storage file.

Example of batch deployment creation:

from ibm_watson_machine_learning.deployment import Batch

service_batch = Batch(wml_credentials, source_space_id='76g53e0-0b32-4a0e-9152-3d50324855ddb')
service_batch.create(
        experiment_run_id="6ce62a02-3e41-4d11-89d1-484c2deaed75",
        model="Pipeline_4",
        deployment_name='Batch deployment')

Example of batch job creation with inline data as type pandas.DataFrame:

scoring_params = service_batch.run_job(
            payload=test_X_df,
            background_mode=False)

Example of batch job creation with COS object:

from ibm_watson_machine_learning.helpers.connections import S3Connection, S3Location, DataConnection

payload_reference = DataConnection(
        connection=S3Connection(endpoint_url='url of the COS endpoint',
                                access_key_id='COS access key id',
                                secret_access_key='COS secret access key'
                                ),
        location=S3Location(bucket='bucket_name',   # note: COS bucket name where deployment payload dataset is located
                            path='my_path'  # note: path within bucket where your deployment payload dataset is located
                            )
    )

results_reference = DataConnection(
        connection=S3Connection(endpoint_url='url of the COS endpoint',
                                access_key_id='COS access key id',
                                secret_access_key='COS secret access key'
                                ),
        location=S3Location(bucket='bucket_name',   # note: COS bucket name where deployment output should be located
                            path='my_path_where_output_will_be_saved'  # note: path within bucket where your deployment output should be located
                            )
    )
payload_reference.write("local_path_to_the_batch_payload_csv_file", remote_name="batch_payload_location.csv")

scoring_params = service_batch.run_job(
    payload=[payload_reference],
    output_data_reference=results_reference,
    background_mode=False)   # If background_mode is False, synchronous mode is started; otherwise the job status needs to be monitored.
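
After a synchronous job finishes, the batch output lands at the location described by results_reference, so it can be read back like any other DataConnection; a sketch:

# fetch the batch output as a pandas DataFrame (sketch)
output_df = results_reference.read()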

Example of batch job creation with a data-asset object:

from ibm_watson_machine_learning.helpers.connections import DataConnection, CloudAssetLocation, DeploymentOutputAssetLocation

payload_reference = DataConnection(location=CloudAssetLocation(asset_id=asset_id))
results_reference = DataConnection(
        location=DeploymentOutputAssetLocation(name="batch_output_file_name.csv"))

scoring_params = service_batch.run_job(
    payload=[payload_reference],
    output_data_reference=results_reference,
    background_mode=False)

AutoAI Modules (BETA for IBM Cloud)

AutoAI

class ibm_watson_machine_learning.experiment.autoai.autoai.AutoAI(wml_credentials: Union[dict, WorkSpace] = None, project_id: str = None, space_id: str = None)[source]

Bases: ibm_watson_machine_learning.experiment.base_experiment.base_experiment.BaseExperiment

AutoAI class for automating the optimization of pipeline models.

wml_credentials: dictionary, required

Credentials to Watson Machine Learning instance.

project_id: str, optional

ID of the Watson Studio project.

space_id: str, optional

ID of the Watson Studio Space.

>>> from ibm_watson_machine_learning.experiment import AutoAI
>>> # Remote version of AutoAI
>>> experiment = AutoAI(
>>>        wml_credentials={
>>>              "apikey": "...",
>>>              "iam_apikey_description": "...",
>>>              "iam_apikey_name": "...",
>>>              "iam_role_crn": "...",
>>>              "iam_serviceid_crn": "...",
>>>              "instance_id": "...",
>>>              "url": "https://us-south.ml.cloud.ibm.com"
>>>            },
>>>         project_id="...",
>>>         space_id="...")
>>>
>>> # Local version of AutoAI
>>> experiment = AutoAI()
class ClassificationAlgorithms(value)

Bases: enum.Enum

Classification algorithms that AutoAI can use.

DT = 'DecisionTreeClassifierEstimator'
EX_TREES = 'ExtraTreesClassifierEstimator'
GB = 'GradientBoostingClassifierEstimator'
LGBM = 'LGBMClassifierEstimator'
LR = 'LogisticRegressionEstimator'
RF = 'RandomForestClassifierEstimator'
XGB = 'XGBClassifierEstimator'
class DataConnectionTypes

Bases: object

Supported types of DataConnection. OneOf: [connection_asset, data_asset, fs, s3]

CA = 'connection_asset'
DS = 'data_asset'
FS = 'fs'
S3 = 's3'
class Metrics

Bases: object

Supported types of classification and regression metrics in AutoAI.

ACCURACY_SCORE = 'accuracy'
AVERAGE_PRECISION_SCORE = 'average_precision'
EXPLAINED_VARIANCE_SCORE = 'explained_variance'
F1_SCORE = 'f1'
F1_SCORE_MACRO = 'f1_macro'
F1_SCORE_MICRO = 'f1_micro'
F1_SCORE_WEIGHTED = 'f1_weighted'
LOG_LOSS = 'neg_log_loss'
MEAN_ABSOLUTE_ERROR = 'neg_mean_absolute_error'
MEAN_SQUARED_ERROR = 'neg_mean_squared_error'
MEAN_SQUARED_LOG_ERROR = 'neg_mean_squared_log_error'
MEDIAN_ABSOLUTE_ERROR = 'neg_median_absolute_error'
PRECISION_SCORE = 'precision'
PRECISION_SCORE_MACRO = 'precision_macro'
PRECISION_SCORE_MICRO = 'precision_micro'
PRECISION_SCORE_WEIGHTED = 'precision_weighted'
R2_SCORE = 'r2'
RECALL_SCORE = 'recall'
RECALL_SCORE_MACRO = 'recall_macro'
RECALL_SCORE_MICRO = 'recall_micro'
RECALL_SCORE_WEIGHTED = 'recall_weighted'
ROC_AUC_SCORE = 'roc_auc'
ROOT_MEAN_SQUARED_ERROR = 'neg_root_mean_squared_error'
ROOT_MEAN_SQUARED_LOG_ERROR = 'neg_root_mean_squared_log_error'
class PipelineTypes

Bases: object

Supported types of Pipelines.

LALE = 'lale'
SKLEARN = 'sklearn'
class PredictionType

Bases: object

Supported types of learning. OneOf: [BINARY, MULTICLASS, REGRESSION]

BINARY = 'binary'
MULTICLASS = 'multiclass'
REGRESSION = 'regression'
class RegressionAlgorithms(value)

Bases: enum.Enum

Regression algorithms that AutoAI can use.

DT = 'DecisionTreeRegressorEstimator'
EX_TREES = 'ExtraTreesRegressorEstimator'
GB = 'GradientBoostingRegressorEstimator'
LGBM = 'LGBMRegressorEstimator'
LR = 'LinearRegressionEstimator'
RF = 'RandomForestRegressorEstimator'
RIDGE = 'RidgeEstimator'
XGB = 'XGBRegressorEstimator'
class TShirtSize

Bases: object

Possible sizes of the AutoAI POD. Depending on the POD size, AutoAI can support different dataset sizes.

S - small (2 vCPUs and 8 GB of RAM)
M - medium (4 vCPUs and 16 GB of RAM)
L - large (8 vCPUs and 32 GB of RAM)
XL - extra large (16 vCPUs and 64 GB of RAM)

L = 'l'
M = 'm'
ML = 'ml'
S = 's'
XL = 'xl'
class Transformers

Bases: object

Supported types of cognito transformer names in AutoAI.

ABS = 'abs'
CBRT = 'cbrt'
COS = 'cos'
CUBE = 'cube'
DIFF = 'diff'
DIVIDE = 'divide'
FEATUREAGGLOMERATION = 'featureagglomeration'
ISOFORESTANOMALY = 'isoforestanomaly'
LOG = 'log'
MAX = 'max'
MINMAXSCALER = 'minmaxscaler'
NXOR = 'nxor'
PCA = 'pca'
PRODUCT = 'product'
ROUND = 'round'
SIGMOID = 'sigmoid'
SIN = 'sin'
SQRT = 'sqrt'
SQUARE = 'square'
STDSCALER = 'stdscaler'
SUM = 'sum'
TAN = 'tan'
optimizer(name: str, *, prediction_type: ibm_watson_machine_learning.utils.autoai.enums.PredictionType, prediction_column: str, scoring: ibm_watson_machine_learning.utils.autoai.enums.Metrics, desc: str = None, test_size: float = 0.1, max_number_of_estimators: int = 2, train_sample_rows_test_size: float = None, daub_include_only_estimators: List[Union[ClassificationAlgorithms, RegressionAlgorithms]] = None, cognito_transform_names: List[Transformers] = None, data_join_graph: Optional[ibm_watson_machine_learning.preprocessing.multiple_files_preprocessor.DataJoinGraph] = None, csv_separator: Union[List[str], str] = ',', excel_sheet: Union[str, int] = 0, encoding: str = 'utf-8', positive_label: str = None, data_join_only: bool = False, **kwargs) → Union[ibm_watson_machine_learning.experiment.autoai.optimizers.remote_auto_pipelines.RemoteAutoPipelines, ibm_watson_machine_learning.experiment.autoai.optimizers.local_auto_pipelines.LocalAutoPipelines][source]

Initialize an AutoAI optimizer.

name: str, required

Name for the AutoPipelines

prediction_type: PredictionType, required

Type of the prediction.

prediction_column: str, required

Name of the target/label column.

scoring: Metrics, required

Type of the metric to optimize with.

desc: str, optional

Description

test_size: float, optional

Percentage of the entire dataset to leave as a holdout. Default 0.1

max_number_of_estimators: int, optional

Maximum number (top-K, ranked by DAUB model selection) of estimator types, for example LGBMClassifierEstimator, XGBClassifierEstimator, or LogisticRegressionEstimator, to use in pipeline composition. The default is 2 (min 1, max 4).

train_sample_rows_test_size: float, optional

Training data sampling percentage

daub_include_only_estimators: List[Union[‘ClassificationAlgorithms’, ‘RegressionAlgorithms’]], optional

List of estimators to include in the computation process. See: AutoAI.ClassificationAlgorithms or AutoAI.RegressionAlgorithms

cognito_transform_names: List[‘Transformers’], optional

List of transformers to include in the feature engineering computation process. See: AutoAI.Transformers

csv_separator: Union[List[str], str], optional

The separator, or list of separators to try for separating columns in a CSV file. Not used if the file_name is not a CSV file. Default is ‘,’.

excel_sheet: Union[str, int], optional

Name or number of the Excel sheet to use. Applies only when an xlsx file is the input. Default is 0.

encoding: str, optional

Encoding type for CSV training file.

positive_label: str, optional

The positive class to report for binary classification. Ignored for multiclass or regression.

t_shirt_size: TShirtSize, optional

The size of the remote AutoAI POD instance (computing resources). Only applicable to a remote scenario. See: AutoAI.TShirtSize

data_join_graph: DataJoinGraph, optional

A graph object defining the join structure for multiple input data sources. Used as a data preprocessing step for multiple files.

data_join_only: bool, optional

If True, only preprocessing will be executed.

Returns RemoteAutoPipelines or LocalAutoPipelines, depending on how you initialized the AutoAI object.

>>> from ibm_watson_machine_learning.experiment import AutoAI
>>> experiment = AutoAI(...)
>>>
>>> optimizer = experiment.optimizer(
>>>        name="name of the optimizer.",
>>>        prediction_type=AutoAI.PredictionType.BINARY,
>>>        prediction_column="y",
>>>        scoring=AutoAI.Metrics.ROC_AUC_SCORE,
>>>        desc="Some description.",
>>>        test_size=0.1,
>>>        max_num_daub_ensembles=1,
>>>        cognito_transform_names=[AutoAI.Transformers.SUM,AutoAI.Transformers.MAX],
>>>        train_sample_rows_test_size=1,
>>>        daub_include_only_estimators=[AutoAI.ClassificationAlgorithms.LGBM, AutoAI.ClassificationAlgorithms.XGB],
>>>        t_shirt_size=AutoAI.TShirtSize.L
>>>    )
>>>
>>> optimizer = experiment.optimizer(
>>>        name="name of the optimizer.",
>>>        prediction_type=AutoAI.PredictionType.MULTICLASS,
>>>        prediction_column="y",
>>>        scoring=AutoAI.Metrics.ROC_AUC_SCORE,
>>>        desc="Some description.",
>>>    )
runs(*, filter: str) → Union[ibm_watson_machine_learning.experiment.autoai.runs.auto_pipelines_runs.AutoPipelinesRuns, ibm_watson_machine_learning.experiment.autoai.runs.local_auto_pipelines_runs.LocalAutoPipelinesRuns][source]

Get historical runs, filtered by WML pipeline name (remote scenario) or by experiment name (local scenario).

filter: str, required

WML pipeline name (remote scenario) or experiment name (local scenario) to filter the historical runs.

AutoPipelinesRuns or LocalAutoPipelinesRuns

>>> from ibm_watson_machine_learning.experiment import AutoAI
>>> experiment = AutoAI(...)
>>>
>>> experiment.runs(filter='Test').list()

RemoteAutoPipelines

class ibm_watson_machine_learning.experiment.autoai.optimizers.remote_auto_pipelines.RemoteAutoPipelines(name: str, prediction_type: PredictionType, prediction_column: str, scoring: Metrics, engine: WMLEngine, desc: str = None, test_size: float = 0.1, max_num_daub_ensembles: int = 1, t_shirt_size: TShirtSize = 'm', train_sample_rows_test_size: float = None, daub_include_only_estimators: List[Union[ClassificationAlgorithms, RegressionAlgorithms]] = None, cognito_transform_names: List[Transformers] = None, data_join_graph: DataJoinGraph = None, csv_separator: Union[List[str], str] = ',', excel_sheet: Union[str, int] = 0, encoding: str = 'utf-8', positive_label: str = None, data_join_only: bool = False, notebooks=False, autoai_pod_version=None, obm_pod_version=None, **kwargs)[source]

Bases: ibm_watson_machine_learning.experiment.autoai.optimizers.base_auto_pipelines.BaseAutoPipelines

RemoteAutoPipelines class for pipeline operation automation on WML.

name: str, required

Name for the AutoPipelines

prediction_type: PredictionType, required

Type of the prediction.

prediction_column: str, required

Name of the target/label column.

scoring: Metrics, required

Type of the metric to optimize with.

desc: str, optional

Description

test_size: float, optional

Percentage of the entire dataset to leave as a holdout. Default 0.1

max_num_daub_ensembles: int, optional

Maximum number (top-K, ranked by DAUB model selection) of estimator types, for example LGBMClassifierEstimator, XGBClassifierEstimator, or LogisticRegressionEstimator, to use in pipeline composition. The default is 1, meaning only the highest-ranked estimator type is used.

train_sample_rows_test_size: float, optional

Training data sampling percentage

daub_include_only_estimators: List[Union[‘ClassificationAlgorithms’, ‘RegressionAlgorithms’]], optional

List of estimators to include in the computation process.

cognito_transform_names: List[‘Transformers’], optional

List of transformers to include in the feature engineering computation process. See: AutoAI.Transformers

csv_separator: Union[List[str], str], optional

The separator, or list of separators to try for separating columns in a CSV file. Not used if the file_name is not a CSV file. Default is ‘,’.

excel_sheet: Union[str, int], optional

Name or number of the Excel sheet to use. Applies only when an xlsx file is the input. Default is 0.

encoding: str, optional

Encoding type for CSV training file.

positive_label: str, optional

The positive class to report for binary classification. Ignored for multiclass or regression.

t_shirt_size: TShirtSize, optional

The size of the remote AutoAI POD instance (computing resources). Only applicable to a remote scenario.

engine: WMLEngine, required

Engine for remote work on WML.

data_join_graph: DataJoinGraph, optional

A graph object defining the join structure for multiple input data sources. Used as a data preprocessing step for multiple files.

cancel_run() → None[source]

Cancels an AutoAI run.

fit(train_data: Optional[pandas.core.frame.DataFrame] = None, *, training_data_reference: List[DataConnection], training_results_reference: Optional[ibm_watson_machine_learning.helpers.connections.connections.DataConnection] = None, background_mode=False) → dict[source]

Run an AutoAI training process on WML, using the training data referenced by DataConnection.

training_data_reference: List[DataConnection], required

Data storage connection details to inform where training data is stored.

training_results_reference: DataConnection, optional

Data storage connection details to store pipeline training results. Not applicable on CP4D.

background_mode: bool, optional

Indicates whether the fit() method runs in the background (async) or blocks until completion (sync).

Dictionary with run details.

>>> from ibm_watson_machine_learning.experiment import AutoAI
>>> from ibm_watson_machine_learning.helpers import DataConnection, S3Connection, S3Location
>>>
>>> experiment = AutoAI(credentials, ...)
>>> remote_optimizer = experiment.optimizer(...)
>>>
>>> remote_optimizer.fit(
>>>     training_data_reference=[DataConnection(
>>>         connection=S3Connection(
>>>             endpoint_url="https://s3.us.cloud-object-storage.appdomain.cloud",
>>>             access_key_id="9c92n0scodfa",
>>>             secret_access_key="0ch827gf9oiwdn0c90n20nc0oms29j"),
>>>         location=S3Location(
>>>             bucket='automl',
>>>             path='german_credit_data_biased_training.csv'))],
>>>     training_results_reference=DataConnection(
>>>         connection=S3Connection(
>>>             endpoint_url="https://s3.us.cloud-object-storage.appdomain.cloud",
>>>             access_key_id="9c92n0scodfa",
>>>             secret_access_key="0ch827gf9oiwdn0c90n20nc0oms29j"),
>>>         location=S3Location(
>>>             bucket='automl',
>>>             path='')),
>>>     background_mode=False)
get_data_connections() → List[ibm_watson_machine_learning.helpers.connections.connections.DataConnection][source]

Create DataConnection objects for further usage (e.g. to handle the data storage connection or to recreate the AutoAI holdout split).

List[‘DataConnection’] with populated optimizer parameters

get_params() → dict[source]

Get configuration parameters of AutoPipelines.

Dictionary with AutoPipelines parameters.

>>> from ibm_watson_machine_learning.experiment import AutoAI
>>> experiment = AutoAI(credentials, ...)
>>> remote_optimizer = experiment.optimizer(...)
>>>
>>> remote_optimizer.get_params()
    {
        'name': 'test name',
        'desc': 'test description',
        'prediction_type': 'classification',
        'prediction_column': 'y',
        'scoring': 'roc_auc',
        'test_size': 0.1,
        'max_num_daub_ensembles': 1,
        't_shirt_size': 'm',
        'train_sample_rows_test_size': 0.8,
        'daub_include_only_estimators': ["ExtraTreesClassifierEstimator",
                                        "GradientBoostingClassifierEstimator",
                                        "LGBMClassifierEstimator",
                                        "LogisticRegressionEstimator",
                                        "RandomForestClassifierEstimator",
                                        "XGBClassifierEstimator"]
    }
get_pipeline(pipeline_name: str = None, astype: PipelineTypes = 'lale', persist: bool = False) → Union[Pipeline, TrainablePipeline][source]

Download specified pipeline from WML.

pipeline_name: str, optional

Pipeline name. To see the pipeline names, use the summary() method. If this parameter is None, the best pipeline is fetched.

astype: PipelineTypes, optional

Type of returned pipeline model. If not specified, lale type is chosen.

persist: bool, optional

Indicates whether the selected pipeline should be stored locally.

Scikit-Learn pipeline or Lale TrainablePipeline, depending on astype.

See also: RemoteAutoPipelines.summary()

>>> from ibm_watson_machine_learning.experiment import AutoAI
>>> experiment = AutoAI(credentials, ...)
>>> remote_optimizer = experiment.optimizer(...)
>>>
>>> pipeline_1 = remote_optimizer.get_pipeline(pipeline_name='Pipeline_1')
>>> pipeline_2 = remote_optimizer.get_pipeline(pipeline_name='Pipeline_1',