https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DOJava.html?context=cdpaas&locale=en
# Decision Optimization Java models #

You can create and run Decision Optimization models in Java by using the Watson Machine Learning REST API.

You can build your Decision Optimization models in Java, or you can use the Java worker to package CPLEX, CPO, and OPL models. For more information about these models, see the following reference manuals:

 *  [Java CPLEX reference documentation](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.cplex.help/refjavacplex/html/overview-summary.html)
 *  [Java CPO reference documentation](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.cpo.help/refjavacpoptimizer/html/overview-summary.html)
 *  [Java OPL reference documentation](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.ide.help/refjavaopl/html/overview-summary.html)

To package and deploy Java models in Watson Machine Learning, see [Deploying Java models for Decision Optimization](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployJava.html) and the boilerplate provided in the [Java worker GitHub repository](https://github.com/IBMDecisionOptimization/cplex-java-worker/blob/master/README.md).
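Although the models themselves are written in Java, jobs for a deployed model are submitted through the Watson Machine Learning REST API from any language. As a minimal sketch, here is how such a job payload might be assembled; the field names and the inline-data layout below are illustrative assumptions based on typical Decision Optimization job requests, not the authoritative API contract (see the deployment documentation linked above):

```python
import json

# Sketch of a Decision Optimization job request body for the Watson Machine
# Learning REST API. All identifiers, field names, and table names below are
# placeholders; check the official API reference for the exact contract.
def build_job_payload(deployment_id, input_rows):
    return {
        "deployment": {"id": deployment_id},
        "decision_optimization": {
            # Inline tabular input: each table has an id, field names, and rows.
            "input_data": [
                {
                    "id": "diet_food.csv",
                    "fields": ["name", "unit_cost"],
                    "values": input_rows,
                }
            ],
            # Ask for all CSV-like solution tables back inline.
            "output_data": [{"id": ".*\\.csv"}],
        },
    }

payload = build_job_payload("<deployment-id>", [["bread", 0.25]])
print(json.dumps(payload, indent=2))
```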
https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DONotebooks.html?context=cdpaas&locale=en
# Decision Optimization notebooks #

You can create and run Decision Optimization models in Python notebooks by using DOcplex, a native Python API for Decision Optimization. Several Decision Optimization notebooks are already available for you to use.

The Decision Optimization environment currently supports `Python 3.10`. The following Python environments give you access to the Community Edition of the CPLEX engines. The Community Edition is limited to solving problems with up to 1000 constraints and 1000 variables, or with a search space of 1000 x 1000 for Constraint Programming problems.

 *  `Runtime 23.1 on Python 3.10 S/XS/XXS`
 *  `Runtime 22.2 on Python 3.10 S/XS/XXS`

To run larger problems, select a runtime that includes the full CPLEX commercial edition. The Decision Optimization environment (DOcplex) is available in the following runtimes (full CPLEX commercial edition):

 *  `NLP + DO runtime 23.1 on Python 3.10` with `CPLEX 22.1.1.0`
 *  `DO + NLP runtime 22.2 on Python 3.10` with `CPLEX 20.1.0.1`

You can easily change environments (runtimes and Python version) inside a notebook by using the Environment tab (see [Changing the environment of a notebook](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/notebook-environments.html#change-env)). Thus, you can formulate optimization models and test them with small data sets in one environment. Then, to solve models with bigger data sets, you can switch to a different environment without having to rewrite or copy the notebook code.

Multiple examples of Decision Optimization notebooks are available in the Samples, including:

 *  The Sudoku example, a Constraint Programming example in which the objective is to solve a 9x9 Sudoku grid.
 *  The Pasta Production Problem example, a Linear Programming example in which the objective is to minimize the production cost for some pasta products and to ensure that the customers' demand for the products is satisfied.

These and more examples are also available in the **jupyter** folder of the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples) repository.

All Decision Optimization notebooks use DOcplex.
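The Community Edition bounds quoted above are easy to check before you pick a runtime. A small dependency-free helper (the function and constant names are illustrative, not part of DOcplex) might look like this:

```python
# Community Edition limits for the CPLEX engines, as documented above:
# at most 1000 constraints and 1000 variables (or a 1000 x 1000 search
# space for Constraint Programming problems).
CE_MAX_CONSTRAINTS = 1000
CE_MAX_VARIABLES = 1000

def fits_community_edition(n_constraints, n_variables):
    """Return True if a model of this size can be solved with the Community
    Edition; otherwise select a runtime with the full CPLEX edition."""
    return (n_constraints <= CE_MAX_CONSTRAINTS
            and n_variables <= CE_MAX_VARIABLES)

# A small diet-style model fits; a large scheduling model does not.
print(fits_community_edition(120, 80))      # True
print(fits_community_edition(5000, 20000))  # False
```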
https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DOconnections.html?context=cdpaas&locale=en
# Supported data sources in Decision Optimization #

Decision Optimization supports the following relational and nonrelational data sources on watsonx.ai:

 *  [IBM data sources](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DOconnections.html?context=cdpaas&locale=en#DOConnections__ibm-data-src)
 *  [Third-party data sources](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DOconnections.html?context=cdpaas&locale=en#DOConnections__third-party-data-src)
https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DOintro.html?context=cdpaas&locale=en
# Ways to use Decision Optimization #

To build Decision Optimization models, you can create Python notebooks with DOcplex, a native Python API for Decision Optimization, or use the Decision Optimization experiment UI, which offers additional benefits and features.
https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/OPLmodels.html?context=cdpaas&locale=en
# OPL models #

You can build OPL models in the Decision Optimization experiment UI in watsonx.ai.

In this section:

 *  [Inputs and Outputs](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/OPLmodels.html?context=cdpaas&locale=en#topic_oplmodels__section_oplIO)
 *  [Engine settings](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/OPLmodels.html?context=cdpaas&locale=en#topic_oplmodels__engsettings)

To create an OPL model in the experiment UI, select  in the model selection window. You can also import OPL models from a file, or import a scenario .zip file that contains the OPL model and the data. If you import from a file or scenario .zip file, the data must be in .csv format. However, you can import other file formats that you have as project assets into the experiment UI. You can also import data sets, including connected data, into your project from the model builder in the [Prepare data view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__section_preparedata).

For more information about the OPL language and engine parameters, see:

 *  [OPL language reference manual](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.ide.help/OPL_Studio/opllangref/topics/opl_langref_modeling_language.html)
 *  [OPL Keywords](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.ide.help/OPL_Studio/opllang_quickref/topics/opl_keywords_top.html)
 *  [A list of CPLEX parameters](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.cplex.help/CPLEX/Parameters/topics/introListTopical.html)
 *  [A list of CPO parameters](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.cpo.help/CP_Optimizer/Parameters/topics/paramcpoptimizer.html)
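As noted above, an importable scenario .zip must bundle the OPL model together with its data in .csv format. A minimal sketch of assembling such an archive (the file names and the toy model text are made up for illustration; they are not a prescribed layout):

```python
import csv
import io
import zipfile

# Build an in-memory scenario archive containing a toy OPL model file and
# one .csv data file, mirroring the import format described above.
model_text = "dvar float+ x;\nminimize x;\nsubject to { x >= 1; }\n"

csv_buf = io.StringIO()
writer = csv.writer(csv_buf)
writer.writerow(["name", "value"])   # header row
writer.writerow(["demand", 42])      # one data row

archive = io.BytesIO()
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("model.mod", model_text)
    zf.writestr("data.csv", csv_buf.getvalue())

with zipfile.ZipFile(archive) as zf:
    print(sorted(zf.namelist()))  # ['data.csv', 'model.mod']
```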
https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=en
# Visualization view #

With the Decision Optimization experiment Visualization view, you can configure the graphical representation of input data and solutions for one or several scenarios.

Quick links:

 *  [Visualization view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=en#topic_visualization__section-dashboard)
 *  [Table search and filtering](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=en#topic_visualization__section_tablefilter)
 *  [Visualization widgets syntax](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=en#topic_visualization__section_widgetssyntax)
 *  [Visualization Editor](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=en#topic_visualization__viseditor)
 *  [Visualization pages](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=en#topic_visualization__vispages)

The Visualization view is common to all scenarios in a Decision Optimization experiment.

For example, the following image shows the default bar chart that appears in the solution tab for the example that is used in the tutorial [Solving and analyzing a model: the diet problem](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/solveModel.html#task_mtg_n3q_m1b).

The Visualization view helps you compare different scenarios to validate models and business decisions.

For example, to show the two scenarios solved in this diet example tutorial, you can add another bar chart as follows:

1.  Click the chart widget and configure it by clicking the pencil icon.
2.  In the Chart widget editor, select Add scenario and choose scenario 1 (assuming that your current scenario is scenario 2) so that you have both scenario 1 and scenario 2 listed.
3.  In the Table field, select the Solution data option and select solution from the drop-down list.
4.  In the bar chart pane, select Descending for the Category order and Y-axis for the Bar type, and click OK to close the Chart widget editor. A second bar chart is then displayed, showing you the solution results for scenario 2.
5.  Re-edit the chart and select @Scenario in the Split by field of the Bar chart pane. You then obtain both scenarios in the same bar chart.

You can select many different types of charts in the Chart widget editor.

Alternatively, using the Vega Chart widget, you can display the same data: choose Solution data > solution, and select value and name in the x and y fields in the Chart section of the Vega Chart widget editor. Then, in the Mark section, select @Scenario for the color field. This selection gives you a bar chart with the two scenarios on the same y-axis, distinguished by different colors.

If you re-edit the chart and select @Scenario for the column facet, you obtain the two scenarios in separate charts side by side.

You can use many different types of charts that are available in the Mark field of the Vega Chart widget editor.

You can also select the JSON tab in all the widget editors and configure your charts by using JSON code. A more advanced example of JSON code is provided in the [Vega Chart widget specifications](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=en#topic_visualization__section_hdc_5mm_33b) section.

The following widgets are available:

 *  [**Notes widget**](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=en#topic_visualization__section_edc_5mm_33b)

    Add simple text notes to the Visualization view.

 *  [**Table widget**](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=en#topic_visualization__section_fdc_5mm_33b)

    Present input data and solutions in tables, with a search and filtering feature. See [Table search and filtering](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=en#topic_visualization__section_tablefilter).

 *  [**Charts widgets**](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=en#topic_visualization__section_alh_lfn_l2b)

    Present input data and solutions in charts.

 *  [**Gantt chart widget**](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html?context=cdpaas&locale=en#topic_visualization__section_idc_5mm_33b)

    Display the solution to a scheduling problem (or any other suitable type of problem) in a Gantt chart.

    This widget is used automatically for scheduling problems that are modeled with the Modeling Assistant. You can edit this Gantt chart, or create and configure new Gantt charts for any problem, even for models that don't use the Modeling Assistant.
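The JSON behind such a chart can also be composed programmatically. The sketch below builds a Vega-Lite-style bar-chart spec with @Scenario driving the color encoding, and a faceted variant for the side-by-side comparison; only the field names value, name, and @Scenario come from the description above, while the surrounding schema layout is an illustrative assumption rather than the exact format the Vega Chart widget expects:

```python
import json

# A Vega-Lite-style spec: name on x, value on y, scenarios distinguished
# by color. The schema shape is an assumption for illustration.
spec = {
    "mark": "bar",
    "encoding": {
        "x": {"field": "name", "type": "nominal"},
        "y": {"field": "value", "type": "quantitative"},
        "color": {"field": "@Scenario", "type": "nominal"},
    },
}

# Switching from overlaid colors to side-by-side charts is one more
# encoding: a column facet on @Scenario (deep-copy via JSON round-trip).
faceted = json.loads(json.dumps(spec))
faceted["encoding"]["column"] = {"field": "@Scenario"}

print(spec["encoding"]["color"]["field"])  # @Scenario
```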
https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/buildingmodels.html?context=cdpaas&locale=en
# Decision Optimization experiments #

If you use the Decision Optimization experiment UI, you can take advantage of its many features in this user-friendly environment. For example, you can create and solve models, produce reports, compare scenarios, and save models ready for deployment with Watson Machine Learning.

The Decision Optimization experiment UI facilitates your workflow. Here you can:

 *  Select and edit the data relevant for your optimization problem; see [Prepare data view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__section_preparedata)
 *  Create, import, edit, and solve Python models in the Decision Optimization experiment UI; see the [Decision Optimization notebook tutorial](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/solveModel.html#task_mtg_n3q_m1b)
 *  Create, import, edit, and solve models expressed in natural language with the Modeling Assistant; see the [Modeling Assistant tutorial](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.html#cogusercase)
 *  Create, import, edit, and solve OPL models in the Decision Optimization experiment UI; see [OPL models](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/OPLmodels.html#topic_oplmodels)
 *  Generate a notebook from your model, work with it as a notebook, then reload it as a model; see [Generating a notebook from a scenario](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__generateNB) and [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__section_overview)
 *  Visualize data and solutions; see [Explore solution view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__solution)
 *  Investigate and compare solutions for multiple scenarios; see [Scenario pane](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__scenariopanel) and [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__section_overview)
 *  Easily create and share reports with tables, charts, and notes by using widgets provided in the [Visualization Editor](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/Visualization.html#topic_visualization)
 *  Save models that are ready for deployment in Watson Machine Learning; see [Scenario pane](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__scenariopanel) and [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__section_overview)

See the [Decision Optimization experiment UI comparison table](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DOintro.html#DOIntro__comparisontable) for a list of features available with and without the Decision Optimization experiment UI.

See [Views and scenarios](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface) for a description of the user interface and scenario management.
https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/configureEnvironments.html?context=cdpaas&locale=en
# Configuring environments and adding Python extensions #

You can change your default environment for Python and CPLEX in the experiment Overview.

**Procedure**

To change the default environment for DOcplex and Modeling Assistant models:
1.  Open the Overview, click to open the Information pane, and select the Environments tab.
2.  Expand the environment section that corresponds to your model type. For Python and Modeling Assistant models, expand Python environment. You can see the default Python environment (if one exists). To change the default environment for OPL, CPLEX, or CPO models, expand the appropriate environment section for your model type and follow this same procedure.
3.  Expand the name of your environment, and select a different Python environment.
4.  Optional: To create a new environment:
1.  Select New environment for Python. A new window opens for you to define your new environment.
2.  Enter a name; select a CPLEX version, hardware specification, copies (number of nodes), and Python version; and (optionally) set Associate a Python extension to On to include any Python libraries that you want to add.
3.  Click New Python extension.
4.  Enter a name for your extension in the new Create a Python extension window that opens, and click Create.
5.  In the new Configure Python extension window that opens, you can set  YAML code to  On and enter or edit the provided YAML code. For example, use the provided template to add custom libraries:
# Modify the following content to add a software customization to an environment.
# To remove an existing customization, delete the entire content and click Apply.

# Add conda channels on a new line after defaults, indented by two spaces and a hyphen.
channels:
  - defaults

# To add packages through conda or pip, remove the comment on the following line.
# dependencies:

# Add conda packages here, indented by two spaces and a hyphen.
# Remove the comment on the following line and replace the sample package name with your package name:
#  - a_conda_package=1.0

# Add pip packages here, indented by four spaces and a hyphen.
# Remove the comments on the following lines and replace the sample package name with your package name.
#  - pip:
#    - a_pip_package==1.0
You can also click  Browse to add any Python libraries.
For example, this image shows a dynamic programming Python library that is imported and  YAML code set to  On.
Click  Done.
6.  Click  Create in the  New environment window.
Your chosen (or newly created) environment appears as ticked in the  Python environments drop-down list in the  Environments tab. The tick indicates that this is the default Python environment for all scenarios in your  experiment.
5.  Select  Manage experiment environments to see a detailed list of all existing environments for your  experiment in the  Environments tab.
You can use the options provided by clicking the three vertical dots next to an environment to  Edit,  Set as default,  Update in a deployment space or  Delete the environment. You can also create a  New environment from the  Manage experiment environments window, but creating a new environment from this window does not make it the default unless you explicitly set it as the default.
Updating your environment for Python or CPLEX versions: Python versions are regularly updated. If, however, you have explicitly specified an older Python version in your model, you must update this version specification or your models will not work. You can either create a new Python environment, as described earlier, or edit one from Manage experiment environments. This is also useful if you want to select a different version of CPLEX for your default environment.
6.  Click the  Python extensions tab.

Here you can view your Python extensions and see which environments they are used in. You can also create a  New Python extension or use the options to  Edit,  Download, and  Delete existing ones. If you edit a Python extension that is used by an experiment environment, the environment will be re-created.
You can also view your Python environments in your deployment space assets and any Python extensions you have added will appear in the software specification.
 Selecting a different run environment for a particular scenario 
You can choose different environments for individual scenarios on the Environment tab of the Run configuration pane.
 Procedure 
1.  Open the   Scenario pane and select your scenario in the   Build model view.
2.  Click the  Configure run icon next to the  Run button to open the Run configuration pane and select the  Environment tab.
3.  Choose  Select run environment for this scenario, choose an environment from the drop-down menu, and click  Run.
4.  Open the   Overview information pane. You can now see that your scenario has your chosen environment, while other scenarios are not affected by this modification.
 | 
	# Configuring environments and adding Python extensions #
You can change your default environment for Python and CPLEX in the  experiment Overview\.
## Procedure ##
To change the default environment for  DOcplex and  Modeling Assistant models:
<!-- <ol> -->
1.  Open the   Overview, click  to open the  Information pane, and select the  Environments tab\. 
    
    
2.  Expand the environment section according to your model type\. For Python and Modeling Assistant models, expand  Python environment\. You can see the default Python environment (if one exists)\. To change the default environment for OPL, CPLEX, or CPO models, expand the appropriate environment section according to your model type and follow this same procedure\.
3.  Expand the name of your environment, and select a different Python environment\.
4.  Optional: **To create a new environment**:
    
    <!-- <ol> -->
    
    1.  Select  New environment for Python. A new window opens for you to define your new environment. 
    2.  Enter a  name, and select a  CPLEX version,  hardware specification,  copies (number of nodes), and  Python version. Optionally, you can set  Associate a Python extension to  On to include any  Python libraries that you want to add.
    3.  Click  New Python extension.
    4.  Enter a name for your extension in the new  Create a Python extension window that opens, and click  Create.
    5.  In the new Configure Python extension window that opens, you can set  YAML code to  On and enter or edit the provided YAML code. For example, use the provided template to add custom libraries:
        
            # Modify the following content to add a software customization to an environment.
            # To remove an existing customization, delete the entire content and click Apply.
            
            # Add conda channels on a new line after defaults, indented by two spaces and a hyphen.
            channels:
              - defaults
            
            # To add packages through conda or pip, remove the comment on the following line.
            # dependencies:
            
            # Add conda packages here, indented by two spaces and a hyphen.
            # Remove the comment on the following line and replace sample package name with your package name:
            #  - a_conda_package=1.0
            
            # Add pip packages here, indented by four spaces and a hyphen.
            # Remove the comments on the following lines  and replace sample package name with your package name.
            #  - pip:
            #    - a_pip_package==1.0
        
        You can also click  Browse to add any Python libraries.
        
        For example, this image shows a dynamic programming Python library that is imported and  YAML code set to  On.
        
        Click  Done.
    6.  Click  Create in the  New environment window.
    
    <!-- </ol> -->
    
    Your chosen (or newly created) environment appears as ticked in the  Python environments drop-down list in the  Environments tab. The tick indicates that this is the default Python environment for all scenarios in your  experiment.
5.  Select  Manage experiment environments to see a detailed list of all existing environments for your  experiment in the  Environments tab\.
    
    You can use the options provided by clicking the three vertical dots next to an environment to  Edit,  Set as default,  Update in a deployment space or  Delete the environment. You can also create a  New environment from the  Manage experiment environments window, but creating a new environment from this window does not make it the default unless you explicitly set it as the default.
    
    Updating your environment for Python or CPLEX versions: Python versions are regularly updated. If, however, you have explicitly specified an older Python version in your model, you must update this version specification or your models will not work. You can either create a new Python environment, as described earlier, or edit one from Manage experiment environments. This is also useful if you want to select a different version of CPLEX for your default environment.
6.  Click the  Python extensions tab\.
    
    
    
    Here you can view your Python extensions and see which environments they are used in. You can also create a  New Python extension or use the options to  Edit,  Download, and  Delete existing ones. If you edit a Python extension that is used by an experiment environment, the environment will be re-created.
    
    You can also view your Python environments in your deployment space assets and any Python extensions you have added will appear in the software specification.
<!-- </ol> -->
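As a concrete illustration, a completed customization based on the template above (comments removed, the template's own sample package names kept as placeholders for yours) might look like:

```yaml
# Example software customization for a Decision Optimization Python environment.
# Replace the placeholder package names with the libraries you actually need.
channels:
  - defaults
dependencies:
  # conda packages: two spaces and a hyphen
  - a_conda_package=1.0
  # pip packages: nested under "pip", four spaces and a hyphen
  - pip:
    - a_pip_package==1.0
```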
<!-- <article "class="topic task nested1" role="article" id="task_envscenario" "> -->
## Selecting a different run environment for a particular scenario ##
You can choose different environments for individual scenarios on the Environment tab of the Run configuration pane\.
### Procedure ###
<!-- <ol> -->
1.  Open the   Scenario pane and select your scenario in the   Build model view\.
2.  Click the  Configure run icon next to the  Run button to open the Run configuration pane and select the  Environment tab\.
3.  Choose  Select run environment for this scenario, choose an environment from the drop\-down menu, and click  Run\.
4.  Open the   Overview information pane\. You can now see that your scenario has your chosen environment, while other scenarios are not affected by this modification\.
<!-- </ol> -->
<!-- </article "class="topic task nested1" role="article" id="task_envscenario" "> -->
<!-- </article "class="nested0" role="article" id="task_hwswconfig" "> -->
 | 
| 
	5788D38721AEAE446CFAD7D9288B6BAB33FA1EF9 | 
	https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/docExamples.html?context=cdpaas&locale=en | 
	Decision Optimization sample models and notebooks | 
	 Sample models and notebooks for  Decision Optimization 
Several examples are presented in this documentation as tutorials. You can also use many other examples that are provided in the  Decision Optimization GitHub, and in the  Samples.
Quick links:
*  [Examples used in this documentation](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/docExamples.html?context=cdpaas&locale=en#Examples__docexamples)
*  [Decision Optimization experiment samples (Modeling Assistant, Python, OPL)](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/docExamples.html?context=cdpaas&locale=en#Examples__section_modelbuildersamples)
*  [Jupyter notebook samples](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/docExamples.html?context=cdpaas&locale=en#Examples__section_xrg_fdj_cgb)
*  [Python notebooks in the Samples](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/docExamples.html?context=cdpaas&locale=en#Examples__section_pythoncommunity)
 | 
	# Sample models and notebooks for  Decision Optimization #
Several examples are presented in this documentation as tutorials\. You can also use many other examples that are provided in the  Decision Optimization GitHub, and in the  Samples\.
Quick links:
<!-- <ul> -->
 *  [Examples used in this documentation](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/docExamples.html?context=cdpaas&locale=en#Examples__docexamples)
 *  [Decision Optimization experiment samples (Modeling Assistant, Python, OPL)](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/docExamples.html?context=cdpaas&locale=en#Examples__section_modelbuildersamples)
 *  [Jupyter notebook samples](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/docExamples.html?context=cdpaas&locale=en#Examples__section_xrg_fdj_cgb)
 *  [Python notebooks in the Samples](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/docExamples.html?context=cdpaas&locale=en#Examples__section_pythoncommunity)
<!-- </ul> -->
<!-- </article "role="article" "> -->
 | 
| 
	167D5677958594BA275E34B8748F7E8091782560 | 
	https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en | 
	Decision Optimization experiment UI views and scenarios | 
	 Decision Optimization experiment views and scenarios 
The  Decision Optimization experiment UI has different  views in which you can select data, create models, solve different scenarios, and visualize the results.
Quick links to sections:
*  [ Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__section_overview)
*  [Hardware and software configuration](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__section_environment)
*  [Prepare data view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__section_preparedata)
*  [Build model view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__ModelView)
*  [Multiple model files](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__section_g21_p5n_plb)
*  [Run models](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__runmodel)
*  [Run configuration](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__section_runconfig)
*  [Run environment tab](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__envtabConfigRun)
*  [Explore solution view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__solution)
*  [Scenario pane](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__scenariopanel)
*  [Generating  notebooks from scenarios](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__generateNB)
*  [Importing scenarios](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__p_Importingscenarios)
*  [Exporting scenarios](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__p_Exportingscenarios)
Note: To create and run Optimization models, you must have both a  Machine Learning service added to your project and a deployment space that is associated with your  experiment:
1.  Add a [Machine Learning service](https://cloud.ibm.com/catalog/services/machine-learning) to your project. You can either add this service at the project level (see [Creating a  Watson Machine Learning Service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-instance.html)), or you can add it when you first create a new  Decision Optimization experiment: click  Add a  Machine Learning service, select, or create a  New service, click  Associate, then close the window.
2.  Associate a [deployment space](https://dataplatform.cloud.ibm.com/ml-runtime/spaces) with your  Decision Optimization experiment (see [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html#create)). A deployment space can be created or selected when you first create a new  Decision Optimization experiment: click  Create a deployment space, enter a name for your deployment space, and click  Create. For existing models, you can also create or select a space in the [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__section_overview) information pane.
When you add a Decision Optimization experiment as an asset in your project, you open the Decision Optimization experiment UI.
With the  Decision Optimization experiment UI, you can create and solve prescriptive optimization models that focus on the specific business problem that you want to solve.  To edit and solve models, you must have Admin or Editor roles in the project. Viewers of shared projects can only see  experiments, but cannot modify or run them.
You can create a  Decision Optimization model from scratch by entering a name or by choosing a .zip file, and then selecting  Create. Scenario 1 opens.
With the  Decision Optimization experiment UI, you can create several scenarios, with different data sets and optimization models. Thus, you can create and compare different scenarios and see what impact changes can have on a problem.
For a step-by-step guide to build, solve and deploy a  Decision Optimization model, by using the user interface, see the [Quick start tutorial with video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html).
For each of the following views, you can organize your screen as full-screen or as a split-screen. To do so, hover over one of the  view tabs ( Prepare data,  Build model,  Explore solution) for a second or two. A menu then appears where you can select  Full Screen,  Left or  Right. For example, if you choose  Left for the  Prepare data view, and then choose  Right for the  Explore solution view, you can see both these views on the same screen.
 | 
	# Decision Optimization experiment views and scenarios #
The  Decision Optimization experiment UI has different  views in which you can select data, create models, solve different scenarios, and visualize the results\.
Quick links to sections:
<!-- <ul> -->
 *  [ Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__section_overview)
 *  [Hardware and software configuration](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__section_environment)
 *  [Prepare data view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__section_preparedata)
 *  [Build model view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__ModelView)
 *  [Multiple model files](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__section_g21_p5n_plb)
 *  [Run models](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__runmodel)
 *  [Run configuration](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__section_runconfig)
 *  [Run environment tab](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__envtabConfigRun)
 *  [Explore solution view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__solution)
 *  [Scenario pane](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__scenariopanel)
 *  [Generating  notebooks from scenarios](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__generateNB)
 *  [Importing scenarios](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__p_Importingscenarios)
 *  [Exporting scenarios](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html?context=cdpaas&locale=en#ModelBuilderInterface__p_Exportingscenarios)
<!-- </ul> -->
Note: To create and run Optimization models, you must have both a  Machine Learning service added to your project and a deployment space that is associated with your  experiment:
<!-- <ol> -->
1.  Add a [**Machine Learning** service](https://cloud.ibm.com/catalog/services/machine-learning) to your project\. You can either add this service at the project level (see [Creating a  Watson Machine Learning Service instance](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-service-instance.html)), or you can add it when you first create a new  Decision Optimization experiment: click  Add a  Machine Learning service, select, or create a  New service, click  Associate, then close the window\.
2.  Associate a [**deployment space**](https://dataplatform.cloud.ibm.com/ml-runtime/spaces) with your  Decision Optimization experiment (see [Deployment spaces](https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/ml-spaces_local.html#create))\. A deployment space can be created or selected when you first create a new  Decision Optimization experiment: click  Create a deployment space, enter a name for your deployment space, and click  Create\. For existing models, you can also create, or select a space in the [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__section_overview) information pane\.
<!-- </ol> -->
When you add a **Decision Optimization experiment** as an asset in your project, you open the **Decision Optimization experiment UI**\.
With the  Decision Optimization experiment UI, you can create and solve prescriptive optimization models that focus on the specific business problem that you want to solve\.  To edit and solve models, you must have Admin or Editor roles in the project\. Viewers of shared projects can only see  experiments, but cannot modify or run them\.
You can create a  Decision Optimization model from scratch by entering a name or by choosing a `.zip` file, and then selecting  Create\. Scenario 1 opens\.
With the  Decision Optimization experiment UI, you can create several scenarios, with different data sets and optimization models\. Thus, you can create and compare different scenarios and see what impact changes can have on a problem\.
For a step\-by\-step guide to build, solve and deploy a  Decision Optimization model, by using the user interface, see the [Quick start tutorial with video](https://dataplatform.cloud.ibm.com/docs/content/wsj/getting-started/get-started-do.html)\.
For each of the following views, you can organize your screen as full\-screen or as a **split\-screen**\. To do so, hover over one of the  view tabs ( Prepare data,  Build model,  Explore solution) for a second or two\. A menu then appears where you can select  Full Screen,  Left or  Right\. For example, if you choose  Left for the  Prepare data view, and then choose  Right for the  Explore solution view, you can see both these views on the same screen\.
<!-- </article "role="article" "> -->
 | 
| 
	1C20BD9F24D670DD18B6BC28E020FBB23C742682 | 
	https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/CustomRules.html?context=cdpaas&locale=en | 
	Creating advanced custom constraints with Python in the Decision Optimization Modeling Assistant | 
	 Creating advanced custom constraints with Python 
This  Decision Optimization Modeling Assistant example shows you how to create advanced custom constraints that use Python.
 Procedure 
To create a new advanced custom constraint:
1.  In the   Build model view of your open  Modeling Assistant model, look at the  Suggestions pane. If you have  Display by category selected, expand the  Others section to locate  New custom constraint, and click it to add it to your model. Alternatively, without categories displayed, you can enter, for example, custom in the search field to find the same suggestion and click it to add it to your model. A new custom constraint is added to your model.

2.  Click  Enter your constraint. Use [brackets] for data, concepts, variables, or parameters and enter the constraint you want to specify. For example, type No [employees] has [onCallDuties] for more than [2] consecutive days and press enter. The specification is displayed with default parameters (parameter1, parameter2, parameter3) for you to customize. These parameters will be passed to the Python function that implements this custom rule.

3.  Edit the default parameters in the specification to give them more meaningful names. For example, change the parameters to employees, on_call_duties, and limit and click enter.
4.  Click function name and enter a name for the function. For example, type limitConsecutiveAssignments and click enter. Your function name is added and an  Edit Python button appears.

5.  Click the  Edit Python button. A new window opens showing you Python code that you can edit to implement your custom rule. You can see your customized parameters in the code as follows:

Notice that the code is documented with corresponding data frames and table column names as you have defined in the custom rule. The limit is not documented as this is a numerical value.
6.  Optional: You can edit the Python code directly in this window, but you might find it useful to edit and debug your code in a notebook before using it here. In this case, close this window for now and in the  Scenario pane, expand the three vertical dots and select  Generate a notebook for this scenario that contains the custom rule. Enter a name for this notebook. The notebook is created in your project assets, ready for you to edit and debug. Once you have edited, run, and debugged it, you can copy the code for your custom function back into this  Edit Python window in the Modeling Assistant.
7.  Edit the Python code in the  Modeling Assistant custom rule  Edit Python window. For example, you can define the rule for consecutive days in Python as follows:
def limitConsecutiveAssignments(self, mdl, employees, on_call_duties, limit):
    global helper_add_labeled_cplex_constraint, helper_get_index_names_for_type, helper_get_column_name_for_property
    print('Adding constraints for the custom rule')
    for employee, duties in employees.associated(on_call_duties):
        duties_day_idx = duties.join(Day)  # Retrieve Day index from Day label
        for d in Day['index']:
            end = d + limit + 1  # Enforce that there is no occurrence of (limit + 1) consecutive working days
            duties_in_win = duties_day_idx[((duties_day_idx['index'] >= d) & (duties_day_idx['index'] <= end)) | (duties_day_idx['index'] <= end - 7)]
            mdl.add_constraint(mdl.sum(duties_in_win.onCallDutyVar) <= limit)
8.  Click the  Run button to run your model with your custom constraint. When the run is completed, you can see the results in the Explore solution view.
 | 
	# Creating advanced custom constraints with Python #
This  Decision Optimization Modeling Assistant example shows you how to create advanced custom constraints that use Python\.
## Procedure ##
To create a new advanced custom constraint:
<!-- <ol> -->
1.  In the   Build model view of your open  Modeling Assistant model, look at the  Suggestions pane\. If you have  Display by category selected, expand the  Others section to locate  New custom constraint, and click it to add it to your model\. Alternatively, without categories displayed, you can enter, for example, custom in the search field to find the same suggestion and click it to add it to your model\. A new custom constraint is added to your model\.
    
    
2.  Click  Enter your constraint\. Use \[brackets\] for data, concepts, variables, or parameters and enter the constraint you want to specify\. For example, type No \[employees\] has \[onCallDuties\] for more than \[2\] consecutive days and press enter\. The specification is displayed with default parameters (`parameter1, parameter2, parameter3`) for you to customize\. These parameters will be passed to the Python function that implements this custom rule\.
    
    
3.  Edit the default parameters in the specification to give them more meaningful names\. For example, change the parameters to `employees, on_call_duties`, and `limit` and click enter\.
4.  Click function name and enter a name for the function\. For example, type limitConsecutiveAssignments and click enter\. Your function name is added and an  Edit Python button appears\.
    
    
5.  Click the  Edit Python button\. A new window opens showing you Python code that you can edit to implement your custom rule\. You can see your customized parameters in the code as follows:
    
    
    
    Notice that the code is documented with corresponding data frames and table column names as you have defined in the custom rule. The limit is not documented as this is a numerical value.
6.  Optional: You can edit the Python code directly in this window, but you might find it useful to edit and debug your code in a notebook before using it here\. In this case, close this window for now and in the  Scenario pane, expand the three vertical dots and select  Generate a notebook for this scenario that contains the custom rule\. Enter a name for this notebook\. The notebook is created in your project assets, ready for you to edit and debug\. Once you have edited, run, and debugged it, you can copy the code for your custom function back into this  Edit Python window in the Modeling Assistant\.
7.  Edit the Python code in the  Modeling Assistant custom rule  Edit Python window\. For example, you can define the rule for consecutive days in Python as follows:
    
        def limitConsecutiveAssignments(self, mdl, employees, on_call_duties, limit):
                global helper_add_labeled_cplex_constraint, helper_get_index_names_for_type, helper_get_column_name_for_property
                print('Adding constraints for the custom rule')
                for employee, duties in employees.associated(on_call_duties):
                    duties_day_idx = duties.join(Day)  # Retrieve Day index from Day label
                    for d in Day['index']:
                        end = d + limit + 1  # Enforce that there is no occurrence of (limit + 1) consecutive working days
                        duties_in_win = duties_day_idx[((duties_day_idx['index'] >= d) & (duties_day_idx['index'] <= end)) | (duties_day_idx['index'] <= end - 7)]
                        mdl.add_constraint(mdl.sum(duties_in_win.onCallDutyVar) <= limit)
8.  Click the  Run button to run your model with your custom constraint\. When the run is completed, you can see the results in the **Explore solution** view\.
<!-- </ol> -->
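As a sanity check outside the Modeling Assistant, the rule that this custom constraint encodes — no employee on call for more than `limit` consecutive days — can be expressed in a few lines of plain Python. This sketch is independent of DOcplex and of the helper functions above, and the data shapes are hypothetical:

```python
def max_consecutive(days_on, horizon):
    """Length of the longest run of consecutive on-call days.

    days_on: set of 0-based day indices on which the employee is on call.
    horizon: total number of days in the planning period.
    """
    best = run = 0
    for d in range(horizon):
        run = run + 1 if d in days_on else 0
        best = max(best, run)
    return best

def satisfies_limit(days_on, horizon, limit):
    # The rule forbids any run of (limit + 1) consecutive working days,
    # which is equivalent to requiring max_consecutive <= limit.
    return max_consecutive(days_on, horizon) <= limit
```

For example, with `limit = 2`, an employee on call on days 0 and 1 satisfies the rule, while days 0, 1, and 2 violate it.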
<!-- </article "role="article" "> -->
 | 
| 
	C07CD6DF8C92EDD0F2638573BFDCE7BF18AA2EB0 | 
	https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/advancedMA.html?context=cdpaas&locale=en | 
	Creating constraints and custom decisions with the Decision Optimization Modeling Assistant | 
	 Adding multi-concept constraints and custom decisions: shift assignment 
This  Decision Optimization Modeling Assistant example shows you how to use multi-concept iterations and the associated keyword in constraints, how to define your own custom decisions, and how to define logical constraints. For illustration, a resource assignment problem, ShiftAssignment, is used; its completed model with data is provided in the DO-samples.
 Procedure 
To download and open the sample:
1.  Download the  ShiftAssignment.zip file from the  Model_Builder subfolder in the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples).  Select the relevant product and version subfolder.
2.  Open your project or create an empty project.
3.  On the  Manage tab of your project, select the  Services and integrations section and click  Associate service. Then select an existing  Machine Learning service instance (or create a new one) and click  Associate. When the service is associated, a success message is displayed, and you can then close the  Associate service window.
4.  Select the    Assets tab.
5.  Select   New asset > Solve optimization problems in the   Work with models section.
6.  Click  Local file in the   Solve optimization problems window that opens.
7.  Browse locally to find and choose the  ShiftAssignment.zip archive that you downloaded. Click  Open. Alternatively use drag and drop.
8.  Associate a Machine Learning service instance with your project and reload the page.
9.  If you haven't already associated a  Machine Learning service with your project, you must first select  Add a  Machine Learning service to select or create one before you choose a deployment space for your  experiment.
10. Click Create. A  Decision Optimization model is created with the same name as the sample.
11. Open the scenario pane and select the AssignmentWithOnCallDuties scenario.
 Using multi-concept iteration 
 Procedure 
To use multi-concept iteration, follow these steps.
1.  Click   Build model in the sidebar to view your model formulation. The model formulation shows the intent as being to assign employees to shifts, with its objectives and constraints.
2.  Expand the constraint For each Employee-Day combination , number of associated Employee-Shift assignments is less than or equal to 1.
 Defining custom decisions 
 Procedure 
To define custom decisions, follow these steps.
1.  Click   Build model to see the model formulation of the AssignmentWithOnCallDuties Scenario.
The custom decision OnCallDuties is used in the second objective. This objective ensures that the number of on-call duties is balanced over Employees.
The constraint  ensures that the on-call duty requirements that are listed in the Day table are satisfied.
The following steps show you how this custom decision OnCallDuties was defined.
2.  Open the  Settings pane and notice that the  Visualize and edit decisions setting is set to true (or set it to true if it is still set to the default false).
This setting adds a  Decisions tab to your  Add to model window.

Here you can see OnCallDuty is specified as an assignment decision (to assign employees to on-call duties). Its two dimensions are defined with reference to the data tables Day and Employee. This means that your model will also assign on-call duties to employees. The Employee-Shift assignment decision is specified from the original intent.
3.  Optional:  Enter your own text to describe the OnCallDuty in the  [to be documented] field.
4.  Optional:  To create your own decision in the  Decisions tab, click  enter name, type in a name, and press Enter. A new decision (intent) is created with that name with some highlighted fields to be completed by using the drop-down menus. If you, for example, select  assignment  as the  decision type, two dimensions are created. As assignment involves assigning at least one thing to another, at least two dimensions must be defined. Use the  select a table fields to define the dimensions.
 Using logical constraints 
 Procedure 
To use logical constraints:
1.  Look at the constraint. This constraint ensures that, for each employee and day combination, when no associated assignments exist (for example, the employee is on vacation on that day), no on-call duties are assigned to that employee on that day. Note the use of the if...then keywords to define this logical constraint.
2.  Optional:  Add other logical constraints to your model by searching in the suggestions.
 | 
	# Adding multi\-concept constraints and custom decisions: shift assignment #
This  Decision Optimization Modeling Assistant example shows you how to use multi\-concept iterations and the `associated` keyword in constraints, how to define your own custom decisions, and how to define logical constraints\. For illustration, a resource assignment problem, `ShiftAssignment`, is used and its completed model with data is provided in the **DO\-samples**\.
## Procedure ##
To download and open the sample:
<!-- <ol> -->
1.  Download the  ShiftAssignment\.zip file from the  Model\_Builder subfolder in the **[DO\-samples](https://github.com/IBMDecisionOptimization/DO-Samples)**\.  Select the relevant product and version subfolder\.
2.  Open your project or create an empty project\.
3.  On the  Manage tab of your project, select the  Services and integrations section and click  Associate service\. Then select an existing  Machine Learning service instance (or create a new one) and click  Associate\. When the service is associated, a success message is displayed, and you can then close the  Associate service window\. 
4.  Select the    Assets tab\.
5.  Select   New asset > Solve optimization problems in the   Work with models section\.
6.  Click  Local file in the   Solve optimization problems window that opens\.
7.  Browse locally to find and choose the  ShiftAssignment\.zip archive that you downloaded\. Click  Open\. Alternatively use drag and drop\.
8.  Associate a **Machine Learning service instance** with your project and reload the page\.
9.  If you haven't already associated a  Machine Learning service with your project, you must first select  Add a  Machine Learning service to select or create one before you choose a deployment space for your  experiment\.
10. Click **Create**\. A  Decision Optimization model is created with the same name as the sample\.
11. Open the scenario pane and select the `AssignmentWithOnCallDuties` scenario\.
<!-- </ol> -->
<!-- <article "class="topic task nested1" role="article" id="task_multiconceptiterations" "> -->
## Using multi\-concept iteration ##
### Procedure ###
To use multi\-concept iteration, follow these steps\.
<!-- <ol> -->
1.  Click   Build model in the sidebar to view your model formulation\. The model formulation shows the intent as being to assign employees to shifts, with its objectives and constraints\.
2.  Expand the constraint `For each Employee-Day combination , number of associated Employee-Shift assignments is less than or equal to 1`\.
<!-- </ol> -->
<!-- </article "class="topic task nested1" role="article" id="task_multiconceptiterations" "> -->
<!-- <article "class="topic task nested1" role="article" id="task_customdecision" "> -->
## Defining custom decisions ##
### Procedure ###
To define custom decisions, follow these steps\.
<!-- <ol> -->
1.  Click   Build model to see the model formulation of the `AssignmentWithOnCallDuties` Scenario\.
    
    The custom decision `OnCallDuties` is used in the second objective. This objective ensures that the number of on-call duties is balanced over Employees.
    
    The constraint  ensures that the on-call duty requirements that are listed in the Day table are satisfied.
    
    The following steps show you how this custom decision `OnCallDuties` was defined.
2.  Open the  Settings pane and notice that the  Visualize and edit decisions setting is set to `true` (or set it to `true` if it is still set to the default `false`)\.
    
    This setting adds a  Decisions tab to your  Add to model window.
    
    
    
    Here you can see `OnCallDuty` is specified as an assignment decision (to assign employees to on-call duties). Its two dimensions are defined with reference to the data tables `Day` and `Employee`. This means that your model will also assign on-call duties to employees. The Employee-Shift assignment decision is specified from the original intent.
3.  Optional:  Enter your own text to describe the `OnCallDuty` in the  \[to be documented\] field\.
4.  Optional:  To create your own decision in the  Decisions tab, click  enter name, type in a name, and press Enter\. A new decision (intent) is created with that name with some highlighted fields to be completed by using the drop\-down menus\. If you, for example, select  assignment  as the  decision type, two dimensions are created\. As assignment involves assigning at least one thing to another, at least two dimensions must be defined\. Use the  select a table fields to define the dimensions\.
<!-- </ol> -->
<!-- </article "class="topic task nested1" role="article" id="task_customdecision" "> -->
<!-- <article "class="topic task nested1" role="article" id="task_impliedconstraints" "> -->
## Using logical constraints ##
### Procedure ###
To use logical constraints:
<!-- <ol> -->
1.  Look at the constraint\. This constraint ensures that, for each employee and day combination, when no associated assignments exist (for example, the employee is on vacation on that day), no on\-call duties are assigned to that employee on that day\. Note the use of the `if...then` keywords to define this logical constraint\.
2.  Optional:  Add other logical constraints to your model by searching in the suggestions\.
<!-- </ol> -->
<!-- </article "class="topic task nested1" role="article" id="task_impliedconstraints" "> -->
<!-- </article "role="article" "> -->
 | 
| 
	0EFC1AA12637C84918CEF9FA5DE5DA424822330C | 
	https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.html?context=cdpaas&locale=en | 
	Decision Optimization Modeling Assistant scheduling tutorial | 
	 Formulating and running a model: house construction scheduling 
This tutorial shows you how to use the  Modeling Assistant to define, formulate and run a model for a house construction scheduling problem. The completed model with data is also provided in the DO-samples, see [Importing Model Builder samples](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/docExamples.html#Examples__section_modelbuildersamples).
In this section:
*  [Modeling Assistant House construction scheduling tutorial](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.html?context=cdpaas&locale=en#cogusercase__section_The_problem)
*  [More about the model view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.html?context=cdpaas&locale=en#cogusercase__section_tbl_kdj_t1b)
*  [Generating a Python notebook from your scenario](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.html?context=cdpaas&locale=en#cogusercase__section_j2m_xnh_4bb)
 | 
	# Formulating and running a model: house construction scheduling #
This tutorial shows you how to use the  Modeling Assistant to define, formulate and run a model for a house construction scheduling problem\. The completed model with data is also provided in the **DO\-samples**, see [Importing Model Builder samples](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/docExamples.html#Examples__section_modelbuildersamples)\.
In this section:
<!-- <ul> -->
 *  [Modeling Assistant House construction scheduling tutorial](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.html?context=cdpaas&locale=en#cogusercase__section_The_problem)
 *  [More about the model view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.html?context=cdpaas&locale=en#cogusercase__section_tbl_kdj_t1b)
 *  [Generating a Python notebook from your scenario](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.html?context=cdpaas&locale=en#cogusercase__section_j2m_xnh_4bb)
<!-- </ul> -->
<!-- </article "role="article" "> -->
 | 
| 
	312E91752782553D39C335D0DAAF189025739BB4 | 
	https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuildintro.html?context=cdpaas&locale=en | 
	Decision Optimization Modeling Assistant models | 
	 Modeling Assistant models 
You can model and solve  Decision Optimization problems using the  Modeling Assistant (which enables you to formulate models in natural language). This requires little to no knowledge of Operational Research (OR) and does not require you to write Python code. The  Modeling Assistant is only available in English and is not globalized.
The basic workflow to create a model with the  Modeling Assistant and examine it under different scenarios is as follows:
1.  Create a project.
2.  Add a Decision Optimization  experiment (a scenario is created by default in the  experiment UI).
3.  Add and import your data into the scenario.
4.  Create a natural language model in the scenario, by first selecting your  decision domain and then using the  Modeling Assistant to guide you.
5.  Run the model to solve it and explore the solution.
6.  Create visualizations of solution and data.
7.  Copy the scenario and edit the model and/or the data.
8.  Solve the new scenario to see the impact of these changes.

This is demonstrated with a simple [planning and scheduling example](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.html#cogusercase).
For more information about deployment see .
 | 
	# Modeling Assistant models #
You can model and solve  Decision Optimization problems using the  Modeling Assistant (which enables you to formulate models in natural language)\. This requires little to no knowledge of Operational Research (OR) and does not require you to write Python code\. The  Modeling Assistant is **only available in English** and is not globalized\.
The basic workflow to create a model with the  Modeling Assistant and examine it under different scenarios is as follows:
<!-- <ol> -->
1.  Create a project\.
2.  Add a Decision Optimization  experiment (a scenario is created by default in the  experiment UI)\.
3.  Add and import your data into the scenario\.
4.  Create a natural language model in the scenario, by first selecting your  decision domain and then using the  Modeling Assistant to guide you\.
5.  Run the model to solve it and explore the solution\.
6.  Create visualizations of solution and data\.
7.  Copy the scenario and edit the model and/or the data\.
8.  Solve the new scenario to see the impact of these changes\.
<!-- </ol> -->

This is demonstrated with a simple [planning and scheduling example ](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/exhousebuild.html#cogusercase)\.
For more information about deployment see \.
<!-- </article "role="article" "> -->
 | 
| 
	2746F2E53D41F5810D92D843AF8C0AB2B36A0D47 | 
	https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Mdl_Assist/mdl_asst_domains.html?context=cdpaas&locale=en | 
	Selecting a Decision domain in the Modeling Assistant | 
	 Selecting a Decision domain in the  Modeling Assistant 
There are different  decision domains currently available in the  Modeling Assistant and you can be guided to choose the right  domain for your problem.
Once you have added and imported your data into your model, the  Modeling Assistant helps you to formulate your optimization model by offering you suggestions in natural language that you can edit. In order to make intelligent suggestions using your data, and to ensure that the proposed model formulation is well suited to your problem, you are asked to start by selecting a  decision domain for your model.
If you need a  decision domain that is not currently supported by the  Modeling Assistant, you can still formulate your model as a Python  notebook or as an OPL model in the  experiment UI editor.
 | 
	# Selecting a Decision domain in the  Modeling Assistant #
There are different  decision domains currently available in the  Modeling Assistant and you can be guided to choose the right  domain for your problem\.
Once you have added and imported your data into your model, the  Modeling Assistant helps you to formulate your optimization model by offering you suggestions in natural language that you can edit\. In order to make intelligent suggestions using your data, and to ensure that the proposed model formulation is well suited to your problem, you are asked to start by selecting a  decision domain for your model\.
If you need a  decision domain that is not currently supported by the  Modeling Assistant, you can still formulate your model as a Python  notebook or as an OPL model in the  experiment UI editor\.
<!-- </article "role="article" "> -->
 | 
| 
	F37BD72C28F0DAC8D9478ECEABA4F077ABCDE0C9 | 
	https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/createScenario.html?context=cdpaas&locale=en | 
	Decision Optimization notebook tutorial create new scenario | 
	 Create new scenario 
To solve with different versions of your model or data you can create new scenarios in the  Decision Optimization experiment UI.
 Procedure 
To create a new scenario:
1.  Click the Open scenario pane icon  to open the Scenario panel.
2.  Use the  Create Scenario drop-down menu to create a new scenario from the current one.
3.  Add a name for the duplicate scenario and click Create.
4.  Working in your new scenario, in the  Prepare data view, open the diet_food data table in full mode.
5.  Locate the entry for Hotdog at row 9, and set the qmax value to 0 to exclude hot dog from possible solutions.
6.  Switch to the Build model view and run the model again.
7.  You can see the impact of your changes on the solution by switching from one scenario to the other.
 | 
	# Create new scenario #
To solve with different versions of your model or data you can create new scenarios in the  Decision Optimization experiment UI\.
## Procedure ##
To create a new scenario:
<!-- <ol> -->
1.  Click the **Open scenario pane** icon  to open the **Scenario** panel\. 
2.  Use the  Create Scenario drop\-down menu to create a new scenario from the current one\.
3.  Add a name for the duplicate scenario and click **Create**\.
4.  Working in your new scenario, in the  Prepare data view, open the `diet_food` data table in full mode\.
5.  Locate the entry for *Hotdog* at row 9, and set the `qmax` value to 0 to exclude hot dog from possible solutions\.
6.  Switch to the **Build model** view and run the model again\.
7.  You can see the impact of your changes on the solution by switching from one scenario to the other\.
<!-- </ol> -->
<!-- </article "role="article" "> -->
 | 
| 
	056E37762231E9E32F0F443987C32ACF7BF1AED4 | 
	https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/multiIntro.html?context=cdpaas&locale=en | 
	Decision Optimization notebook multiple scenarios | 
	 Working with multiple scenarios 
You can generate multiple scenarios to test your model against a wide range of data and understand how robust the model is.
This example steps you through the process to generate multiple scenarios with a model. This makes it possible to test the performance of the model against multiple randomly generated data sets. It's important in practice to check the robustness of a model against a wide range of data. This helps ensure that the model performs well in potentially stochastic real-world conditions.
The example is the StaffPlanning model in the DO-samples.
The example is structured as follows:
*  The model StaffPlanning contains a default scenario based on two default data sets, along with five additional scenarios based on randomized data sets.
*  The Python  notebook CopyAndSolveScenarios contains the random generator to create the new scenarios in the StaffPlanning model.
For general information about scenario management and configuration, see [Scenario pane](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__scenariopanel) and [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__section_overview).
For information about writing methods and classes for scenarios, see the [ Decision Optimization Client Python API documentation](https://ibmdecisionoptimization.github.io/decision-optimization-client-doc/).
 | 
	# Working with multiple scenarios #
You can generate multiple scenarios to test your model against a wide range of data and understand how robust the model is\.
This example steps you through the process to generate multiple scenarios with a model\. This makes it possible to test the performance of the model against multiple randomly generated data sets\. It's important in practice to check the robustness of a model against a wide range of data\. This helps ensure that the model performs well in potentially stochastic real\-world conditions\.
The example is the `StaffPlanning` model in the **DO\-samples**\.
The example is structured as follows:
<!-- <ul> -->
 *  The model `StaffPlanning` contains a default scenario based on two default data sets, along with five additional scenarios based on randomized data sets\.
 *  The Python  notebook `CopyAndSolveScenarios` contains the random generator to create the new scenarios in the `StaffPlanning` model\.
<!-- </ul> -->
For general information about scenario management and configuration, see [Scenario pane](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__scenariopanel) and [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__section_overview)\.
For information about writing methods and classes for scenarios, see the [ Decision Optimization Client Python API documentation](https://ibmdecisionoptimization.github.io/decision-optimization-client-doc/)\.
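The random generation itself needs nothing beyond the standard library. The sketch below shows one way to produce a randomized data set per scenario; the table layout, column names, and scenario count are illustrative and not taken from the sample:

```python
import random

def make_demand_table(n_days, seed):
    """One randomized demand data set; the column names are illustrative."""
    rng = random.Random(seed)  # one seed per scenario keeps runs reproducible
    return [{"day": d, "demand": rng.randint(30, 60)} for d in range(n_days)]

# Five randomized data sets, one per additional scenario.
scenario_tables = {f"Scenario {i}": make_demand_table(7, seed=i) for i in range(1, 6)}
```

In the sample, tables like these are attached to copied scenarios through the Decision Optimization Client and each scenario is then solved in turn.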
<!-- </article "role="article" "> -->
 | 
| 
	3BEB81A5A5953CD570FA673B2496F8AF98725438 | 
	https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/multiScenario.html?context=cdpaas&locale=en | 
	Decision Optimization notebook generating multiple scenarios | 
	 Generating multiple scenarios 
This tutorial shows you how to generate multiple scenarios from a  notebook using randomized data. Generating multiple scenarios lets you test a model by exposing it to a wide range of data.
 Procedure 
To create and solve a scenario using a sample:
1.  Download and extract all the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples) onto your machine. You can also download just the  StaffPlanning.zip file from the  Model_Builder subfolder for your product and version, but in this case do not extract it.
2.  Open your project or create an empty project.
3.  On the  Manage tab of your project, select the  Services and integrations section and click  Associate service. Then select an existing  Machine Learning service instance (or create a new one) and click  Associate. When the service is associated, a success message is displayed, and you can then close the  Associate service window.
4.  Select the    Assets tab.
5.  Select   New asset > Solve optimization problems in the   Work with models section.
6.  Click  Local file in the   Solve optimization problems window that opens.
7.  Browse to choose the  StaffPlanning.zip file in the Model_Builder folder.  Select the relevant product and version subfolder in your downloaded  DO-samples.
8.  If you haven't already associated a  Machine Learning service with your project, you must first select  Add a  Machine Learning service to select or create one before you choose a deployment space for your  experiment.
9.  Click Create. A  Decision Optimization model is created with the same name as the sample.
10. Working in Scenario 1 of the StaffPlanning model, you can see that the solution contains tables to identify which resources work which days to meet expected demand. If there is no solution displayed, or to rerun the model, click Build model in the sidebar, then click Run to solve the model.
 | 
	# Generating multiple scenarios #
This tutorial shows you how to generate multiple scenarios from a  notebook using randomized data\. Generating multiple scenarios lets you test a model by exposing it to a wide range of data\.
## Procedure ##
To create and solve a scenario using a sample:
<!-- <ol> -->
1.  Download and extract all the **[DO\-samples](https://github.com/IBMDecisionOptimization/DO-Samples)** onto your machine\. You can also download just the  StaffPlanning\.zip file from the  Model\_Builder subfolder for your product and version, but in this case do not extract it\.
2.  Open your project or create an empty project\.
3.  On the  Manage tab of your project, select the  Services and integrations section and click  Associate service\. Then select an existing  Machine Learning service instance (or create a new one) and click  Associate\. When the service is associated, a success message is displayed, and you can then close the  Associate service window\. 
4.  Select the    Assets tab\.
5.  Select   New asset > Solve optimization problems in the   Work with models section\.
6.  Click  Local file in the   Solve optimization problems window that opens\.
7.  Browse to choose the  StaffPlanning\.zip file in the **Model\_Builder** folder\.  Select the relevant product and version subfolder in your downloaded  DO\-samples\. 
8.  If you haven't already associated a  Machine Learning service with your project, you must first select  Add a  Machine Learning service to select or create one before you choose a deployment space for your  experiment\.
9.  Click **Create**\. A  Decision Optimization model is created with the same name as the sample\.
10. Working in Scenario 1 of the `StaffPlanning` model, you can see that the solution contains tables to identify which resources work which days to meet expected demand\. If there is no solution displayed, or to rerun the model, click **Build model** in the sidebar, then click **Run** to solve the model\.
<!-- </ol> -->
<!-- </article "role="article" "> -->
 | 
| 
	DECCA51BACC7BE33F484D36177B24C4BD0FE4CFD | 
	https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/preparedataIO.html?context=cdpaas&locale=en | 
	Decision Optimization input and output data | 
	 Input and output data 
You can access the input and output data you defined in the  experiment UI by using the following dictionaries.
The data that you imported in the Prepare data view in the  experiment UI is accessible from the inputs dictionary. You reference each table by using the syntax inputs['tablename']. For example, here food is an entity that is defined from the table called diet_food:
food = inputs['diet_food']
Similarly, to show tables in the  Explore solution  view of the  experiment UI you must specify them using the syntax outputs['tablename']. For example,
outputs['solution'] = solution_df
defines an output table that is called solution. The entity solution_df in the Python model defines this table.
You can find this Diet example in the  Model_Builder folder of the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples). To import and run (solve) it in the  experiment UI, see [Solving and analyzing a model: the diet problem](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/solveModel.html#task_mtg_n3q_m1b).
 | 
	# Input and output data #
You can access the input and output data you defined in the  experiment UI by using the following dictionaries\.
The data that you imported in the **Prepare data view** in the  experiment UI is accessible from the `inputs` dictionary\. You reference each table by using the syntax `inputs['tablename']`\. For example, here `food` is an entity that is defined from the table called `diet_food`:
    food = inputs['diet_food']
Similarly, to show tables in the  Explore solution  view of the  experiment UI you must specify them using the syntax `outputs['tablename']`\. For example,
    outputs['solution'] = solution_df
defines an output table that is called `solution`\. The entity `solution_df` in the Python model defines this table\.
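Outside the experiment UI, the same pattern can be tried with plain dictionaries standing in for the tables. In the UI each entry is a pandas DataFrame; the rows below are illustrative stand-ins only:

```python
# Plain-dict stand-ins for the dictionaries the experiment UI provides;
# the diet_food row here is illustrative sample data.
inputs = {"diet_food": [{"name": "Hotdog", "unit_cost": 0.31, "qmax": 2}]}
outputs = {}

food = inputs["diet_food"]          # read an input table by name

# Publish a table under the name shown in the Explore solution view.
solution_df = [{"name": row["name"], "value": 1.0} for row in food]
outputs["solution"] = solution_df
```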
You can find this Diet example in the  Model\_Builder folder of the [DO\-samples](https://github.com/IBMDecisionOptimization/DO-Samples)\. To import and run (solve) it in the  experiment UI, see [Solving and analyzing a model: the diet problem](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/solveModel.html#task_mtg_n3q_m1b)\.
<!-- </article "role="article" "> -->
 | 
| 
	726175290D457B10A02C27F08ECA1F6546E64680 | 
	https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/solveIntro.html?context=cdpaas&locale=en | 
	Python DOcplex models | 
	 Python DOcplex models 
You can solve Python   DOcplex models in a  Decision Optimization experiment.
The  Decision Optimization environment currently supports Python  3.10, which is the default version. You can modify this default version on the Environment tab of the [Run configuration pane](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__section_runconfig) or from the [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__section_overview) information pane.
The basic workflow to create a Python DOcplex model in  Decision Optimization, and examine it under different scenarios, is as follows:
1.  Create a project.
2.  Add data to the project.
3.  Add a  Decision Optimization experiment (a scenario is created by default in the  experiment UI).
4.  Select and import your data into the scenario.
5.  Create or import your Python model.
6.  Run the model to solve it and explore the solution.
7.  Copy the scenario and edit the data in the context of the new scenario.
8.  Solve the new scenario to see the impact of the changes to data.

 | 
	# Python DOcplex models #
You can solve Python   DOcplex models in a  Decision Optimization experiment\.
The  Decision Optimization environment currently supports Python  3\.10, which is the default version\. You can modify this default version on the Environment tab of the [Run configuration pane](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__section_runconfig) or from the [Overview](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__section_overview) information pane\.
The basic workflow to create a Python DOcplex model in  Decision Optimization, and examine it under different scenarios, is as follows:
<!-- <ol> -->
1.  Create a project\.
2.  Add data to the project\.
3.  Add a  Decision Optimization experiment (a scenario is created by default in the  experiment UI)\.
4.  Select and import your data into the scenario\.
5.  Create or import your Python model\.
6.  Run the model to solve it and explore the solution\.
7.  Copy the scenario and edit the data in the context of the new scenario\.
8.  Solve the new scenario to see the impact of the changes to data\.
<!-- </ol> -->

<!-- </article "role="article" "> -->
 | 
| 
	2E1F6D5703CE75AF284903C20E5DBDFA1AE706B4 | 
	https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Notebooks/solveModel.html?context=cdpaas&locale=en | 
	Decision Optimization notebook tutorial | 
	 Solving and analyzing a model: the diet problem 
This example shows you how to create and solve a Python-based model by using a sample.
 Procedure 
To create and solve a Python-based model by using a sample:
1.  Download and extract all the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples) onto your computer. You can also download just the  diet.zip file from the  Model_Builder subfolder for your product and version, but in this case, do not extract it.
2.  Open your project or create an empty project.
3.  On the  Manage tab of your project, select the  Services and integrations section and click  Associate service. Then select an existing  Machine Learning service instance (or create a new one) and click  Associate. When the service is associated, a success message is displayed, and you can then close the  Associate service window.
4.  Select the    Assets tab.
5.  Select   New asset > Solve optimization problems in the   Work with models section.
6.  Click  Local file in the   Solve optimization problems window that opens.
7.  Browse to find the  Model_Builder folder in your downloaded  DO-samples.  Select the relevant product and version subfolder. Choose the  Diet.zip file and click  Open. Alternatively use drag and drop.
8.  If you haven't already associated a  Machine Learning service with your project, you must first select  Add a  Machine Learning service to select or create one before you choose a deployment space for your  experiment.
9.  Click  New deployment space, enter a name, and click  Create (or select an existing space from the drop-down menu).
10. Click Create. A  Decision Optimization model is created with the same name as the sample.
11. In the   Prepare data view, you can see the data assets imported. These tables represent the min and max values for nutrients in the diet (diet_nutrients), the nutrients in different foods (diet_food_nutrients), and the price and quantity of specific foods (diet_food).

12. Click   Build model in the sidebar to view your model. The Python model minimizes the cost of the food in the diet while satisfying minimum nutrient and calorie requirements.

Note also how the inputs (tables in the  Prepare data view) and the outputs (in this case the solution table to be displayed in the Explore solution  view) are specified in this model.
13. Run the model by clicking the Run button in the   Build model view.
 | 
	# Solving and analyzing a model: the diet problem #
This example shows you how to create and solve a Python\-based model by using a sample\.
## Procedure ##
To create and solve a Python\-based model by using a sample:
<!-- <ol> -->
1.  Download and extract all the [DO\-samples](https://github.com/IBMDecisionOptimization/DO-Samples) on to your computer\. You can also download just the  diet\.zip file from the  Model\_Builder subfolder for your product and version, but in this case, do not extract it\.
2.  Open your project or create an empty project\.
3.  On the  Manage tab of your project, select the  Services and integrations section and click  Associate service\. Then select an existing  Machine Learning service instance (or create a new one) and click  Associate\. When the service is associated, a success message is displayed, and you can then close the  Associate service window\.
4.  Select the    Assets tab\.
5.  Select   New asset > Solve optimization problems in the   Work with models section\.
6.  Click  Local file in the   Solve optimization problems window that opens\.
7.  Browse to find the  Model\_Builder folder in your downloaded  DO\-samples\.  Select the relevant product and version subfolder\. Choose the  Diet\.zip file and click  Open\. Alternatively use drag and drop\.
8.  If you haven't already associated a  Machine Learning service with your project, you must first select  Add a  Machine Learning service to select or create one before you choose a deployment space for your  experiment\.
9.  Click  New deployment space, enter a name, and click  Create (or select an existing space from the drop\-down menu)\.
10. Click **Create**\. A  Decision Optimization model is created with the same name as the sample\.
11. In the   Prepare data view, you can see the data assets imported\. These tables represent the min and max values for nutrients in the diet (`diet_nutrients`), the nutrients in different foods (`diet_food_nutrients`), and the price and quantity of specific foods (`diet_food`)\.
    
    
12. Click   Build model in the sidebar to view your model\. The Python model minimizes the cost of the food in the diet while satisfying minimum nutrient and calorie requirements\.
    
    
    
    Note also how the **inputs** (tables in the  Prepare data view) and the **outputs** (in this case the solution table to be displayed in the Explore solution  view) are specified in this model.
13. Run the model by clicking the **Run** button in the   Build model view\.
<!-- </ol> -->
<!-- </article "role="article" "> -->
 | 
| 
	D51AD51E5407BF4EFAE5C97FE7E031DB56CF8733 | 
	https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_RunParameters/runparams.html?context=cdpaas&locale=en | 
	Decision Optimization run parameters | 
	 Run parameters and Environment 
You can select various run parameters for the optimization solve in the  Decision Optimization experiment UI.
Quick links to sections:
*  [CPLEX runtime version](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_RunParameters/runparams.html?context=cdpaas&locale=en#RunConfig__cplexruntime)
*  [Python version](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_RunParameters/runparams.html?context=cdpaas&locale=en#RunConfig__pyversion)
*  [Run configuration parameters](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_RunParameters/runparams.html?context=cdpaas&locale=en#RunConfig__section_runconfig)
*  [Environment for scenario](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_RunParameters/runparams.html?context=cdpaas&locale=en#RunConfig__section_runparamenv)
 | 
	# Run parameters and Environment #
You can select various run parameters for the optimization solve in the  Decision Optimization experiment UI\.
Quick links to sections:
<!-- <ul> -->
 *  [CPLEX runtime version](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_RunParameters/runparams.html?context=cdpaas&locale=en#RunConfig__cplexruntime)
 *  [Python version](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_RunParameters/runparams.html?context=cdpaas&locale=en#RunConfig__pyversion)
 *  [Run configuration parameters](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_RunParameters/runparams.html?context=cdpaas&locale=en#RunConfig__section_runconfig)
 *  [Environment for scenario](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_RunParameters/runparams.html?context=cdpaas&locale=en#RunConfig__section_runparamenv)
<!-- </ul> -->
<!-- </article "role="article" "> -->
 | 
| 
	C6EE4CACFC1E29BAFBB8ED5D98521EA68388D0CB | 
	https://dataplatform.cloud.ibm.com/docs/content/DO/DOWS-Cloud_home.html?context=cdpaas&locale=en | 
	Decision Optimization | 
	 Decision Optimization 
IBM®  Decision Optimization gives you access to IBM's industry-leading solution engines for mathematical programming and constraint programming. You can build  Decision Optimization models either with  notebooks or by using the powerful  Decision Optimization experiment UI (Beta version). Here you can import, or create and edit models in Python, in OPL or with natural language expressions provided by the intelligent  Modeling Assistant (Beta version). You can also deploy models with  Watson Machine Learning.
Data format
:   Tabular: .csv, .xls, .json files. See [Prepare data view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__section_preparedata)
Data from [Connected data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html)
For deployment see [Model input and output data file formats](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIOFileFormats.html)
Data size
:   Any
 | 
	# Decision Optimization #
IBM®  Decision Optimization gives you access to IBM's industry\-leading solution engines for mathematical programming and constraint programming\. You can build  Decision Optimization models either with  notebooks or by using the powerful  Decision Optimization experiment UI (Beta version)\. Here you can import, or create and edit models in Python, in OPL or with natural language expressions provided by the intelligent  Modeling Assistant (Beta version)\. You can also deploy models with  Watson Machine Learning\.
Data format
:   Tabular: `.csv`, `.xls`, `.json` files\. See [Prepare data view](https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/modelbuilderUI.html#ModelBuilderInterface__section_preparedata)
    
    Data from [Connected data assets](https://dataplatform.cloud.ibm.com/docs/content/wsj/manage-data/connected-data.html)
    
    For deployment see [Model input and output data file formats](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIOFileFormats.html)
Data size
:   Any
<!-- </article "role="article" "> -->
 | 
| 
	E45F37BDDB38D6656992642FBEA2707FE34E942A | 
	https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/CPLEXSolveWML.html?context=cdpaas&locale=en | 
	Delegating CPLEX solve to Watson Machine Learning | 
	 Delegating the  Decision Optimization solve to run on  Watson Machine Learning from Java or .NET CPLEX or CPO models 
You can delegate the  Decision Optimization solve to run on  Watson Machine Learning from your Java or .NET (CPLEX or CPO) models.
Delegating the solve is only useful if you are building and generating your models locally. You cannot deploy models and run jobs in  Watson Machine Learning with this method. For full use of Java models on  Watson Machine Learning, use the  Java™ worker. Important: To deploy and test models on  Watson Machine Learning, use the  Java worker. For more information about deploying Java models, see the [Java worker GitHub](https://github.com/IBMDecisionOptimization/cplex-java-worker/blob/master/README.md). For the library and documentation for:
*  Java CPLEX or CPO models. See [Decision Optimization GitHub DOforWMLwithJava](https://github.com/IBMDecisionOptimization/DOforWMLwithJava).
*  .NET CPLEX or CPO models. See [Decision Optimization GitHub DOforWMLWith.NET](https://github.com/IBMDecisionOptimization/DOForWMLWith.NET).
 | 
	# Delegating the  Decision Optimization solve to run on  Watson Machine Learning from Java or \.NET CPLEX or CPO models #
You can delegate the  Decision Optimization solve to run on  Watson Machine Learning from your Java or \.NET (CPLEX or CPO) models\.
Delegating the solve is only useful if you are building and generating your models locally\. You cannot deploy models and run jobs in  Watson Machine Learning with this method\. For full use of Java models on  Watson Machine Learning, use the  Java™ worker\. Important: To deploy and test models on  Watson Machine Learning, use the  Java worker\. For more information about deploying Java models, see the [Java worker GitHub](https://github.com/IBMDecisionOptimization/cplex-java-worker/blob/master/README.md)\. For the library and documentation for:
<!-- <ul> -->
 *  Java CPLEX or CPO models\. See [Decision Optimization GitHub DOforWMLwithJava](https://github.com/IBMDecisionOptimization/DOforWMLwithJava)\.
 *  \.NET CPLEX or CPO models\. See [Decision Optimization GitHub DOforWMLWith\.NET](https://github.com/IBMDecisionOptimization/DOForWMLWith.NET)\.
<!-- </ul> -->
<!-- </article "role="article" "> -->
 | 
| 
	5BC48AB9A35E2E8BAEA5204C4406835154E2B836 | 
	https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployIntro.html?context=cdpaas&locale=en | 
	Decision Optimization deployment steps | 
	 Deployment steps 
With IBM  Watson Machine Learning you can deploy your  Decision Optimization prescriptive model and associated common data once and then submit job requests to this deployment with only the related transactional data. This deployment can be achieved by using the  Watson Machine Learning REST API or by using the  Watson Machine Learning Python client.
See [REST API example](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployModelRest.html#task_deploymodelREST) for a full code example. See [Python client examples](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployPythonClient.html#topic_wmlpythonclient) for a link to a Python  notebook available from the  Samples.
 | 
	# Deployment steps #
With IBM  Watson Machine Learning you can deploy your  Decision Optimization prescriptive model and associated common data once and then submit job requests to this deployment with only the related transactional data\. This deployment can be achieved by using the  Watson Machine Learning REST API or by using the  Watson Machine Learning Python client\.
See [REST API example](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployModelRest.html#task_deploymodelREST) for a full code example\. See [Python client examples](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployPythonClient.html#topic_wmlpythonclient) for a link to a Python  notebook available from the  Samples\.
<!-- </article "role="article" "> -->
 | 
| 
	134EB5D79038B55A3A6AC019016A21EC2B6A1917 | 
	https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployJava.html?context=cdpaas&locale=en | 
	Deploying Java models | 
	 Deploying  Java models for  Decision Optimization 
You can deploy  Decision Optimization Java models in  Watson Machine Learning by using the  Watson Machine Learning REST API.
With the  Java worker API, you can create optimization models with OPL, CPLEX, and CP Optimizer Java APIs. Therefore, you can easily create your models locally, package them and deploy them on Watson Machine Learning by using the boilerplate that is provided in the public [Java worker GitHub](https://github.com/IBMDecisionOptimization/cplex-java-worker/blob/master/README.md).
The  Decision Optimization [Java worker GitHub](https://github.com/IBMDecisionOptimization/cplex-java-worker/blob/master/README.md) contains a boilerplate with everything that you need to run, deploy, and verify your Java models in  Watson Machine Learning, including an example. You can use the code in this repository to package your  Decision Optimization Java model in a .jar file that can be used as a  Watson Machine Learning model. For more information about  Java worker parameters, see the [Java documentation](https://github.com/IBMDecisionOptimization/do-maven-repo/blob/master/com/ibm/analytics/optim/api_java_client/1.0.0/api_java_client-1.0.0-javadoc.jar).
You can build your  Decision Optimization models in Java or you can use  Java worker to package CPLEX, CPO, and OPL models.
For more information about these models, see the following reference manuals.
*  [Java CPLEX reference documentation](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.cplex.help/refjavacplex/html/overview-summary.html)
*  [Java CPO reference documentation](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.cpo.help/refjavacpoptimizer/html/overview-summary.html)
*  [Java OPL reference documentation](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.ide.help/refjavaopl/html/overview-summary.html)
 | 
	# Deploying  Java models for  Decision Optimization #
You can deploy  Decision Optimization Java models in  Watson Machine Learning by using the  Watson Machine Learning REST API\.
With the  Java worker API, you can create optimization models with OPL, CPLEX, and CP Optimizer Java APIs\. Therefore, you can easily create your models locally, package them and deploy them on Watson Machine Learning by using the boilerplate that is provided in the public [Java worker GitHub](https://github.com/IBMDecisionOptimization/cplex-java-worker/blob/master/README.md)\.
The  Decision Optimization [Java worker GitHub](https://github.com/IBMDecisionOptimization/cplex-java-worker/blob/master/README.md) contains a boilerplate with everything that you need to run, deploy, and verify your Java models in  Watson Machine Learning, including an example\. You can use the code in this repository to package your  Decision Optimization Java model in a `.jar` file that can be used as a  Watson Machine Learning model\. For more information about  Java worker parameters, see the [Java documentation](https://github.com/IBMDecisionOptimization/do-maven-repo/blob/master/com/ibm/analytics/optim/api_java_client/1.0.0/api_java_client-1.0.0-javadoc.jar)\.
You can build your  Decision Optimization models in Java or you can use  Java worker to package CPLEX, CPO, and OPL models\.
For more information about these models, see the following reference manuals\.
<!-- <ul> -->
 *  [Java CPLEX reference documentation](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.cplex.help/refjavacplex/html/overview-summary.html)
 *  [Java CPO reference documentation](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.cpo.help/refjavacpoptimizer/html/overview-summary.html)
 *  [Java OPL reference documentation](https://www.ibm.com/docs/en/SSSA5P_22.1.1/ilog.odms.ide.help/refjavaopl/html/overview-summary.html)
<!-- </ul> -->
<!-- </article "role="article" "> -->
 | 
| 
	B92F42609B54B82BFE38A69B781052E876258C2C | 
	https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployModelRest.html?context=cdpaas&locale=en | 
	Decision Optimization REST API deployment | 
	 REST API example 
You can deploy a  Decision Optimization model, create and monitor jobs and get solutions using the  Watson Machine Learning REST API.
 Procedure 
1.  Generate an IAM token using your [IBM Cloud API key](https://cloud.ibm.com/iam/apikeys) as follows.
curl "https://iam.bluemix.net/identity/token" \
-d "apikey=YOUR_API_KEY_HERE&grant_type=urn%3Aibm%3Aparams%3Aoauth%3Agrant-type%3Aapikey" \
-H "Content-Type: application/x-www-form-urlencoded" \
-H "Authorization: Basic Yng6Yng="
Output example:
{
"access_token": " obtained IAM token ",
"refresh_token": "",
"token_type": "Bearer",
"expires_in": 3600,
"expiration": 1554117649,
"scope": "ibm openid"
}
Use the obtained token (access_token value) prepended by the word Bearer in the Authorization header, and the Machine Learning service GUID in the ML-Instance-ID header, in all API calls.
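The token request and the headers for later calls can be sketched with Python's standard library. This is only an illustration of the request shape shown above; the helper names (`iam_token_request_body`, `api_headers`) are mine, not part of any IBM client, and no network call is made here.

```python
import json
import urllib.parse

def iam_token_request_body(api_key):
    """Form-encoded body for POST https://iam.bluemix.net/identity/token."""
    return urllib.parse.urlencode({
        "apikey": api_key,
        "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
    })

def api_headers(token_response, ml_instance_guid):
    """Headers required on all later API calls: the IAM access token
    prepended by 'Bearer', and the Machine Learning service GUID."""
    return {
        "Authorization": "Bearer " + token_response["access_token"],
        "ML-Instance-ID": ml_instance_guid,
        "Content-Type": "application/json",
    }

# Parse a token response shaped like the output example above.
sample = json.loads('{"access_token": "abc123", "token_type": "Bearer", "expires_in": 3600}')
headers = api_headers(sample, "MY-ML-GUID")
```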
2.  Optional: If you have not obtained your SPACE-ID from the user interface as described previously, you can create a space using the REST API as follows. Use the previously obtained token prepended by the word bearer in the Authorization header in all API calls.
curl --location --request POST \
"https://api.dataplatform.cloud.ibm.com/v2/spaces" \
-H "Authorization: Bearer TOKEN-HERE" \
-H "ML-Instance-ID: MACHINE-LEARNING-SERVICE-GUID-HERE" \
-H "Content-Type: application/json" \
--data-raw "{
"name": "SPACE-NAME-HERE",
"description": "optional description here",
"storage": {
"resource_crn": "COS-CRN-ID-HERE"
},
"compute": [{
"name": "MACHINE-LEARNING-SERVICE-NAME-HERE",
"crn": "MACHINE-LEARNING-SERVICE-CRN-ID-HERE"
}]
}"
For Windows users, put the --data-raw command on one line and replace all " with \" inside this command as follows:
curl --location --request POST ^
"https://api.dataplatform.cloud.ibm.com/v2/spaces" ^
-H "Authorization: Bearer TOKEN-HERE" ^
-H "ML-Instance-ID: MACHINE-LEARNING-SERVICE-GUID-HERE" ^
-H "Content-Type: application/json" ^
--data-raw "{\"name\": \"SPACE-NAME-HERE\",\"description\": \"optional description here\",\"storage\": {\"resource_crn\": \"COS-CRN-ID-HERE\"  },\"compute\": [{\"name\": \"MACHINE-LEARNING-SERVICE-NAME-HERE\",\"crn\": \"MACHINE-LEARNING-SERVICE-CRN-ID-HERE\"  }]}"
Alternatively, put the data in a separate file. A SPACE-ID is returned in the id field of the metadata section.
Output example:
{
"entity": {
"compute": [
{
"crn": "MACHINE-LEARNING-SERVICE-CRN",
"guid": "MACHINE-LEARNING-SERVICE-GUID",
"name": "MACHINE-LEARNING-SERVICE-NAME",
"type": "machine_learning"
}
],
"description": "string",
"members": [
{
"id": "XXXXXXX",
"role": "admin",
"state": "active",
"type": "user"
}
],
"name": "name",
"scope": {
"bss_account_id": "account_id"
},
"status": {
"state": "active"
}
},
"metadata": {
"created_at": "2020-07-17T08:36:57.611Z",
"creator_id": "XXXXXXX",
"id": "SPACE-ID",
"url": "/v2/spaces/SPACE-ID"
}
}
You must wait until your deployment space status is "active" before continuing. You can poll to check for this as follows.
curl --location --request GET "https://api.dataplatform.cloud.ibm.com/v2/spaces/SPACE-ID-HERE" \
-H "Authorization: bearer TOKEN-HERE" \
-H "Content-Type: application/json"
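The polling loop the doc describes can be sketched as follows. `fetch_space` stands in for whatever performs the GET request above and returns the parsed JSON; the function names are illustrative assumptions, not an IBM API.

```python
import time

def space_is_active(space_response):
    """True once the deployment space reports state 'active'."""
    return space_response["entity"]["status"]["state"] == "active"

def wait_until_active(fetch_space, attempts=30, delay=2.0):
    """Poll the space until it is active; raise if it never becomes so."""
    for _ in range(attempts):
        response = fetch_space()
        if space_is_active(response):
            return response
        time.sleep(delay)
    raise TimeoutError("deployment space did not become active")
```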
3.  Create a new  Decision Optimization model
All API requests require a version parameter that takes a date in the format version=YYYY-MM-DD. This code example posts a model that uses the file create_model.json. The URL will vary according to the chosen region/location for your machine learning service. See [Endpoint URLs](https://cloud.ibm.com/apidocs/machine-learning#endpoint-url).
curl --location --request POST \
"https://us-south.ml.cloud.ibm.com/ml/v4/models?version=2020-08-01" \
-H "Authorization: bearer TOKEN-HERE" \
-H "Content-Type: application/json" \
-d @create_model.json
The  create_model.json file contains the following code:
{
"name": "ModelName",
"description": "ModelDescription",
"type": "do-docplex_22.1",
"software_spec": {
"name": "do_22.1"
},
"custom": {
"decision_optimization": {
"oaas.docplex.python": "3.10"
}
},
"space_id": "SPACE-ID-HERE"
}
The Python version is stated explicitly here in a custom block. This is optional. Without it your model will use the default version, which is currently Python  3.10. As the default version will evolve over time, stating the Python version explicitly enables you to easily change it later or to keep using an older supported version when the default version is updated. The currently supported version is  3.10.
If you want to be able to run jobs for this model from the user interface, instead of only using the REST API , you must define the schema for the input and output data. If you do not define the schema when you create the model, you can only run jobs using the REST API and not from the user interface.
You can also use the schema specified for input and output in your optimization model:
{
"name": "Diet-Model-schema",
"description": "Diet",
"type": "do-docplex_22.1",
"schemas": {
"input": [
{
"id": "diet_food_nutrients",
"fields": [
{ "name": "Food",  "type": "string" },
{ "name": "Calories", "type": "double" },
{ "name": "Calcium", "type": "double" },
{ "name": "Iron", "type": "double" },
{ "name": "Vit_A", "type": "double" },
{ "name": "Dietary_Fiber", "type": "double" },
{ "name": "Carbohydrates", "type": "double" },
{ "name": "Protein", "type": "double" }
]
},
{
"id": "diet_food",
"fields": [
{ "name": "name", "type": "string" },
{ "name": "unit_cost", "type": "double" },
{ "name": "qmin", "type": "double" },
{ "name": "qmax", "type": "double" }
]
},
{
"id": "diet_nutrients",
"fields": [
{ "name": "name", "type": "string" },
{ "name": "qmin", "type": "double" },
{ "name": "qmax", "type": "double" }
]
}
],
"output": [
{
"id": "solution",
"fields": [
{ "name": "name", "type": "string" },
{ "name": "value", "type": "double" }
]
}
]
},
"software_spec": {
"name": "do_22.1"
},
"space_id": "SPACE-ID-HERE"
}
When you post a model you provide information about its model type and the software specification to be used. Model types can be, for example:
*  do-opl_22.1 for OPL models
*  do-cplex_22.1 for CPLEX models
*  do-cpo_22.1 for CP models
*  do-docplex_22.1 for Python models
Version  20.1 can also be used for these model types.
For the software specification, you can use the default specifications using their names do_22.1 or do_20.1. See also [Extend software specification notebook](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployPythonClient.html#topic_wmlpythonclient__extendWML) which shows you how to extend the  Decision Optimization software specification (runtimes with additional Python libraries for docplex models).
A MODEL-ID is returned in the id field in the metadata.
Output example:
{
"entity": {
"software_spec": {
"id": "SOFTWARE-SPEC-ID"
},
"type": "do-docplex_20.1"
},
"metadata": {
"created_at": "2020-07-17T08:37:22.992Z",
"description": "ModelDescription",
"id": "MODEL-ID",
"modified_at": "2020-07-17T08:37:22.992Z",
"name": "ModelName",
"owner": "",
"space_id": "SPACE-ID"
}
}
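The create_model.json payload in step 3 can also be assembled in Python, and the MODEL-ID pulled from a response shaped like the output example above. The helper names are illustrative only; names such as "ModelName" simply mirror the JSON file.

```python
def create_model_payload(space_id, python_version="3.10"):
    """Build the body of POST /ml/v4/models, mirroring create_model.json.
    The custom block stating the Python version is optional."""
    payload = {
        "name": "ModelName",
        "description": "ModelDescription",
        "type": "do-docplex_22.1",
        "software_spec": {"name": "do_22.1"},
        "space_id": space_id,
    }
    if python_version:
        payload["custom"] = {
            "decision_optimization": {"oaas.docplex.python": python_version}
        }
    return payload

def model_id(response_json):
    """MODEL-ID from the 'id' field of the metadata section."""
    return response_json["metadata"]["id"]
```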
4.  Upload a  Decision Optimization model formulation ready for deployment. First compress your model into a (tar.gz, .zip or .jar) file and upload it to be deployed by the  Watson Machine Learning service. This code example uploads a model called  diet.zip that contains a Python model and no common data:
curl --location --request PUT \
"https://us-south.ml.cloud.ibm.com/ml/v4/models/MODEL-ID-HERE/content?version=2020-08-01&space_id=SPACE-ID-HERE&content_format=native" \
-H "Authorization: bearer TOKEN-HERE" \
-H "Content-Type: application/gzip" \
--data-binary "@diet.zip"
You can download this example and other models from the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples).  Select the relevant product and version subfolder.
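The upload URL in the PUT request above carries three query parameters. A small sketch of how it is composed (the helper name `model_content_url` is mine, not part of any client library):

```python
import urllib.parse

def model_content_url(base_url, model_id, space_id, version="2020-08-01"):
    """URL for PUT /ml/v4/models/{model_id}/content with the query
    parameters used in the curl example above."""
    query = urllib.parse.urlencode({
        "version": version,
        "space_id": space_id,
        "content_format": "native",
    })
    return "{}/ml/v4/models/{}/content?{}".format(base_url, model_id, query)
```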
5.  Deploy your model. Create a reference to your model. Use the SPACE-ID, the MODEL-ID obtained when you created your model ready for deployment, and the hardware specification. For example:
curl --location --request POST "https://us-south.ml.cloud.ibm.com/ml/v4/deployments?version=2020-08-01" \
-H "Authorization: bearer TOKEN-HERE" \
-H "Content-Type: application/json" \
-d @deploy_model.json
The deploy_model.json file contains the following code:
{
"name": "Test-Diet-deploy",
"space_id": "SPACE-ID-HERE",
"asset": {
"id": "MODEL-ID-HERE"
},
"hardware_spec": {
"name": "S"
},
"batch": {}
}
The DEPLOYMENT-ID is returned in the id field in the metadata. Output example:
{
"entity": {
"asset": {
"id": "MODEL-ID"
},
"custom": {},
"description": "",
"hardware_spec": {
"id": "HARDWARE-SPEC-ID",
"name": "S",
"num_nodes": 1
},
"name": "Test-Diet-deploy",
"space_id": "SPACE-ID",
"status": {
"state": "ready"
}
},
"metadata": {
"created_at": "2020-07-17T09:10:50.661Z",
"description": "",
"id": "DEPLOYMENT-ID",
"modified_at": "2020-07-17T09:10:50.661Z",
"name": "test-Diet-deploy",
"owner": "",
"space_id": "SPACE-ID"
}
}
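The DEPLOYMENT-ID and the readiness state can be read from a response shaped like the output example above. These accessor names are illustrative only:

```python
def deployment_id(deploy_response):
    """DEPLOYMENT-ID from the 'id' field of the metadata section."""
    return deploy_response["metadata"]["id"]

def deployment_ready(deploy_response):
    """True once the deployment reports state 'ready'."""
    return deploy_response["entity"]["status"]["state"] == "ready"
```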
6.  Once deployed, you can monitor your model's deployment state. Use the DEPLOYMENT-ID. For example:
curl --location --request GET "https://us-south.ml.cloud.ibm.com/ml/v4/deployments/DEPLOYMENT-ID-HERE?version=2020-08-01&space_id=SPACE-ID-HERE" \
-H "Authorization: bearer TOKEN-HERE" \
-H "Content-Type: application/json"
Output example:
7.  You can then submit jobs for your deployed model, defining the input data and the output (results of the optimization solve) and the log file. For example, the following shows the contents of a file called myjob.json. It contains (inline) input data, some solve parameters, and specifies that the output will be a .csv file. For examples of other types of input data references, see [Model input and output data adaptation](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIODataDefn.html#topic_modelIOAdapt).
{
"name":"test-job-diet",
"space_id": "SPACE-ID-HERE",
"deployment": {
"id": "DEPLOYMENT-ID-HERE"
},
"decision_optimization" : {
"solve_parameters" : {
"oaas.logAttachmentName":"log.txt",
"oaas.logTailEnabled":"true"
},
"input_data": [
{
"id":"diet_food.csv",
"fields" : ["name","unit_cost","qmin","qmax"],
"values" : [
["Roasted Chicken", 0.84, 0, 10],
["Spaghetti W/ Sauce", 0.78, 0, 10],
["Tomato,Red,Ripe,Raw", 0.27, 0, 10],
["Apple,Raw,W/Skin", 0.24, 0, 10],
["Grapes", 0.32, 0, 10],
["Chocolate Chip Cookies", 0.03, 0, 10],
["Lowfat Milk", 0.23, 0, 10],
["Raisin Brn", 0.34, 0, 10],
["Hotdog", 0.31, 0, 10]
]
},
{
"id":"diet_food_nutrients.csv",
"fields" : ["Food","Calories","Calcium","Iron","Vit_A","Dietary_Fiber","Carbohydrates","Protein"],
"values" : [
["Spaghetti W/ Sauce", 358.2, 80.2, 2.3, 3055.2, 11.6, 58.3, 8.2],
["Roasted Chicken", 277.4, 21.9, 1.8, 77.4, 0, 0, 42.2],
["Tomato,Red,Ripe,Raw", 25.8, 6.2, 0.6, 766.3, 1.4, 5.7, 1],
["Apple,Raw,W/Skin", 81.4, 9.7, 0.2, 73.1, 3.7, 21, 0.3],
["Grapes", 15.1, 3.4, 0.1, 24, 0.2, 4.1, 0.2],
["Chocolate Chip Cookies", 78.1, 6.2, 0.4, 101.8, 0, 9.3, 0.9],
["Lowfat Milk", 121.2, 296.7, 0.1, 500.2, 0, 11.7, 8.1],
["Raisin Brn", 115.1, 12.9, 16.8, 1250.2, 4, 27.9, 4],
["Hotdog", 242.1, 23.5, 2.3, 0, 0, 18, 10.4]
]
},
{
"id":"diet_nutrients.csv",
"fields" : ["name","qmin","qmax"],
"values" : [
["Calories", 2000, 2500],
["Calcium", 800, 1600],
["Iron", 10, 30],
["Vit_A", 5000, 50000],
["Dietary_Fiber", 25, 100],
["Carbohydrates", 0, 300],
["Protein", 50, 100]
]
}
],
"output_data": [
{
"id":".*\.csv"
}
]
}
}
This code example posts a job that uses this file myjob.json.
curl --location --request POST "https://us-south.ml.cloud.ibm.com/ml/v4/deployment_jobs?version=2020-08-01&space_id=SPACE-ID-HERE" \
-H "Authorization: bearer TOKEN-HERE" \
-H "Content-Type: application/json" \
-H "cache-control: no-cache" \
-d @myjob.json
A JOB-ID is returned. Output example: (the job is queued)
{
"entity": {
"decision_optimization": {
"input_data": [{
"id": "diet_food.csv",
"fields": ["name", "unit_cost", "qmin", "qmax"],
"values": [["Roasted Chicken", 0.84, 0, 10], ["Spaghetti W/ Sauce", 0.78, 0, 10], ["Tomato,Red,Ripe,Raw", 0.27, 0, 10], ["Apple,Raw,W/Skin", 0.24, 0, 10], ["Grapes", 0.32, 0, 10], ["Chocolate Chip Cookies", 0.03, 0, 10], ["Lowfat Milk", 0.23, 0, 10], ["Raisin Brn", 0.34, 0, 10], ["Hotdog", 0.31, 0, 10]]
}, {
"id": "diet_food_nutrients.csv",
"fields": ["Food", "Calories", "Calcium", "Iron", "Vit_A", "Dietary_Fiber", "Carbohydrates", "Protein"],
"values": [["Spaghetti W/ Sauce", 358.2, 80.2, 2.3, 3055.2, 11.6, 58.3, 8.2], ["Roasted Chicken", 277.4, 21.9, 1.8, 77.4, 0, 0, 42.2], ["Tomato,Red,Ripe,Raw", 25.8, 6.2, 0.6, 766.3, 1.4, 5.7, 1], ["Apple,Raw,W/Skin", 81.4, 9.7, 0.2, 73.1, 3.7, 21, 0.3], ["Grapes", 15.1, 3.4, 0.1, 24, 0.2, 4.1, 0.2], ["Chocolate Chip Cookies", 78.1, 6.2, 0.4, 101.8, 0, 9.3, 0.9], ["Lowfat Milk", 121.2, 296.7, 0.1, 500.2, 0, 11.7, 8.1], ["Raisin Brn", 115.1, 12.9, 16.8, 1250.2, 4, 27.9, 4], ["Hotdog", 242.1, 23.5, 2.3, 0, 0, 18, 10.4]]
}, {
"id": "diet_nutrients.csv",
"fields": ["name", "qmin", "qmax"],
"values": [["Calories", 2000, 2500], ["Calcium", 800, 1600], ["Iron", 10, 30], ["Vit_A", 5000, 50000], ["Dietary_Fiber", 25, 100], ["Carbohydrates", 0, 300], ["Protein", 50, 100]]
}],
"output_data": [
{
"id": ".*\.csv"
}
],
"solve_parameters": {
"oaas.logAttachmentName": "log.txt",
"oaas.logTailEnabled": "true"
},
"status": {
"state": "queued"
}
},
"deployment": {
"id": "DEPLOYMENT-ID"
},
"platform_job": {
"job_id": "",
"run_id": ""
}
},
"metadata": {
"created_at": "2020-07-17T10:42:42.783Z",
"id": "JOB-ID",
"name": "test-job-diet",
"space_id": "SPACE-ID"
}
}
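The inline tables in myjob.json all follow the same id/fields/values shape, so the job body can be assembled programmatically. A minimal sketch, with helper names (`inline_table`, `job_payload`) that are mine rather than any official API; the output table ids are passed in explicitly:

```python
def inline_table(table_id, fields, rows):
    """One 'input_data' entry with inline values, like diet_food.csv above."""
    return {
        "id": table_id,
        "fields": list(fields),
        "values": [list(row) for row in rows],
    }

def job_payload(space_id, deployment_id, input_data, output_ids=("solution.csv",)):
    """Body for POST /ml/v4/deployment_jobs, mirroring myjob.json."""
    return {
        "name": "test-job-diet",
        "space_id": space_id,
        "deployment": {"id": deployment_id},
        "decision_optimization": {
            "solve_parameters": {
                "oaas.logAttachmentName": "log.txt",
                "oaas.logTailEnabled": "true",
            },
            "input_data": list(input_data),
            "output_data": [{"id": out_id} for out_id in output_ids],
        },
    }
```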
8.  You can also monitor job states. Use the JOB-ID. For example:
curl --location --request GET \
"https://us-south.ml.cloud.ibm.com/ml/v4/deployment_jobs/JOB-ID-HERE?version=2020-08-01&space_id=SPACE-ID-HERE" \
-H "Authorization: bearer TOKEN-HERE" \
-H "Content-Type: application/json"
Output example: (job has completed)
{
"entity": {
"decision_optimization": {
"input_data": [{
"id": "diet_food.csv",
"fields": ["name", "unit_cost", "qmin", "qmax"],
"values": [["Roasted Chicken", 0.84, 0, 10], ["Spaghetti W/ Sauce", 0.78, 0, 10], ["Tomato,Red,Ripe,Raw", 0.27, 0, 10], ["Apple,Raw,W/Skin", 0.24, 0, 10], ["Grapes", 0.32, 0, 10], ["Chocolate Chip Cookies", 0.03, 0, 10], ["Lowfat Milk", 0.23, 0, 10], ["Raisin Brn", 0.34, 0, 10], ["Hotdog", 0.31, 0, 10]]
}, {
"id": "diet_food_nutrients.csv",
"fields": ["Food", "Calories", "Calcium", "Iron", "Vit_A", "Dietary_Fiber", "Carbohydrates", "Protein"],
"values": [["Spaghetti W/ Sauce", 358.2, 80.2, 2.3, 3055.2, 11.6, 58.3, 8.2], ["Roasted Chicken", 277.4, 21.9, 1.8, 77.4, 0, 0, 42.2], ["Tomato,Red,Ripe,Raw", 25.8, 6.2, 0.6, 766.3, 1.4, 5.7, 1], ["Apple,Raw,W/Skin", 81.4, 9.7, 0.2, 73.1, 3.7, 21, 0.3], ["Grapes", 15.1, 3.4, 0.1, 24, 0.2, 4.1, 0.2], ["Chocolate Chip Cookies", 78.1, 6.2, 0.4, 101.8, 0, 9.3, 0.9], ["Lowfat Milk", 121.2, 296.7, 0.1, 500.2, 0, 11.7, 8.1], ["Raisin Brn", 115.1, 12.9, 16.8, 1250.2, 4, 27.9, 4], ["Hotdog", 242.1, 23.5, 2.3, 0, 0, 18, 10.4]]
}, {
"id": "diet_nutrients.csv",
"fields": ["name", "qmin", "qmax"],
"values": [["Calories", 2000, 2500], ["Calcium", 800, 1600], ["Iron", 10, 30], ["Vit_A", 5000, 50000], ["Dietary_Fiber", 25, 100], ["Carbohydrates", 0, 300], ["Protein", 50, 100]]
}],
"output_data": [{
"fields": ["Name", "Value"],
"id": "kpis.csv",
"values": [["Total Calories", 2000], ["Total Calcium", 800.0000000000001], ["Total Iron", 11.278317739831891], ["Total Vit_A", 8518.432542485823], ["Total Dietary_Fiber", 25], ["Total Carbohydrates", 256.80576358904455], ["Total Protein", 51.17372234135308], ["Minimal cost", 2.690409171696264]]
}, {
"fields": ["name", "value"],
"id": "solution.csv",
"values": [["Spaghetti W/ Sauce", 2.1551724137931036], ["Chocolate Chip Cookies", 10], ["Lowfat Milk", 1.8311671008899097], ["Hotdog", 0.9296975991385925]]
}],
"output_data_references": [],
"solve_parameters": {
"oaas.logAttachmentName": "log.txt",
"oaas.logTailEnabled": "true"
},
"solve_state": {
"details": {
"KPI.Minimal cost": "2.690409171696264",
"KPI.Total Calcium": "800.0000000000001",
"KPI.Total Calories": "2000.0",
"KPI.Total Carbohydrates": "256.80576358904455",
"KPI.Total Dietary_Fiber": "25.0",
"KPI.Total Iron": "11.278317739831891",
"KPI.Total Protein": "51.17372234135308",
"KPI.Total Vit_A": "8518.432542485823",
"MODEL_DETAIL_BOOLEAN_VARS": "0",
"MODEL_DETAIL_CONSTRAINTS": "7",
"MODEL_DETAIL_CONTINUOUS_VARS": "9",
"MODEL_DETAIL_INTEGER_VARS": "0",
"MODEL_DETAIL_KPIS": "["Total Calories", "Total Calcium", "Total Iron", "Total Vit_A", "Total Dietary_Fiber", "Total Carbohydrates", "Total Protein", "Minimal cost"]",
"MODEL_DETAIL_NONZEROS": "57",
"MODEL_DETAIL_TYPE": "LP",
"PROGRESS_CURRENT_OBJECTIVE": "2.6904091716962637"
},
"latest_engine_activity": [
"2020-07-21T16:37:36Z, INFO] Model: diet",
"2020-07-21T16:37:36Z, INFO]  - number of variables: 9",
"2020-07-21T16:37:36Z, INFO]    - binary=0, integer=0, continuous=9",
"2020-07-21T16:37:36Z, INFO]  - number of constraints: 7",
"2020-07-21T16:37:36Z, INFO]    - linear=7",
"2020-07-21T16:37:36Z, INFO]  - parameters: defaults",
"2020-07-21T16:37:36Z, INFO]  - problem type is: LP",
"2020-07-21T16:37:36Z, INFO] Warning: Model: "diet" is not a MIP problem, progress listeners are disabled",
"2020-07-21T16:37:36Z, INFO] objective: 2.690",
"2020-07-21T16:37:36Z, INFO]   "Spaghetti W/ Sauce"=2.155",
"2020-07-21T16:37:36Z, INFO]   "Chocolate Chip Cookies"=10.000",
"2020-07-21T16:37:36Z, INFO]   "Lowfat Milk"=1.831",
"2020-07-21T16:37:36Z, INFO]   "Hotdog"=0.930",
"2020-07-21T16:37:36Z, INFO] solution.csv"
],
"solve_status": "optimal_solution"
},
"status": {
"completed_at": "2020-07-21T16:37:36.989Z",
"running_at": "2020-07-21T16:37:35.622Z",
"state": "completed"
}
},
"deployment": {
"id": "DEPLOYMENT-ID"
}
},
"metadata": {
"created_at": "2020-07-21T16:37:09.130Z",
"id": "JOB-ID",
"modified_at": "2020-07-21T16:37:37.268Z",
"name": "test-job-diet",
"space_id": "SPACE-ID"
}
}
9.  Optional: You can delete jobs as follows:
curl --location --request DELETE "https://us-south.ml.cloud.ibm.com/ml/v4/deployment_jobs/JOB-ID-HERE?version=2020-08-01&space_id=SPACE-ID-HERE&hard_delete=true" 
-H "Authorization: bearer TOKEN-HERE"
If you delete a job using the API, it will still be displayed in the user interface.
10. Optional: You can delete deployments as follows:If you delete a deployment that contains jobs using the API, the jobs will still be displayed in the deployment space in the user interface.
 | 
	# REST API example #
You can deploy a  Decision Optimization model, create and monitor jobs and get solutions using the  Watson Machine Learning REST API\.
## Procedure ##
<!-- <ol> -->
1.  **Generate an IAM token** using your [IBM Cloud API key](https://cloud.ibm.com/iam/apikeys) as follows\.
    
        curl "https://iam.bluemix.net/identity/token" \
          -d "apikey=YOUR_API_KEY_HERE&grant_type=urn%3Aibm%3Aparams%3Aoauth%3Agrant-type%3Aapikey" \
          -H "Content-Type: application/x-www-form-urlencoded" \
          -H "Authorization: Basic Yng6Yng="
    
    Output example:
    
        {
           "access_token": "****** obtained IAM token ******************************",
           "refresh_token": "**************************************",
           "token_type": "Bearer",
           "expires_in": 3600,
           "expiration": 1554117649,
           "scope": "ibm openid"
        }
    
    Use the obtained token (access\_token value) prepended by the word `Bearer` in the `Authorization` header, and the `Machine Learning service GUID` in the `ML-Instance-ID` header, in all API calls.
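    The header handling described above can be sketched in Python (a hypothetical helper, not part of any IBM SDK; the field names follow the output example above):

```python
def build_wml_headers(token_response: dict, ml_instance_guid: str) -> dict:
    """Assemble the headers used in subsequent API calls from the parsed
    IAM token response and the Machine Learning service GUID.
    (Hypothetical helper; `access_token` comes from the output example above.)"""
    return {
        "Authorization": "Bearer " + token_response["access_token"],
        "ML-Instance-ID": ml_instance_guid,
        "Content-Type": "application/json",
    }

# Example with a redacted token response:
headers = build_wml_headers({"access_token": "ABC123"}, "MACHINE-LEARNING-SERVICE-GUID")
```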
2.  **Optional:** If you have not obtained your **SPACE\-ID** from the user interface as described previously, you can create a space using the REST API as follows\. Use the previously obtained token prepended by the word `bearer` in the `Authorization` header in all API calls\.
    
        curl --location --request POST \
          "https://api.dataplatform.cloud.ibm.com/v2/spaces" \
          -H "Authorization: Bearer TOKEN-HERE" \
          -H "ML-Instance-ID: MACHINE-LEARNING-SERVICE-GUID-HERE" \
          -H "Content-Type: application/json" \
          --data-raw "{
          "name": "SPACE-NAME-HERE",
          "description": "optional description here",
          "storage": {
              "resource_crn": "COS-CRN-ID-HERE"
          },
          "compute": [{
            "name": "MACHINE-LEARNING-SERVICE-NAME-HERE",
            "crn": "MACHINE-LEARNING-SERVICE-CRN-ID-HERE"
          }]
        }"
    
    For **Windows** users, put the `--data-raw` command on one line and replace all `"` with `\"` inside this command as follows:
    
        curl --location --request POST ^
          "https://api.dataplatform.cloud.ibm.com/v2/spaces" ^
          -H "Authorization: Bearer TOKEN-HERE" ^
          -H "ML-Instance-ID: MACHINE-LEARNING-SERVICE-GUID-HERE" ^
          -H "Content-Type: application/json" ^
          --data-raw "{\"name\": "SPACE-NAME-HERE",\"description\": \"optional description here\",\"storage\": {\"resource_crn\": \"COS-CRN-ID-HERE\"  },\"compute\": [{\"name\": "MACHINE-LEARNING-SERVICE-NAME-HERE\",\"crn\": \"MACHINE-LEARNING-SERVICE-CRN-ID-HERE\"  }]}"
    
    Alternatively, put the data in a separate file. A **SPACE-ID** is returned in the `id` field of the `metadata` section.
    
    Output example:
    
        {
          "entity": {
            "compute": [
              {
                "crn": "MACHINE-LEARNING-SERVICE-CRN",
                "guid": "MACHINE-LEARNING-SERVICE-GUID",
                "name": "MACHINE-LEARNING-SERVICE-NAME",
                "type": "machine_learning"
              }
            ],
            "description": "string",
            "members": [
              {
                "id": "XXXXXXX",
                "role": "admin",
                "state": "active",
                "type": "user"
              }
            ],
            "name": "name",
            "scope": {
              "bss_account_id": "account_id"
            },
            "status": {
              "state": "active"
            }
          },
          "metadata": {
            "created_at": "2020-07-17T08:36:57.611Z",
            "creator_id": "XXXXXXX",
            "id": "SPACE-ID",
            "url": "/v2/spaces/SPACE-ID"
          }
        }
    
    You must wait until your deployment space status is `"active"` before continuing. You can poll to check for this as follows.
    
        curl --location --request GET "https://api.dataplatform.cloud.ibm.com/v2/spaces/SPACE-ID-HERE" \
          -H "Authorization: bearer TOKEN-HERE" \
          -H "Content-Type: application/json"
3.  Create a **new  Decision Optimization model**
    
    All API requests require a version parameter that takes a date in the format `version=YYYY-MM-DD`. This code example posts a model that uses the file `create_model.json`. The URL will vary according to the chosen region/location for your machine learning service.  See [Endpoint URLs](https://cloud.ibm.com/apidocs/machine-learning#endpoint-url).
    
        curl --location --request POST \
          "https://us-south.ml.cloud.ibm.com/ml/v4/models?version=2020-08-01" \
          -H "Authorization: bearer TOKEN-HERE" \
          -H "Content-Type: application/json" \
          -d @create_model.json
    
    The  create\_model.json file contains the following code:
    
        {
          "name": "ModelName",
          "description": "ModelDescription",
          "type": "do-docplex_22.1",
          "software_spec": {
            "name": "do_22.1"
          },
          "custom": {
              "decision_optimization": {
                "oaas.docplex.python": "3.10"
              }
          },
          "space_id": "SPACE-ID-HERE"
        }
    
    The *Python version* is stated explicitly here in a `custom` block. This is optional: without it, your model uses the default version, which is currently Python  3.10. Because the default version will evolve over time, stating the Python version explicitly enables you to change it easily later, or to keep using an older supported version when the default is updated. The currently supported version is  3.10.
    
    If you want to be able to run jobs for this model *from the user interface*, instead of only using the REST API, you must define the **schema** for the input and output data. If you do not define the schema when you create the model, you can only run jobs using the REST API and not from the user interface.
    
    You can also use the schema specified for input and output in your optimization model:
    
        {
          "name": "Diet-Model-schema",
          "description": "Diet",
          "type": "do-docplex_22.1",
          "schemas": {
            "input": [
              {
                "id": "diet_food_nutrients",
                "fields": 
                  { "name": "Food",  "type": "string" },
                  { "name": "Calories", "type": "double" },
                  { "name": "Calcium", "type": "double" },
                  { "name": "Iron", "type": "double" },
                  { "name": "Vit_A", "type": "double" },
                  { "name": "Dietary_Fiber", "type": "double" },
                  { "name": "Carbohydrates", "type": "double" },
                  { "name": "Protein", "type": "double" }
                ]
              },
              {
                "id": "diet_food",
                "fields": 
                  { "name": "name", "type": "string" },
                  { "name": "unit_cost", "type": "double" },
                  { "name": "qmin", "type": "double" },
                  { "name": "qmax", "type": "double" }
                ]
              },
              {
                "id": "diet_nutrients",
                "fields": 
                  { "name": "name", "type": "string" },
                  { "name": "qmin", "type": "double" },
                  { "name": "qmax", "type": "double" }
                ]
              }
            ],
            "output": [
              {
                "id": "solution",
                "fields": 
                  { "name": "name", "type": "string" },
                  { "name": "value", "type": "double" }
                ]
              }
            ]
          },
          "software_spec": {
            "name": "do_22.1"
          },
          "space_id": "SPACE-ID-HERE"
        }
    
    When you post a model you provide information about its **model type** and the **software specification** to be used\. **Model types** can be, for example:
    
    <!-- <ul> -->
    
     *  `do-opl_22.1` for OPL models
     *  `do-cplex_22.1` for CPLEX models
     *  `do-cpo_22.1` for CP models
     *  `do-docplex_22.1` for Python models
    
    <!-- </ul> -->
    
    Version  20.1 can also be used for these model types.
    
    For the **software specification**, you can use the default specifications using their names `do_22.1` or `do_20.1`. See also [Extend software specification notebook](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployPythonClient.html#topic_wmlpythonclient__extendWML) which shows you how to extend the  Decision Optimization software specification (runtimes with additional Python libraries for docplex models).
    
    A **MODEL-ID** is returned in the `id` field of the `metadata` section.
    
    Output example:
    
        {
          "entity": {
            "software_spec": {
              "id": "SOFTWARE-SPEC-ID"
            },
            "type": "do-docplex_20.1"
          },
          "metadata": {
            "created_at": "2020-07-17T08:37:22.992Z",
            "description": "ModelDescription",
            "id": "MODEL-ID",
            "modified_at": "2020-07-17T08:37:22.992Z",
            "name": "ModelName",
            "owner": "***********",
            "space_id": "SPACE-ID"
          }
        }
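    The **MODEL-ID** can be picked out of this response programmatically; a minimal sketch using an abridged copy of the output example above:

```python
import json

# Abridged create-model response (from the output example above);
# the MODEL-ID needed by the upload and deployment steps is metadata.id.
response_text = """
{
  "entity": {
    "software_spec": {"id": "SOFTWARE-SPEC-ID"},
    "type": "do-docplex_20.1"
  },
  "metadata": {
    "id": "MODEL-ID",
    "name": "ModelName",
    "space_id": "SPACE-ID"
  }
}
"""
model_id = json.loads(response_text)["metadata"]["id"]
```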
4.  **Upload a  Decision Optimization model formulation** ready for deployment\. First, **compress your model** into a `tar.gz`, `.zip`, or `.jar` file and upload it to be deployed by the  Watson Machine Learning service\. This code example uploads a model called `diet.zip` that contains a Python model and no common data:
    
        curl --location --request PUT \
          "https://us-south.ml.cloud.ibm.com/ml/v4/models/MODEL-ID-HERE/content?version=2020-08-01&space_id=SPACE-ID-HERE&content_format=native" \
          -H "Authorization: bearer TOKEN-HERE" \
          -H "Content-Type: application/gzip" \
          --data-binary "@diet.zip"
    
    You can download this example and other models from the **[DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples)**.  Select the relevant product and version subfolder.
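    The upload URL in the PUT request above can be assembled from its parts; a minimal sketch (hypothetical helper; the path and query parameter names are taken from the curl example):

```python
from urllib.parse import urlencode

def model_content_url(base_url, model_id, space_id, version="2020-08-01"):
    """Build the PUT URL for uploading model content, mirroring the
    curl example above (query parameters: version, space_id, content_format)."""
    query = urlencode({
        "version": version,
        "space_id": space_id,
        "content_format": "native",
    })
    return f"{base_url}/ml/v4/models/{model_id}/content?{query}"

url = model_content_url("https://us-south.ml.cloud.ibm.com", "MODEL-ID", "SPACE-ID")
```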
5.  **Deploy your model**\. Create a reference to your model\. Use the **SPACE\-ID**, the **MODEL\-ID** obtained when you created your model ready for deployment, and the **hardware specification**\. For example:
    
        curl --location --request POST "https://us-south.ml.cloud.ibm.com/ml/v4/deployments?version=2020-08-01" \
          -H "Authorization: bearer TOKEN-HERE" \
          -H "Content-Type: application/json" \
          -d @deploy_model.json
    
    The `deploy_model.json` file contains the following code:
    
        {
          "name": "Test-Diet-deploy",
          "space_id": "SPACE-ID-HERE",
          "asset": {
            "id": "MODEL-ID-HERE"
          },
          "hardware_spec": {
            "name": "S"
          },
          "batch": {}
        }
    
    The **DEPLOYMENT-ID** is returned in the `id` field of the `metadata` section. Output example:
    
        {
          "entity": {
            "asset": {
              "id": "MODEL-ID"
            },
            "custom": {},
            "description": "",
            "hardware_spec": {
              "id": "HARDWARE-SPEC-ID",
              "name": "S",
              "num_nodes": 1
            },
            "name": "Test-Diet-deploy",
            "space_id": "SPACE-ID",
            "status": {
              "state": "ready"
            }
          },
          "metadata": {
            "created_at": "2020-07-17T09:10:50.661Z",
            "description": "",
            "id": "DEPLOYMENT-ID",
            "modified_at": "2020-07-17T09:10:50.661Z",
            "name": "test-Diet-deploy",
            "owner": "**************",
            "space_id": "SPACE-ID"
          }
        }
6.  Once deployed, you can **monitor your model's deployment state**\. Use the **DEPLOYMENT\-ID**\. For example:
    
        curl --location --request GET "https://us-south.ml.cloud.ibm.com/ml/v4/deployments/DEPLOYMENT-ID-HERE?version=2020-08-01&space_id=SPACE-ID-HERE" \
          -H "Authorization: bearer TOKEN-HERE" \
          -H "Content-Type: application/json"
    
    Output example:
7.  You can then **submit jobs** for your deployed model, defining the input data, the output (results of the optimization solve), and the log file\. For example, the following shows the contents of a file called `myjob.json`\. It contains (**inline**) input data, some solve parameters, and specifies that the output will be a \.csv file\. For examples of other types of input data references, see [Model input and output data adaptation](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelIODataDefn.html#topic_modelIOAdapt)\.
    
        {
        	"name":"test-job-diet",
        	"space_id": "SPACE-ID-HERE",
        	"deployment": {
        		"id": "DEPLOYMENT-ID-HERE"
        	},
        	"decision_optimization" : {
        		"solve_parameters" : {
        			"oaas.logAttachmentName":"log.txt",
        			"oaas.logTailEnabled":"true"
        		},
        		"input_data": [
        			{
        				"id":"diet_food.csv",
        				"fields" : "name","unit_cost","qmin","qmax"],
        				"values" : 
        					"Roasted Chicken", 0.84, 0, 10],
        					"Spaghetti W/ Sauce", 0.78, 0, 10],
        					"Tomato,Red,Ripe,Raw", 0.27, 0, 10],
        					"Apple,Raw,W/Skin", 0.24, 0, 10],
        					"Grapes", 0.32, 0, 10],
        					"Chocolate Chip Cookies", 0.03, 0, 10],
        					"Lowfat Milk", 0.23, 0, 10],
        					"Raisin Brn", 0.34, 0, 10],
        					"Hotdog", 0.31, 0, 10]
        				]
        			},
        			{
        				"id":"diet_food_nutrients.csv",
        				"fields" : "Food","Calories","Calcium","Iron","Vit_A","Dietary_Fiber","Carbohydrates","Protein"],
        				"values" : 
        					"Spaghetti W/ Sauce", 358.2, 80.2, 2.3, 3055.2, 11.6, 58.3, 8.2],
        					"Roasted Chicken", 277.4, 21.9, 1.8, 77.4, 0, 0, 42.2],
        					"Tomato,Red,Ripe,Raw", 25.8, 6.2, 0.6, 766.3, 1.4, 5.7, 1],
        					"Apple,Raw,W/Skin", 81.4, 9.7, 0.2, 73.1, 3.7, 21, 0.3],
        					"Grapes", 15.1, 3.4, 0.1, 24, 0.2, 4.1, 0.2],
        					"Chocolate Chip Cookies", 78.1, 6.2, 0.4, 101.8, 0, 9.3, 0.9],
        					"Lowfat Milk", 121.2, 296.7, 0.1, 500.2, 0, 11.7, 8.1],
        					"Raisin Brn", 115.1, 12.9, 16.8, 1250.2, 4, 27.9, 4],
        					"Hotdog", 242.1, 23.5, 2.3, 0, 0, 18, 10.4]
        				]
        			},
        			{
        				"id":"diet_nutrients.csv",
        				"fields" : "name","qmin","qmax"],
        				"values" : 
        					"Calories", 2000, 2500],
        					"Calcium", 800, 1600],
        					"Iron", 10, 30],
        					"Vit_A", 5000, 50000],
        					"Dietary_Fiber", 25, 100],
        					"Carbohydrates", 0, 300],
        					"Protein", 50, 100]
        				]
        			}
        		],
        		"output_data": [
        			{
        				"id":".*\.csv"
        			}
        		]
        	}
        }
    
    This code example posts a job that uses this file `myjob.json`.
    
        curl --location --request POST "https://us-south.ml.cloud.ibm.com/ml/v4/deployment_jobs?version=2020-08-01&space_id=SPACE-ID-HERE" \
          -H "Authorization: bearer TOKEN-HERE" \
          -H "Content-Type: application/json" \
          -H "cache-control: no-cache" \
          -d @myjob.json
    
    A **JOB-ID** is returned. Output example (the job is queued):
    
        {
          "entity": {
            "decision_optimization": {
              "input_data": [{
                "id": "diet_food.csv",
                "fields": "name", "unit_cost", "qmin", "qmax"],
                "values": "Roasted Chicken", 0.84, 0, 10], "Spaghetti W/ Sauce", 0.78, 0, 10], "Tomato,Red,Ripe,Raw", 0.27, 0, 10], "Apple,Raw,W/Skin", 0.24, 0, 10], "Grapes", 0.32, 0, 10], "Chocolate Chip Cookies", 0.03, 0, 10], "Lowfat Milk", 0.23, 0, 10], "Raisin Brn", 0.34, 0, 10], "Hotdog", 0.31, 0, 10]]
              }, {
                "id": "diet_food_nutrients.csv",
                "fields": "Food", "Calories", "Calcium", "Iron", "Vit_A", "Dietary_Fiber", "Carbohydrates", "Protein"],
                "values": "Spaghetti W/ Sauce", 358.2, 80.2, 2.3, 3055.2, 11.6, 58.3, 8.2], "Roasted Chicken", 277.4, 21.9, 1.8, 77.4, 0, 0, 42.2], "Tomato,Red,Ripe,Raw", 25.8, 6.2, 0.6, 766.3, 1.4, 5.7, 1], "Apple,Raw,W/Skin", 81.4, 9.7, 0.2, 73.1, 3.7, 21, 0.3], "Grapes", 15.1, 3.4, 0.1, 24, 0.2, 4.1, 0.2], "Chocolate Chip Cookies", 78.1, 6.2, 0.4, 101.8, 0, 9.3, 0.9], "Lowfat Milk", 121.2, 296.7, 0.1, 500.2, 0, 11.7, 8.1], "Raisin Brn", 115.1, 12.9, 16.8, 1250.2, 4, 27.9, 4], "Hotdog", 242.1, 23.5, 2.3, 0, 0, 18, 10.4]]
              }, {
                "id": "diet_nutrients.csv",
                "fields": "name", "qmin", "qmax"],
                "values": "Calories", 2000, 2500], "Calcium", 800, 1600], "Iron", 10, 30], "Vit_A", 5000, 50000], "Dietary_Fiber", 25, 100], "Carbohydrates", 0, 300], "Protein", 50, 100]]
              }],
              "output_data": [
                {
                  "id": ".*\.csv"
                }
              ],
              "solve_parameters": {
                "oaas.logAttachmentName": "log.txt",
                "oaas.logTailEnabled": "true"
              },
              "status": {
                "state": "queued"
              }
            },
            "deployment": {
              "id": "DEPLOYMENT-ID"
            },
            "platform_job": {
              "job_id": "",
              "run_id": ""
            }
          },
          "metadata": {
            "created_at": "2020-07-17T10:42:42.783Z",
            "id": "JOB-ID",
            "name": "test-job-diet",
            "space_id": "SPACE-ID"
          }
        }
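    The inline `input_data` tables in `myjob.json` can be assembled programmatically; a minimal sketch (hypothetical helper, shown with the nutrients table from the example above):

```python
def make_table(table_id, fields, rows):
    """Build one inline input_data entry in the id/fields/values
    shape used by the deployment jobs payload above."""
    return {
        "id": table_id,
        "fields": list(fields),
        "values": [list(row) for row in rows],
    }

diet_nutrients = make_table(
    "diet_nutrients.csv",
    ["name", "qmin", "qmax"],
    [["Calories", 2000, 2500], ["Calcium", 800, 1600], ["Iron", 10, 30]],
)
```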
8.  You can also **monitor job states**\. Use the **JOB\-ID**\. For example:
    
        curl --location --request GET \
          "https://us-south.ml.cloud.ibm.com/ml/v4/deployment_jobs/JOB-ID-HERE?version=2020-08-01&space_id=SPACE-ID-HERE" \
          -H "Authorization: bearer TOKEN-HERE" \
          -H "Content-Type: application/json"
    
    Output example (job has completed):
    
        {
          "entity": {
            "decision_optimization": {
              "input_data": [{
                "id": "diet_food.csv",
                "fields": "name", "unit_cost", "qmin", "qmax"],
                "values": "Roasted Chicken", 0.84, 0, 10], "Spaghetti W/ Sauce", 0.78, 0, 10], "Tomato,Red,Ripe,Raw", 0.27, 0, 10], "Apple,Raw,W/Skin", 0.24, 0, 10], "Grapes", 0.32, 0, 10], "Chocolate Chip Cookies", 0.03, 0, 10], "Lowfat Milk", 0.23, 0, 10], "Raisin Brn", 0.34, 0, 10], "Hotdog", 0.31, 0, 10]]
              }, {
                "id": "diet_food_nutrients.csv",
                "fields": "Food", "Calories", "Calcium", "Iron", "Vit_A", "Dietary_Fiber", "Carbohydrates", "Protein"],
                "values": "Spaghetti W/ Sauce", 358.2, 80.2, 2.3, 3055.2, 11.6, 58.3, 8.2], "Roasted Chicken", 277.4, 21.9, 1.8, 77.4, 0, 0, 42.2], "Tomato,Red,Ripe,Raw", 25.8, 6.2, 0.6, 766.3, 1.4, 5.7, 1], "Apple,Raw,W/Skin", 81.4, 9.7, 0.2, 73.1, 3.7, 21, 0.3], "Grapes", 15.1, 3.4, 0.1, 24, 0.2, 4.1, 0.2], "Chocolate Chip Cookies", 78.1, 6.2, 0.4, 101.8, 0, 9.3, 0.9], "Lowfat Milk", 121.2, 296.7, 0.1, 500.2, 0, 11.7, 8.1], "Raisin Brn", 115.1, 12.9, 16.8, 1250.2, 4, 27.9, 4], "Hotdog", 242.1, 23.5, 2.3, 0, 0, 18, 10.4]]
              }, {
                "id": "diet_nutrients.csv",
                "fields": "name", "qmin", "qmax"],
                "values": "Calories", 2000, 2500], "Calcium", 800, 1600], "Iron", 10, 30], "Vit_A", 5000, 50000], "Dietary_Fiber", 25, 100], "Carbohydrates", 0, 300], "Protein", 50, 100]]
              }],
              "output_data": [{
                "fields": "Name", "Value"],
                "id": "kpis.csv",
                "values": "Total Calories", 2000], "Total Calcium", 800.0000000000001], "Total Iron", 11.278317739831891], "Total Vit_A", 8518.432542485823], "Total Dietary_Fiber", 25], "Total Carbohydrates", 256.80576358904455], "Total Protein", 51.17372234135308], "Minimal cost", 2.690409171696264]]
              }, {
                "fields": "name", "value"],
                "id": "solution.csv",
                "values": "Spaghetti W/ Sauce", 2.1551724137931036], "Chocolate Chip Cookies", 10], "Lowfat Milk", 1.8311671008899097], "Hotdog", 0.9296975991385925]]
              }],
              "output_data_references": [],
              "solve_parameters": {
                "oaas.logAttachmentName": "log.txt",
                "oaas.logTailEnabled": "true"
              },
              "solve_state": {
                "details": {
                  "KPI.Minimal cost": "2.690409171696264",
                  "KPI.Total Calcium": "800.0000000000001",
                  "KPI.Total Calories": "2000.0",
                  "KPI.Total Carbohydrates": "256.80576358904455",
                  "KPI.Total Dietary_Fiber": "25.0",
                  "KPI.Total Iron": "11.278317739831891",
                  "KPI.Total Protein": "51.17372234135308",
                  "KPI.Total Vit_A": "8518.432542485823",
                  "MODEL_DETAIL_BOOLEAN_VARS": "0",
                  "MODEL_DETAIL_CONSTRAINTS": "7",
                  "MODEL_DETAIL_CONTINUOUS_VARS": "9",
                  "MODEL_DETAIL_INTEGER_VARS": "0",
                  "MODEL_DETAIL_KPIS": "[\"Total Calories\", \"Total Calcium\", \"Total Iron\", \"Total Vit_A\", \"Total Dietary_Fiber\", \"Total Carbohydrates\", \"Total Protein\", \"Minimal cost\"]",
                  "MODEL_DETAIL_NONZEROS": "57",
                  "MODEL_DETAIL_TYPE": "LP",
                  "PROGRESS_CURRENT_OBJECTIVE": "2.6904091716962637"
                },
                "latest_engine_activity": [
                  "2020-07-21T16:37:36Z, INFO] Model: diet",
                  "2020-07-21T16:37:36Z, INFO]  - number of variables: 9",
                  "2020-07-21T16:37:36Z, INFO]    - binary=0, integer=0, continuous=9",
                  "2020-07-21T16:37:36Z, INFO]  - number of constraints: 7",
                  "2020-07-21T16:37:36Z, INFO]    - linear=7",
                  "2020-07-21T16:37:36Z, INFO]  - parameters: defaults",
                  "2020-07-21T16:37:36Z, INFO]  - problem type is: LP",
                  "2020-07-21T16:37:36Z, INFO] Warning: Model: \"diet\" is not a MIP problem, progress listeners are disabled",
                  "2020-07-21T16:37:36Z, INFO] objective: 2.690",
                  "2020-07-21T16:37:36Z, INFO]   \"Spaghetti W/ Sauce\"=2.155",
                  "2020-07-21T16:37:36Z, INFO]   \"Chocolate Chip Cookies\"=10.000",
                  "2020-07-21T16:37:36Z, INFO]   \"Lowfat Milk\"=1.831",
                  "2020-07-21T16:37:36Z, INFO]   \"Hotdog\"=0.930",
                  "2020-07-21T16:37:36Z, INFO] solution.csv"
                ],
                "solve_status": "optimal_solution"
              },
              "status": {
                "completed_at": "2020-07-21T16:37:36.989Z",
                "running_at": "2020-07-21T16:37:35.622Z",
                "state": "completed"
              }
            },
            "deployment": {
              "id": "DEPLOYMENT-ID"
            }
          },
          "metadata": {
            "created_at": "2020-07-21T16:37:09.130Z",
            "id": "JOB-ID",
            "modified_at": "2020-07-21T16:37:37.268Z",
            "name": "test-job-diet",
            "space_id": "SPACE-ID"
          }
        }
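    The inline output tables in a completed job response can be read back into plain dictionaries; a minimal sketch (hypothetical helper, using an abridged copy of the `solution.csv` table from the example above):

```python
def extract_table(job_body, table_id):
    """Return a {first-column: second-column} dict for one inline output
    table of a completed job response (path taken from the example above)."""
    for table in job_body["entity"]["decision_optimization"]["output_data"]:
        if table["id"] == table_id:
            return {row[0]: row[1] for row in table["values"]}
    raise KeyError(table_id)

# Abridged completed-job response from the example above:
job = {"entity": {"decision_optimization": {"output_data": [
    {"id": "solution.csv", "fields": ["name", "value"],
     "values": [["Spaghetti W/ Sauce", 2.155], ["Chocolate Chip Cookies", 10],
                ["Lowfat Milk", 1.831], ["Hotdog", 0.930]]}]}}}
solution = extract_table(job, "solution.csv")
```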
9.  Optional: You can **delete jobs** as follows:
    
        curl --location --request DELETE "https://us-south.ml.cloud.ibm.com/ml/v4/deployment_jobs/JOB-ID-HERE?version=2020-08-01&space_id=SPACE-ID-HERE&hard_delete=true" \
          -H "Authorization: bearer TOKEN-HERE"
    
    If you delete a job using the API, it will still be displayed in the user interface.
10. Optional: You can **delete deployments** as follows: If you delete a deployment that contains jobs using the API, the jobs will still be displayed in the deployment space in the user interface\.
<!-- </ol> -->
<!-- </article "role="article" "> -->
 | 
| 
	DEB599F49C3E459A08E8BF25304B063B50CAA294 | 
	https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployModelUI-WML.html?context=cdpaas&locale=en | 
	Deploying a Decision Optimization model by using the user interface | 
	 Deploying a  Decision Optimization model by using the user interface 
You can save a model for deployment in the  Decision Optimization experiment UI and promote it to your  Watson Machine Learning deployment space.
 Procedure 
To save your model for deployment:
1.  In the  Decision Optimization experiment UI, either from the  Scenario or from the  Overview pane, click the menu icon  for the scenario that you want to deploy, and select Save for deployment.
2.  Specify a name for your model and add a description, if needed, then click Next.
1.  Review the  Input and  Output schema and select the tables you want to include in the schema.
2.  Review the  Run parameters and add, modify or delete any parameters as necessary.
3.  Review the  Environment and  Model files that are listed in the  Review and save window.
4.  Click  Save.
The model is then available in the Models section of your project.
To promote your model to your deployment space:
3.  View your model in the  Models section of your project. You can see a summary with input and output schema. Click Promote to deployment space.
4.  In the  Promote to space window that opens, check that the  Target space field displays the name of your deployment space and click Promote.
5.  Click the link deployment space in the message that you receive that confirms successful promotion. Your promoted model is displayed in the  Assets tab of your Deployment space. The information pane shows you the Type, Software specification, description and any defined tags such as the Python version used.
To create a new deployment:
6.  From the Assets tab of your deployment space, open your model and click New Deployment.
7.  In the  Create a deployment window that opens, specify a name for your deployment and select a Hardware specification.Click Create to create the deployment. Your deployment window opens from which you can later create jobs.
 Creating and running  Decision Optimization jobs 
You can create and run jobs for your deployed model.
 Procedure 
1.  Return to your deployment space by using the navigation path and (if the data pane isn't already open) click the  data icon to open the data pane. Upload your input data tables, and solution and kpi output tables here. (You must have output tables defined in your model to be able to see the solution and kpi values.)
2.  Open your deployment model, by selecting it in the Deployments tab of your deployment space and click New job.
3.  Define the details of your job by entering a name, and an optional description for your job and click Next.
4.  Configure your job by selecting a hardware specification and clicking Next. You can choose to schedule your job here, or leave the default schedule option off and click Next. You can also optionally choose to turn on notifications or click  Next.
5.  Choose the data that you want to use in your job by clicking Select the source for each of your input and output tables. Click Next.
6.  You can now review and create your model by clicking Create. When you receive a successful job creation message, you can then view it by opening it from your deployment space. There you can see the run status of your job.
7.  Open the run for your job.Your job log opens and you can also view and copy the payload information.
 | 
	# Deploying a  Decision Optimization model by using the user interface #
You can save a model for deployment in the  Decision Optimization experiment UI and promote it to your  Watson Machine Learning deployment space\.
## Procedure ##
To save your model for deployment:
<!-- <ol> -->
1.  In the  Decision Optimization experiment UI, either from the  Scenario or from the  Overview pane, click the menu icon  for the scenario that you want to deploy, and select **Save for deployment**\.
2.  Specify a name for your model and add a description, if needed, then click **Next**\.
    
    <!-- <ol> -->
    
    1.  Review the  Input and  Output schema and select the tables you want to include in the schema.
    2.  Review the  Run parameters and add, modify or delete any parameters as necessary.
    3.  Review the  Environment and  Model files that are listed in the  Review and save window. 
    4.  Click  Save.
    
    <!-- </ol> -->
    
    The model is then available in the **Models** section of your project.
<!-- </ol> -->
To promote your model to your deployment space:
<!-- <ol> -->
3.  View your model in the  Models section of your project\. You can see a summary with input and output schema\. Click **Promote to deployment space**\.
4.  In the  Promote to space window that opens, check that the  Target space field displays the name of your deployment space and click **Promote**\.
5.  Click the link **deployment space** in the message that you receive that confirms successful promotion\. Your promoted model is displayed in the  Assets tab of your **Deployment space**\. The information pane shows you the Type, Software specification, description and any defined tags such as the Python version used\.
<!-- </ol> -->
To create a new deployment:
<!-- <ol> -->
6.  From the **Assets tab** of your deployment space, open your model and click **New Deployment**\.
7.  In the  Create a deployment window that opens, specify a name for your deployment and select a **Hardware specification**\.Click **Create** to create the deployment\. Your deployment window opens from which you can later create jobs\.
<!-- </ol> -->
<!-- <article "class="topic task nested1" role="article" id="task_ktn_fkv_5mb" "> -->
## Creating and running  Decision Optimization jobs ##
You can create and run jobs for your deployed model\.
### Procedure ###
<!-- <ol> -->
1.  Return to your deployment space by using the navigation path and (if the data pane isn't already open) click the  data icon to open the data pane\. Upload your input data tables and your solution and KPI output tables here\. (You must have output tables defined in your model to be able to see the solution and KPI values\.)
2.  Open your deployment model, by selecting it in the Deployments tab of your deployment space and click **New job**\.
3.  Define the details of your job by entering a name and an optional description, and click **Next**\.
4.  Configure your job by selecting a hardware specification and clicking **Next**\. You can choose to schedule your job here, or leave the default schedule option off and click **Next**\. You can also optionally turn on notifications, then click **Next**\.
5.  Choose the data that you want to use in your job by clicking **Select the source** for each of your input and output tables\. Click **Next**\.
6.  You can now review and create your model by clicking **Create**\. When you receive a successful job creation message, you can view the job by opening it from your deployment space\. There you can see the run status of your job\.
7.  Open the run for your job\. Your job log opens, and you can also view and copy the payload information\.
<!-- </ol> -->
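The job that these steps create through the UI can also be assembled programmatically\. The following sketch shows what a solve payload might look like, assuming the tabular `input_data`/`output_data` layout used by the  Watson Machine Learning REST API for  Decision Optimization jobs; the helper name, table names, and values here are illustrative only\.

```python
# Hypothetical sketch of the payload that the UI builds for you when you run
# a Decision Optimization job. Field names follow the Watson Machine Learning
# REST API's tabular data format; treat the ids and values as illustrative.

def build_do_job_payload(input_tables, output_id_pattern=".*\\.csv"):
    """Assemble a solve payload from {table_name: (fields, rows)} mappings."""
    return {
        "decision_optimization": {
            "input_data": [
                {"id": name, "fields": fields, "values": rows}
                for name, (fields, rows) in input_tables.items()
            ],
            # Request every CSV table the model produces (solution, KPIs, ...).
            "output_data": [{"id": output_id_pattern}],
        }
    }

payload = build_do_job_payload({
    "diet_food.csv": (["name", "unit_cost"], [["Roasted Chicken", 0.84]]),
})
```

The input and output tables uploaded through the data pane in step 1 play the same role as the `input_data` and `output_data` entries in this payload\.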
<!-- </article "class="topic task nested1" role="article" id="task_ktn_fkv_5mb" "> -->
<!-- </article "class="nested0" role="article" id="task_deployUIWML" "> -->
 | 
| 
	95689297B729A4186914E81A59FFB3A09289F8D8 | 
	https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/DeployPythonClient.html?context=cdpaas&locale=en | 
	Decision Optimization Python client examples | 
	 Python client examples 
You can deploy a  Decision Optimization model, create and monitor jobs, and get solutions by using the  Watson Machine Learning Python client.
To deploy your model, see [Model deployment](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelDeploymentTaskCloud.html).
For more information, see [Watson Machine Learning Python client documentation](https://ibm.github.io/watson-machine-learning-sdk/core_api.htmldeployments).
See also the following sample  notebooks located in the  jupyter folder of the [DO-samples](https://github.com/IBMDecisionOptimization/DO-Samples). Select the relevant product and version subfolder.
*  Deploying a DO model with WML
*  RunDeployedModel
*  ExtendWMLSoftwareSpec
The  Deploying a DO model with WML sample shows you how to deploy a  Decision Optimization model, create and monitor jobs, and get solutions by using the  Watson Machine Learning Python client. This  notebook uses the diet sample for the  Decision Optimization model and takes you through the whole procedure without using the  Decision Optimization experiment UI.
The  RunDeployedModel notebook shows you how to run jobs and get solutions from an existing deployed model. This  notebook uses a model that is saved for deployment from a  Decision Optimization experiment UI scenario.
The  ExtendWMLSoftwareSpec notebook shows you how to extend the  Decision Optimization software specification within  Watson Machine Learning. By extending the software specification, you can use your own pip package to add custom code and deploy it in your model and send jobs to it.
You can also find in the samples several  notebooks for deploying various models, for example CPLEX, DOcplex and OPL models with different types of data.
 | 
	# Python client examples #
You can deploy a  Decision Optimization model, create and monitor jobs, and get solutions by using the  Watson Machine Learning Python client\.
To deploy your model, see [Model deployment](https://dataplatform.cloud.ibm.com/docs/content/DO/WML_Deployment/ModelDeploymentTaskCloud.html)\.
For more information, see [Watson Machine Learning Python client documentation](https://ibm.github.io/watson-machine-learning-sdk/core_api.html#deployments)\.
See also the following sample  notebooks located in the  jupyter folder of the **[DO\-samples](https://github.com/IBMDecisionOptimization/DO-Samples)**\. Select the relevant product and version subfolder\.
<!-- <ul> -->
 *  Deploying a DO model with WML
 *  RunDeployedModel
 *  ExtendWMLSoftwareSpec
<!-- </ul> -->
The  Deploying a DO model with WML sample shows you how to deploy a  Decision Optimization model, create and monitor jobs, and get solutions by using the  Watson Machine Learning Python client\. This  notebook uses the diet sample for the  Decision Optimization model and takes you through the whole procedure without using the  Decision Optimization experiment UI\.
The  RunDeployedModel notebook shows you how to run jobs and get solutions from an existing deployed model\. This  notebook uses a model that is saved for deployment from a  Decision Optimization experiment UI scenario\.
The  ExtendWMLSoftwareSpec notebook shows you how to extend the  Decision Optimization software specification within  Watson Machine Learning\. By extending the software specification, you can use your own pip package to add custom code and deploy it in your model and send jobs to it\.
You can also find in the samples several  notebooks for deploying various models, for example CPLEX, DOcplex and OPL models with different types of data\.
<!-- </article "role="article" "> -->
 | 
watsonxDocsQA Dataset
Overview
watsonxDocsQA is a new open-source dataset and benchmark contributed by IBM. The dataset is derived from enterprise product documentation and is designed specifically for end-to-end Retrieval-Augmented Generation (RAG) evaluation. The dataset consists of two components:
- Documents: A corpus of 1,144 text and markdown files generated by crawling enterprise documentation (crawled from the main page in March 2024).
- Benchmark: A set of 75 question-answer (QA) pairs with gold document labels and answers. The QA pairs are crafted as follows:
  - 25 questions: Human-generated by two subject matter experts.
  - 50 questions: Synthetically generated using the tiiuae/falcon-180b model, then manually filtered and reviewed for quality. The methodology is detailed in Yehudai et al. 2024.
 
Data Description
Corpus Dataset
The corpus dataset contains the following fields:
| Field | Description | 
|---|---|
| doc_id | Unique identifier for the document | 
| title | Document title as it appears on the HTML page | 
| document | Textual representation of the content | 
| md_document | Markdown representation of the content | 
| url | Origin URL of the document | 
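As an illustration, a single corpus record can be represented as a plain Python dict keyed by these fields. The values below are abridged from one real row, and the lookup index is a hypothetical convenience, not part of the dataset:

```python
# One corpus record, keyed by the documented fields.
# Values are abridged/illustrative, not full dataset entries.
corpus_record = {
    "doc_id": "81D740CEF3967C20721612B7866072EF240484E9",
    "url": "https://dataplatform.cloud.ibm.com/docs/content/DO/DODS_Introduction/DOJava.html?context=cdpaas&locale=en",
    "title": "Decision Optimization Java models",
    "document": "You can create and run Decision Optimization models in Java ...",
    "md_document": "# Decision Optimization Java models #\n...",
}

# doc_id is unique, so a dict keyed by it makes a handy gold-document
# lookup index when evaluating retrieval.
corpus_index = {corpus_record["doc_id"]: corpus_record}
```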
Question-Answers Dataset
The QA dataset includes these fields:
| Field | Description | 
|---|---|
| question_id | Unique identifier for the question | 
| question | Text of the question | 
| correct_answer | Ground-truth answer | 
| ground_truths_contexts_ids | List of ground-truth document IDs | 
| ground_truths_contexts | List of grounding texts on which the answer is based | 
Samples
Below is an example from the question_answers dataset:
- question_id: watsonx_q_2
- question: What foundation models have been built by IBM?
- correct_answer: "Foundation models built by IBM include:
  - granite-13b-chat-v2
  - granite-13b-chat-v1
  - granite-13b-instruct-v1"
 
- ground_truths_contexts_ids: B2593108FA446C4B4B0EF5ADC2CD5D9585B0B63C
- ground_truths_contexts: Foundation models built by IBM \n\nIn IBM watsonx.ai, ...
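Because each question carries gold document IDs, retrieval quality can be scored directly against them. A minimal sketch, in which the function name and the ranked list are hypothetical and only the gold ID comes from the sample above:

```python
def hit_at_k(retrieved_doc_ids, gold_doc_ids, k=5):
    """True if any gold document appears in the top-k retrieved ids."""
    gold = set(gold_doc_ids)
    return any(doc_id in gold for doc_id in retrieved_doc_ids[:k])

# Gold ID taken from the sample record; the ranked list is made up.
gold_ids = ["B2593108FA446C4B4B0EF5ADC2CD5D9585B0B63C"]
ranked = ["SOME_OTHER_DOC_ID", "B2593108FA446C4B4B0EF5ADC2CD5D9585B0B63C"]
hit_at_k(ranked, gold_ids, k=2)  # -> True
```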
Citation
If you decide to use this dataset, please consider citing our preprint
@misc{orbach2025analysishyperparameteroptimizationmethods,
      title={An Analysis of Hyper-Parameter Optimization Methods for Retrieval Augmented Generation}, 
      author={Matan Orbach and Ohad Eytan and Benjamin Sznajder and Ariel Gera and Odellia Boni and Yoav Kantor and Gal Bloch and Omri Levy and Hadas Abraham and Nitzan Barzilay and Eyal Shnarch and Michael E. Factor and Shila Ofek-Koifman and Paula Ta-Shma and Assaf Toledo},
      year={2025},
      eprint={2505.03452},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.03452}, 
}
Contact
For questions or feedback, please:
- Email: [email protected]
- Or, open a pull request or discussion in this repository.