MIKE Workbench User Interface

The MIKE Workbench has an IDE-style user interface (UI) where all windows reside under a single parent window, referred to as the Shell.

The Shell contains dockable and collapsible child windows, tabbed windows and splitters for resizing of child windows.

There are four types of windows available: Explorers, Toolbox, Data views, and a Property control. The Toolbox and Property control are Shell controls that are always available, whereas the available Explorers and Data views depend on the system configuration of the Workbench.

The default window docking configuration is displayed below, but the user may change this.

The MIKE Workbench is based on a plug-in architecture where content and functionality are added to the Shell by loading one or more so-called Managers. Each manager may add one or more Explorers and/or Data views to the Shell. Moreover, each manager may also supply a number of tools to the Toolbox.

Explorers

Explorers are supplied by the Managers that have been loaded into the system.

Each manager may add any number of explorers to the Shell, but typically a manager will add one or two Explorers.

Explorers are often used to retrieve all, or a subset of, the data entities owned by the manager from the database.

The exact functionality will vary from manager to manager, and there are no rules defining what an Explorer should do.

By default, the explorers are docked in the left-hand side of the shell. If more than one explorer has been loaded, they will by default be grouped in a tab control.

The user may change this configuration by dragging and dropping either the individual tabs, or the entire tab control.

Data Views

In the default configuration, data views are located in the center of the application. Several data views may be opened and docked in many different configurations.

Data views can be rearranged by dragging the data view tabs to the location indicators that appear while dragging, as illustrated below.

In the example above, the data views have been arranged in three so-called tab groups. The data view tab context menu has 4 entries:

  • Close: Will close the data view.
  • Prominent: This will maximize the tab, and hide all other data view tab groups.
  • Rebalance: This will bring back all other data view tab groups.
  • Move to next tab group: Moves the tab to the next tab group.

Property control

When a user interface entity or tool is selected, the corresponding properties are displayed in the Property control. The properties can be edited, and changes will be applied instantaneously with no explicit save operation required.

Toolbox

Tools are components for analyzing and processing selected data entities (such as time series or map layers). Tools are presented in a context-sensitive manner in the Tools Explorer, e.g., time series tools are made available only when time series are selected in the Time series explorer, and GIS tools are only presented when map layers are selected in the GIS Explorer.

Editing tool settings

When a tool has been selected in the Tools explorer, the tool settings will appear in the Property control. The settings of the tool can be edited the same way as the properties of any other entity, and changes will be applied instantaneously with no explicit save operation required.

Saving tool settings

After configuring a tool, the settings can be saved for later use. This is done by clicking on the Save button in the property control.

After saving, the saved tool settings will be represented as a new node under the tool node.

Notice that the saved tool represents the settings, not the input items, and hence a saved tool can be executed on any supported input items.

To run a saved tool, simply select it in the toolbox.

Running a tool

After configuring a tool it can be executed by clicking the "Run" button in the Property control toolstrip.

When the button is clicked, the available output tools are listed. Each output tool represents one possible way to visualise or save the result of the tool execution.

Select the appropriate output tool to execute the tool.

Tool sequences

Tool Sequences allow users to store sequences of tools to be executed.

A tool sequence takes the output from one tool and uses it as the input for the next tool in the sequence.

Tool sequences are stored by tool input data type. This means that clicking an entity in the explorers will show Stored Sequences containing tools that take the specified entity type as input.

Note

Note that once a tool has been added to a tool sequence, only tools with the same input type can be added to the tool sequence.

Create a Tool Sequence

A tool sequence is created from the Add to Sequence button of the properties window of a tool.

This will open the Tool Sequence dialog with the new tool sequence.

Clicking the Save button of the tool sequence will save the tool sequence into the Stored Sequences node.

Add Tool to a Stored Sequence

Adding a tool to a sequence can be done after opening the tool sequence dialog from the Stored Sequences node by double-clicking the tool sequence.

Select either the Selected Object node to add a tool to the end of the sequence using the selected object as input, or an existing tool in the stored sequence to use the output from this tool as input to the tool being added.

The Resample and Extract time period tools use the selected object as input. The Time shift tool uses the output of the Resample tool as input.

Update Tool Properties

Properties of tools stored in sequences can be updated in the Properties window after opening the tool sequence dialog and selecting the tool to update.

Run a Tool Sequence

A tool sequence can be executed from the context menu of the tool sequence. Remember to select the entities to use as input for the tools of the tool sequence before executing.

Tool Output Configurations

Tool output configurations define additional outputs that become available when running tools. Each configuration has the following properties:

  • Tool Name: The name of the tool that the output configuration should be available for.
  • Configuration Name: The name of the tool output configuration.
  • Presentation Tool: Select the presentation tool to use for displaying the result.

Start Page

The Start Page in MIKE Workbench shows the web site configured for the active user.

By default, the DHI news page is shown.

To change the Start Page URL, go to User Settings in the System Explorer and change the user setting for Category: DHI.Solutions.Core.StartPageDataView, Key: StartPageDataView.

The value of the setting should contain URL tags, e.g. <URL>https://www.dhigroup.com/news/mike-operations</URL>.
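
For example, the complete user setting for the default news page looks like this (using the URL shown above):

    Category: DHI.Solutions.Core.StartPageDataView
    Key:      StartPageDataView
    Value:    <URL>https://www.dhigroup.com/news/mike-operations</URL>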

Please refer to the User Settings help page.

Find Data

Note

Find Data is available with MIKE OPERATIONS version 2024.2 and later.

Find Data allows searching for data entities by name in the MIKE OPERATIONS database providers, either by selecting the Find in Data Providers ribbon item or by pressing Ctrl+Shift+F.

Note

Note that Find Data will not look in remote data providers configured in explorers.

Selecting Find Data will open the Find Data dialog.

Type the search text in the Search field (use * to find all entities) and press Enter or click the Find All button.

  • Match case makes the search case sensitive.
  • Match whole word only returns entities whose name matches the search text as a whole word.
  • Look in specifies the data provider to use for searching.
    By default, the active explorer determines the selected provider.

The search results are shown in the Find dialog.

  • Type specifies the data provider type.
  • Name is the name of the entity found.
  • Path is the full path of the entity.
  • Source specifies the matching text.
  • Line specifies the line number of the match in the script storage (Script Storage provider only).
  • Col specifies the column number of the match in the script storage (Script Storage provider only).


Find data in Script storage

Tip

Double-clicking a search result will expand the associated explorer tree and highlight the entity. For the Script Storage provider, the script storage will open and the cursor will jump to the column and line of the search result.

Data Providers Available

The following data providers are available in Look in.

Data Provider Description
Document Search for documents with a specific name in the document explorer.
Document Feature Association Search for documents associated to features of feature classes in the GIS manager by matching the associated feature class name or document name. Updated with MIKE OPERATIONS 2024.3
Document Group Search for document groups with a specific name in the document explorer.
Favorite Search for favorites with a specific name in the favorite explorer.
Favorite Group Search for favorite groups with a specific name in the favorite explorer.
Feature Class Search for feature classes with a specific name in the GIS explorer.
Feature Class Group Search for feature class groups with a specific name in the GIS explorer.
Indicator Search for indicators with a specific name in the Scenario explorer.
Indicator Definition Search for indicator definitions with a specific name in the Scenario explorer.
Indicator Group Search for indicator groups with a specific name in the Scenario explorer.
Job Search for jobs with a specific name in the Job explorer.
Job Group Search for job groups with a specific name in the Job explorer.
Job Instance Search for job instances (job runs) matching status, computer name or job log content. This search option is available with MIKE OPERATIONS 2025
MCA Comparison Search for multi criteria analysis comparisons with a specific name in the Analysis explorer.
MCA Session Search for multi criteria analysis sessions with a specific name in the Analysis explorer.
MCA Setup Search for multi criteria analysis setups with a specific name in the Analysis explorer.
MCA Setup Group Search for multi criteria analysis setup groups with a specific name in the Analysis explorer.
MCA Trade-off Search for multi criteria analysis trade-offs with a specific name in the Analysis explorer.
Model Setup Search for model setups with a specific name in the Scenario explorer.
Model Setup Group Search for model setup groups with a specific name in the Scenario explorer.
Place Search for places with a specific name in the Places explorer.
Place Collection Search for place collections with a specific name in the Places explorer.
Place Collection Group Search for place collection groups with a specific name in the Places explorer.
Place Indicator Search for place indicators with a specific name in the Places explorer.
Place Indicator Style Search for place indicator styles with a specific name in the Places explorer.
Place Time Interval Search for place time intervals with a specific name in the Places explorer.
Raster Search for rasters with a specific name in the GIS explorer.
Raster Group Search for raster groups with a specific name in the GIS explorer.
Rating Curve Search for rating curves created with the Rating Curve Tool.
Report Definition Search for report definitions with a specific name in the Report explorer.
Report Definition Group Search for report definition groups with a specific name in the Report explorer.
Scenario Search for scenarios with a specific name in the Scenario explorer.
Script Search for scripts with a specific name in the Script explorer.
Script Group Search for script groups with a specific name in the Script explorer.
Script Storage Search for script content containing the search text in script storages in the Script explorer.
Simulation Search for simulations with a specific name in the Scenario explorer.
Spreadsheet Search for spreadsheets with a specific name in the Spreadsheet explorer.
Spreadsheet Feature Association Search for spreadsheet associations to features of feature classes in the Spreadsheet explorer by matching the associated feature class name or spreadsheet name. Updated with MIKE OPERATIONS 2024.3
Spreadsheet Group Search for spreadsheet groups with a specific name in the Spreadsheet explorer.
Time series Search for time series with a specific name in the Time series explorer.
Time series Feature Association Search for Time series associations to features of feature classes in the Time series explorer by matching the associated feature class name or time series name. Updated with MIKE OPERATIONS 2024.3
Time series Group Search for Time series groups with a specific name in the Time series explorer.

Trouble Shoot

The Trouble Shooting wizard helps the user of MIKE OPERATIONS configure the system and solve known configuration issues.

Rule

A number of rules have been made to help the administrator of MIKE OPERATIONS solve common configuration issues.

Click the "Analyse all" button to make the wizard search for various configuration issue.

A green icon is shown for rules that passed successfully (i.e. no issue was detected). A red icon means the system has an issue. The Details button gives more information about the selected rule, including how to fix it. The Troubleshoot button will help in this process.

View Log

MIKE OPERATIONS logs various issues and errors when they occur.

Click the links to open log locations.

Application Log

Opens the latest application log found in %temp%\DHIDSS.

Go here to find crash reports (*.svclog) and the exceptions caused by invalid data and application behavior.
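
If many log files have accumulated, the newest crash report can also be located programmatically. The following is a minimal Python sketch (an illustration only, not part of MIKE OPERATIONS) that finds the most recent *.svclog file in %temp%\DHIDSS:

    # Minimal sketch: locate the newest crash report (*.svclog) in %temp%\DHIDSS.
    import os
    from pathlib import Path

    log_dir = Path(os.environ["TEMP"]) / "DHIDSS"
    svclogs = list(log_dir.glob("*.svclog"))
    if svclogs:
        # Pick the file with the latest modification time.
        latest = max(svclogs, key=lambda p: p.stat().st_mtime)
        print("Latest application log:", latest)
    else:
        print("No crash reports found in", log_dir)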

Job log

Opens the folder c:\Windows\Temp where logs written by the job manager can be found.

Go here to find job logs containing information about jobs being started as well as jobs failing.

File name Description
JobExecution<yyyy-MM-dd_HH-mm-ss>.Log The log is written to the last JobExecution*.Log file available. If no such file exists, one is created when the first job is executed, so the date in the file name is the date the file was created. The file contains information about started jobs, including the properties used, as well as exceptions thrown when jobs fail.
DHI.Solutions.JobManager.JobRunner-<PID>.svclog Created when a job is executed. PID is the Windows process id at the time the process was running (seen e.g. in the Windows Task Manager). The file contains exception information in case a job fails unexpectedly.

Note

A new JobExecution.Log file is created if:

  • the age of the log file exceeds the ServiceLogFilesMaxHours setting of the DHI.Solutions.JobManager.Config file.
  • the log file size exceeds the ServiceLogFilesMaxMBSize setting of the DHI.Solutions.JobManager.Config file.

Log files are deleted (oldest first) when the log file count exceeds the ServiceLogFilesMaxCount setting of the DHI.Solutions.JobManager.Config file.

The DHI.Solutions.JobManager.Config section below shows the settings associated with the job log.

    <?xml version="1.0" encoding="utf-8" ?>
    <configuration>
      <appSettings>
        <!-- Log file settings -->
        <add key="ServiceLogFilesLocation" value="" />
        <add key="ServiceLogFilesMaxMBSize" value="1" />
        <add key="ServiceLogFilesMaxHours" value="4" />
        <add key="ServiceLogFilesMaxCount" value="10" />
      </appSettings>
    </configuration>
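
As a rough illustration of how these settings interact (a sketch only, not the actual Job Manager implementation), the logic can be thought of as follows:

    # Sketch only: illustrates how the ServiceLogFiles* settings described above
    # could govern when a new JobExecution*.Log file is started and when old
    # files are removed. Values mirror the defaults in the config sample.
    import time
    from pathlib import Path

    MAX_HOURS = 4    # ServiceLogFilesMaxHours
    MAX_MB = 1       # ServiceLogFilesMaxMBSize
    MAX_COUNT = 10   # ServiceLogFilesMaxCount

    def needs_new_log_file(log_file: Path) -> bool:
        """A new log file is started when the current one is missing, too old or too big."""
        if not log_file.exists():
            return True
        age_hours = (time.time() - log_file.stat().st_mtime) / 3600
        size_mb = log_file.stat().st_size / (1024 * 1024)
        return age_hours > MAX_HOURS or size_mb > MAX_MB

    def prune_old_logs(log_dir: Path) -> None:
        """The oldest log files are deleted when the count exceeds MAX_COUNT."""
        logs = sorted(log_dir.glob("JobExecution*.Log"), key=lambda p: p.stat().st_mtime)
        while len(logs) > MAX_COUNT:
            logs.pop(0).unlink()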
    

Note

The process id (PID) of the .svclog file can also be found in the job log file (.Log).

When jobs fail as a result of an exception being thrown, events are also added to the Windows Event Log (see below). Use the date and time of the failing job to find the associated event in the Windows Application Event Log.

Event log

The Windows Event Viewer displays system events and can be used for monitoring and troubleshooting Windows and applications.

By default, MIKE OPERATIONS writes MIKE OPERATIONS service information (Job Manager Service and Event Manager Service) into the Windows Event Log.

This means that information events are written when services are started and stopped, and error events are written if processes of the services fail.

Windows Events written as a result of MIKE OPERATIONS Services will have the following information.

Manager | Service Name | Event Source | Description
Event Manager | DHI Solutions Event Manager Service | DSSEventService | Information event when starting and stopping the service, or an error if the service was not able to start.
Job Manager | DHI Solutions Job Manager Service | DHI.Solutions.JobManager.Service | Warning and error messages written e.g. when the service is started or when log file handling fails:
  • Error: The job runner was not found.
  • Error: A scheduled job was not able to start.
  • Warning: The job log did not clean up properly.
Job Manager | DHI Solutions Job Manager Service | JobRunner, DSS JobRunner | Written when a job fails as a result of an exception being thrown.

All default messages will get Event ID = 0 in the Windows Event Log.

Job Schedules

Opens Windows Explorer showing the folder C:\ProgramData\DHI containing the file JobSchedules.xml with job schedule information for the job service running on the current workstation.

The file contains information about all scheduled jobs.

JobSchedules.xml is read by the job service when starting up and written when the job service stops. This means that newly scheduled jobs cannot be found in JobSchedules.xml before the job service restarts.

To force JobSchedules.xml to be written, go to Services from the Windows Start menu and restart the DHI Solutions Job Manager Service <version>.

Note

For viewing the JobSchedules.xml of a remote job service, go to the server running the job service.

Application log folder

Opens Windows Explorer showing the application log folder %temp%\DHIDSS.

The application log folder contains log files <ApplicationName>-<ProcessId>.svclog written when exceptions are thrown.

Check this log if e.g. MIKE OPERATIONS crashes, and inform DHI about the issue.

License log

Shows the DHI license log NetLmLcw.log for the current session.

License log (Jobs)

Shows the DHI license log NetLmLcw.log for jobs.

Clicking this link requires administrator rights, as the license log of jobs is usually found in another user's folder.

Additional Information

This tab contains links to additional information provided by DHI.

  • DHI Knowledge Base: Displays the DHI Knowledge Base.
  • Go to YouTube channel: Opens the DHI YouTube channel, containing various videos about DHI projects and software.
  • Request training course: Links to DHI Training, where information about training, research, papers and how to contact DHI can be found.
  • Make a support request: Sends a mail to MIKE Support for help on solving an issue in a DHI software product.
  • DHI Developers: Shows the DHI Developers page with useful information for developers and configurators.

Database usage

The Database usage wizard helps the user of MIKE OPERATIONS monitor how the database storage is being used. This provides useful insight when doing database maintenance, as it shows which items take the most space on disk.

Database usage

The table shows information about each MIKE OPERATIONS database table. In addition to the table size and the number of entries (rows), several details are shown, such as the module it belongs to, the API namespace it refers to, and the name of the entity.

  • Database Size (GB): The total database size in GB reported by PostgreSQL using the command SELECT pg_size_pretty(pg_database_size('<database name>')) (see the sketch below).
  • Calculated table size (GB): The sum of the sizes in GB of all tables in the list.
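
The same figures can be queried directly from PostgreSQL. Below is a minimal Python sketch (an illustration only; the connection parameters are placeholders and the per-table query is not necessarily the one used by the wizard):

    # Sketch only: report the database size and the largest tables, roughly
    # matching the numbers shown by the Database usage wizard.
    # Adjust the connection parameters to your MIKE OPERATIONS database.
    import psycopg2

    conn = psycopg2.connect(host="localhost", dbname="mike_operations",
                            user="postgres", password="secret")
    with conn, conn.cursor() as cur:
        # Total database size, as used for "Database Size (GB)".
        cur.execute("SELECT pg_size_pretty(pg_database_size(current_database()))")
        print("Database size:", cur.fetchone()[0])

        # Per-table sizes; their sum corresponds to "Calculated table size (GB)".
        cur.execute("""
            SELECT relname, pg_size_pretty(pg_total_relation_size(oid))
            FROM pg_class
            WHERE relkind = 'r'
            ORDER BY pg_total_relation_size(oid) DESC
            LIMIT 10
        """)
        for name, size in cur.fetchall():
            print(name, size)
    conn.close()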

Note

The total size in MB of a table could be large even though only a few rows are in the database.
This mismatch is usually because a database vacuum is needed to free up space on disk.

Table details

Some tables can be investigated further.

time_series_value:

For all time series that are not stored as blobs, the actual time steps are stored in the table time_series_value. The size of the table reflects all the time steps of all such time series. Clicking Analyse allows the user to investigate the size of individual time series in the database.

blob:

Blobs are binary objects that are stored in the database. In a MIKE OPERATIONS context, these can be various types of data, such as model objects, model result files, model folders, model initial conditions, spreadsheets, documents from Document Manager, report templates, etc.

Clicking the Analyse button will help the user understand which manager is using this table and how.

Python Package

Note

This wizard was released with MIKE OPERATIONS 2025.2.

MIKE OPERATIONS automatically installs Python libraries commonly used by data scientists and water engineers. These libraries are directly usable in the Script Manager. However, the libraries are hidden in the bin folder, e.g.:

    C:\Program Files (x86)\DHI\MIKE OPERATIONS\2025\bin\python-stdlib\python\site-packages

This makes it hard for users to know what is available (and in which version) and to update them.

To simplify this, a Python Package Wizard was created. It can be opened by clicking the Python package button in the Scripting group.

This opens a form that lists the Python packages already installed.

Several options are available:

Control Description
Package name textbox Name of the package to install / update
Search button Looks for the package name on PyPI and fills in the available versions
Version drop down Allows the user to pick the version to install
Install button Gets the library from PyPI and installs it in the bin folder
Refresh button Reads the bin folder and checks for available libraries
Close button Saves and closes the form

To install Python modules, pip must be installed. Please find more information here.
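
If the wizard cannot be used, a package can in principle also be installed into the same folder with pip's --target option. The snippet below is a sketch only (the package name, version and installation path are examples, and writing to Program Files typically requires administrator rights):

    # Sketch only: install a package into the MIKE OPERATIONS site-packages
    # folder using pip's --target option. Run from a Python with pip available.
    import subprocess
    import sys

    TARGET = r"C:\Program Files (x86)\DHI\MIKE OPERATIONS\2025\bin\python-stdlib\python\site-packages"

    subprocess.check_call([
        sys.executable, "-m", "pip", "install",
        "--target", TARGET,
        "--upgrade",
        "pandas==2.2.2",  # example package and version
    ])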

Provider Configuration

Note

This wizard was released with MIKE OPERATIONS 2025.2.

MIKE OPERATIONS supports the concept of Providers, which allows MIKE Workbench (i.e. the user interface) to work with data not stored in the database and with services hosted on other machines. These features were previously hidden and cumbersome to use.

The provider configuration makes it easy for users to define the different data sources. It can be opened by clicking the Providers button in the Data group.

This opens a form that lists the data sources already configured.

Several options are available:

Control Description
New button Creates a new line in the table to configure a data source
Validate all button Runs the validation routine on all data sources to ensure they are configured properly and still accessible
Delete selected button Deletes the lines that are ticked in the Select column
Select column Allows selection of data sources for deletion
Source column Defines the type of source
Connection button Opens a configuration form
Apply to column Defines in which Manager to create the Provider to enable this data source
OK button Validates and creates the new providers in the corresponding Managers
Cancel button Closes the form without saving

Source

Several data sources can be chosen, as explained in the table below.

Source Description Applicable Managers
DevOps A DevOps repository Script
DIMS.Core Time series
Folder Raster, Mesh and Time series
GitHub Script
Job Host Job
MIKE Cloud Document, Raster, Mesh, Feature Class, Job, Scenario and Time series
MIKE OPERATIONS Time series
WFS/WMS Layer (GIS)

Configuration

For each type of Source, a different configuration form will open. The parameters to specify are documented in the different Managers (see links in the table above). Several minor additional parameters have been added:

Parameter Source type Description
Alias All Defines the name visible in the Manager data tree view
SSO MIKE Cloud This check box allows connecting to MIKE Cloud with Single Sign On. A new Provider will show the data that the current user has access to on MIKE Cloud. If not selected, only the data accessible to the API key will be visible.
Script library GitHub Makes a connection to the GitHub MIKE OPERATIONS script library