User interface support¶
The description of jobs has so far mainly been at what could be called the source code level, namely the job XML format. However, this is not how most users will work with jobs. The MIKE Workbench Job Manager provides an extensive user interface that completely shields the user from the intricacies of the XML format.
This section describes how to create, execute and schedule jobs from within the Job Manager User interface.
The Job Explorer¶
The Job Explorer, depicted in Figure 3, is used for managing jobs: opening them for editing, renaming them and deleting them. Jobs are also executed and scheduled from here.
Figure 3 Job explorer and job context menu
Table 2 briefly describes the menu item actions.
Menu item | Description |
---|---|
Open | Opens the selected job for editing in the Job Editor view. |
Copy full path | Copies the full path of the job in the Job Explorer to the clipboard. |
Refresh | Refreshes the job tree. |
Delete | Deletes the selected job and all schedules defined for it. |
Rename | Renames the selected job. |
Execute | Executes the selected job. Note: a schedule is typically set up for automatic scheduled execution, while the Execute menu item is for direct one-time execution. |
Create a Schedule | Sets up an execution schedule for the selected job on a specified job host. |
Edit Schedule | Edits existing job schedules on a specified job host. |
Unschedule | Unschedules the selected job on a selected job host. |
Enable | Enables or disables the job schedule. |
Export | Exports the selected job as an XML file. |
Import | Imports an XML job file to the Workbench database. |
Table 2 Job menu items
Note: In order to create a new job, select the Create job menu item on the root node (Database) in the Explorer.
An executed job, whether scheduled or directly executed, always produces a log of the execution. These logs are available as child nodes of the job. In Figure 3 it can thus be deduced that Job-3 has been executed once and Job-1 seven times. The job log name is the time stamp of the execution.
Defining job hosts and provider services¶
From the context menu of the root node in the Job Explorer, select Edit Provider(s) Connection(s) to specify hosts and provider services.
The dialog allows configuration of the hosts and job service providers.
- Provider - Select the provider service to configure.
- Connection - Specify the connection string of the provider. Refer to the provider service below for how the connection string is configured.
- Alias - Optional alias used as the display name for the provider service.
Hosts¶
Hosts are computers that can run simulations and jobs as background processes, i.e. processes that are separate from the application process that the user works with when defining and starting the run.
Provider Services¶
A provider service is a service able to run jobs stored in the job manager of MIKE Workbench.
MO Job Service Provider¶
The native job service provider MO Job Service Provider
to be used
when Windows Servers and workstations are used for running jobs.
This job service provider used the Windows Scheduler for scheduling and running jobs.
The connection string must have the following format (separate the name/IP and the port with a comma):
<hostname>,<port>
- The name or IP address of the computer.
- The Job Service port. The default value is 8089, which is also used if the port is left empty.
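For example, to connect to the job service on a host named JOBHOST01 (a hypothetical machine name) using the default port:

```
JOBHOST01,8089
```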
MIKE Cloud Job Service Provider¶
The MIKE Cloud Job Service Provider is a job service for executing jobs in the DHI MIKE Cloud Platform.
The MIKE Cloud Job Service runs a container based on a predefined container image. For scheduling jobs, the MIKE Cloud Job Service uses cron jobs.
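For orientation only: assuming a conventional five-field cron expression (the exact cron dialect used by the service is an assumption here, and triggers are normally entered through the scheduling user interface), a trigger that fires every three hours would be written as:

```
0 */3 * * *
```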
The connection string should have the following format:
<Environment>;<API key>;<project path>
- Environment - the Platform environment to use. By default, the environment is the production environment "Prod".
- API key - a GUID generated in the MIKE Cloud Admin Center.
- Project path - the path to the project in the MIKE Cloud Platform. The path should start with a slash (/). The specified project will be charged for the use of the service.
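For example, using the production environment with a hypothetical (not valid) API key and project path:

```
Prod;12345678-1234-1234-1234-123456789abc;/MyProject
```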
When using the MIKE Cloud Job Service, it is important that the cloud job service has access to the MIKE OPERATIONS database.
It is recommended to use an Azure Database for PostgreSQL.
The MIKE Cloud Job Service Provider has been tested using Azure Database for PostgreSQL, Single Server and Flexible Server.
Refer to the section about configuring Azure Database for PostgreSQL in the Database Manager Utility help file.
Tip
If the connection fails, clear the cache by deleting the following file (for Prod):
`%localappdata%\DHI.oidccache.bin`
Supported Job Tasks¶
When creating jobs based on the MIKE Cloud Job Service Provider, only a subset of the standard tasks supported by the native MO Job Service Provider can be used.
The following tasks are supported.
- ExportDocument
- RunJob
- GetScenarioInfo
- RunScenario
- RunScript
- MakeTimeStamp
- ManageChangeLog
- ManageInitialConditions
- ManageJobLogs
- ManageSimulations
- Vacuum
Container Image¶
The container image used for running jobs on the MIKE Cloud Job Service is based on "mcr.microsoft.com/dotnet/runtime:6.0-bullseye-slim", targeting Microsoft .NET 6.0 and running on a Debian Linux image (Bullseye).
For more information, refer to the MIKE OPERATIONS SDK section on DHI Developers.
The container image contains MIKE OPERATIONS managers, tools, providers and adapters, based on MIKE OPERATIONS NuGet packages targeting .NET Standard 2.0 and published on the internal DHI NuGet feed.
The following NuGet packages are installed in the container image (including dependent managers).
For an updated list of NuGet packages installed on the container image, refer to DHI Developers.
For information on what each NuGet package contains, refer to the MIKE OPERATIONS SDK section on DHI Developers.
- DHI.MikeOperations.DocumentManager.Provider.MikeCloud
- DHI.MikeOperations.DocumentManager.Tools.Export
- DHI.MikeOperations.DocumentManager.Tools.Import
- DHI.MikeOperations.GISManager.Provider.MikeCloud.GIS
- DHI.MikeOperations.GISManager.Provider.MikeCloud.Raster
- DHI.MikeOperations.GISManager.Tools.Dfs2Import
- DHI.MikeOperations.GISManager.Tools.Dfs2TemporalRasterFileExport
- DHI.MikeOperations.GISManager.Tools.Dfs3Import
- DHI.MikeOperations.GISManager.Tools.DfsuImport
- DHI.MikeOperations.IndicatorManager
- DHI.MikeOperations.JobManager.Tools.JobExport
- DHI.MikeOperations.MetadataManager
- DHI.MikeOperations.PlacesManager
- DHI.MikeOperations.ScenarioManager
- DHI.MikeOperations.ScriptManager.IronPython
- DHI.MikeOperations.SpreadsheetManager
- DHI.MikeOperations.TimeseriesManager.Provider.MikeCloud
- DHI.MikeOperations.TimeseriesManager.Tools.AdvancedStatistics
- DHI.MikeOperations.TimeseriesManager.Tools.BasicStatistics
- DHI.MikeOperations.TimeseriesManager.Tools.ImportTools
- DHI.MikeOperations.TimeseriesManager.Tools.ImportTools.USGS
- DHI.MikeOperations.TimeseriesManager.Tools.MikeCloudUpload
- DHI.MikeOperations.TimeseriesManager.Tools.Processing
- DHI.MikeOperations.TimeseriesManager.Tools.TimeseriesExport
- DHI.MikeCore.Linux.rhel7
- DHI.MikeOperations.ScenarioManager.Adapters.MIKE1D
- DHI.MikeOperations.ScenarioManager.Adapters.MIKE21FM
- DHI.MikeOperations.ScenarioManager.Adapters.MIKEFlood
- Npgsql
If no job host is defined, the system will default to "localhost". This is only feasible in a single-machine setup where MIKE OPERATIONS and the database are on the same machine.
In systems with multiple servers, named job hosts should be defined. This makes it clear which machine holds the schedule and allows cross-machine scheduling, where a client can schedule any job on any machine.
Figure 12 Provider Connection dialog, defining 5 remote hosts and service providers
When clicking OK, the name and connection of the job hosts are validated.
The Job Editor View¶
This view, depicted in Figure 4, is used for editing jobs. As can be seen, it provides a tree-structured view with three types of root-level nodes. These are:
- A `Properties` node used for defining global job properties
- An `ItemGroup` node used for defining global items
- `Target` nodes
Figure 4 Job editor view
Note the following from the figure:
- `PrepareData` at point 1 is a target including 15 tasks
- `CreateProperty` at point 2 is a task including an Output element
- `Value` at point 3 represents an Output element
- Properties for any selected node can be edited from the property control shown at point 4
- The toolstrip shown at point 5 provides functionality for saving the job being edited, adding a new element to the job (the type of element depends on the currently selected node), copying and pasting elements, moving elements up and down and, finally, deleting elements.
Elements - properties, items, targets, tasks and output - are always added from their parent node. This is done in the following way:
- To add a new target: select the root node (representing the whole job) and click the `Add` button
- To add a new property: select the `Properties` node and click the `Add` button
- To add a new item: select the `ItemGroup` node and click the `Add` button
- To add a new task: select the relevant target node and click the `Add` button
- To add a new output element: select the relevant task and click the `Add` button.
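The editor tree mirrors the underlying MSBuild-style job XML. The following is a minimal sketch only, with hypothetical property, item and target names; export an existing job from the Job Explorer to see the authoritative layout:

```xml
<!-- Minimal sketch of a job; all names and values below are hypothetical -->
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <!-- Global job properties (the Properties node in the editor) -->
    <ReportDate>2024-06-01</ReportDate>
  </PropertyGroup>
  <ItemGroup>
    <!-- Global items (the ItemGroup node in the editor) -->
    <InputFile Include="C:\Data\input.dfs0" />
  </ItemGroup>
  <!-- Targets contain the tasks to execute -->
  <Target Name="PrepareData">
    <Message Text="Preparing data for $(ReportDate)" />
  </Target>
</Project>
```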
When clicking the Add button for adding a task, a task selection form appears, as shown in Figure 5.
Figure 5 Task selection form
As can be seen from the figure, tasks are categorized according to functionality area: scenarios, time series etc. The last task category - MSBuild Tasks - includes a number of general-purpose tasks which are not Workbench specific but rather deal with file handling and job-building functionality. This includes tasks like `CallTarget`, `OnError`, `Message`, `WriteLinesToFile`, `ReadLinesFromFile` etc.
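As a small illustration of how these general-purpose tasks can be combined (the target names below are hypothetical; `Message`, `CallTarget` and `OnError` are standard MSBuild tasks):

```xml
<!-- Hypothetical target names; a target chaining and guarding other targets -->
<Target Name="Main">
  <Message Text="Starting the run" />
  <CallTarget Targets="PrepareData" />
  <!-- If a task in this target fails, execute the Cleanup target -->
  <OnError ExecuteTargets="Cleanup" />
</Target>
<Target Name="Cleanup">
  <Message Text="Run failed - cleaning up" Importance="high" />
</Target>
```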
The Job Instance Log¶
Whenever a job is executed, a job instance log is generated. The log includes the status of each executed task, including its input, output, processing time and memory usage. An example of such a log is displayed in Figure 6.
Figure 6 Job instance log
The log displayed in the figure comes from an execution of the sample job in Listing 1. Note in the figure how the log displays a green icon, indicating that the task executed without errors, and how each task execution is logged in three sections: `Information`, with an overall status of the execution; `Properties`, with a listing of all the task input and output parameters and their values; and a `Log` section, which includes various task-specific log information.
Note
The log is created and updated in the database as the job progresses through the task execution. At the same time, the Job Explorer is notified about the changes to the job log, in order to refresh the job user interface with the latest information on the executing jobs. This notification, however, relies on the MIKE Workbench Event Manager being active. Should this not be the case, the job user interface will not be updated while the job is being executed.
Execute a job¶
Job execution takes place from the Job Explorer context menu by selecting the `Execute...` menu item (see Figure 3), which will lead to the `Job Execution` form shown in Figure 7.
Figure 7 Execute dialog.
The user will have to specify:
- The name of the computer - the job host - that will host the simulation. Job hosts must be defined, as described above, before they can be used for executing jobs.
- The target or targets from the job file that shall be executed.
- Optionally, a maximum allowed execution time for the job (after which it will be killed).
The `Settings` tab, which is shown in Figure 8 below, is used for defining job properties. These are key=value settings for the job, allowing the user to specify a value of a job property for this specific execution of the job (overriding any value set in the job itself).
Figure 8 Job property definition
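For example, a single run could override two job properties (the names and values below are hypothetical):

```
ReportDate=2024-06-01
OutputFolder=C:\Temp\Results
```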
Schedule a job¶
Instead of directly executing a job as described in the previous section, a job can be scheduled for execution. This works very similarly to direct execution, through selection of the `Create a schedule...` menu item on the job context menu. To change an existing schedule, select the `Edit schedule...` menu item.
When creating a schedule, the user selects the job host to schedule the job on and then adds the schedule, see Figure 9. It is only possible to add schedules on job hosts which do not already have a schedule for the job. To add new schedules for a job already scheduled on a job host, use Edit Schedule.
Figure 9 Create a Schedule
Note
When scheduling jobs on a remote host, the ServiceHost name should be specified in the `DHI.Solutions.JobManager.Config` file on the host computer, so that it does not use the default value `localhost`.
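The exact schema of this config file is not reproduced in this manual; purely as a hypothetical sketch of the idea, the setting could look along these lines (verify against the actual file on the host):

```xml
<!-- Hypothetical sketch only - consult the actual DHI.Solutions.JobManager.Config
     on the host computer for the real layout -->
<configuration>
  <appSettings>
    <add key="ServiceHost" value="JOBHOST01" />
  </appSettings>
</configuration>
```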
A schedule is a specification of how to run the job on a job host at predefined times. Times are given by one or more triggers, each of which can be recurring.
Figure 10 Add Schedule - General
When adding a schedule, the user is presented with three tab pages: the same two as used for direct execution, as well as a `Triggers` tab page. The latter is depicted in Figure 11 below.
Figure 11 Triggers
Users will apply triggers, which are date and time specifications, for defining when the execution shall take place. Executions can be defined as single or recurrent executions, for example every three hours.
Users can schedule a job to run only once, or on a daily, weekly or monthly basis.
Disable a scheduled job¶
Scheduled executions can be cancelled through the `Unschedule...` menu item in the job context menu.
Note
In cases where a job has been scheduled for execution on multiple job hosts, the user will be prompted for which of the hosts the schedule should be cancelled on.
Schedule status information¶
Scheduled jobs have an icon showing the status of the schedule.
Status icon | Status | Description |
---|---|---|
![]() | Enabled | The job is scheduled and enabled on the computer specified on the job. |
![]() | Disabled | The job is scheduled but disabled on the computer specified on the job. |
![]() | Unavailable | The job is scheduled either locally or on a remote host (Computer is specified on the job), but the schedule on the host computer running the job service cannot be found, either because the host cannot be reached or because the job schedule for the job cannot be found in the jobschedules.xml of the job service computer. |
Mail notification after running a scheduled job¶
MIKE Workbench allows sending mails when a job has been executed using a job schedule.
Configure SMTP¶
In the property window of the job root node, the SMTP (Simple Mail Transfer Protocol) information for sending mails can be configured.
When clicking the ellipsis button, the following dialog is shown for configuring SMTP.
Property | Description |
---|---|
Server name | The SMTP server name as defined by the SMTP provider. You can find it on your provider's web page. |
Port number | The port number used by the SMTP server for sending mails. |
Use SSL Encryption | Use Secure Socket Layer (SSL) encryption for sending mail. SSL provides a way to encrypt a communication channel between two computers over the Internet. |
User Name | The username used to login to the SMTP Server. |
Password | The password used to login to the SMTP Server. |
Sending mails using job schedules¶
When SMTP has been configured, it is possible to configure a job schedule to send a mail after the scheduled job has completed.
Check the Send mail check box and specify the recipients of the mail being sent. When sending to multiple recipients, separate the mail addresses with semicolons (e.g. user1@example.com;user2@example.com).
The mail being sent will contain the following information:
- Subject - the job name and the job host name.
- Message body - the job instance log, as also found when opening a job instance.
Accessing Remote host/job service details¶
The Job Service on a remote job host may serve multiple MIKE OPERATIONS databases. The Job Explorer only displays information about jobs in the current database.
The Remote Host dialog (see Figure 12) enables direct communication with the Job Service and displays details of all the jobs in the Job Service. It also provides the possibility to control the Job Service.
Job Service Provider details¶
Selecting a job host and clicking the "Schedule" button will display all job schedules in the Job Service of the remote host.
Figure 13 Job Service schedules
From this dialog it is possible to inspect the triggers of a job using the "Show Triggers" button (see Figure 14), as well as to unschedule a job using the "Unschedule" button (select the whole line in the list) - even if the schedule belongs to a job defined in another database (see Figure 15).
Figure 14 Job triggers
Figure 15 Unschedule in Job Service
Job Service Control¶
The "Settings..." button in the Remote hosts dialog opens a small dialog from where it is possible to control the Job Service, see Figure 16.
Figure 16 Controlling Job Service
The options are:
- Active/Pause. Active is the normal running mode of the Job Service. In this mode, it will trigger jobs to be run. Pause will suspend the triggering of jobs. This can be useful if extraordinary maintenance of the database is needed without new jobs starting and connecting to it.
- Standard/Verbose logging instructs how much the Job Service will write to the `JobExecution*.log` files that it produces. Verbose is useful if extended debugging of behaviour is needed. The log files are usually written in `C:\Windows\temp\DHIDSS`.
- Reload. Clicking this button will instruct the Job Service to reload the schedules, thus resetting the schedule details in memory. The schedules are kept in `C:\ProgramData\DHI\JobSchedules.xml`. Note that the directory `C:\ProgramData` is hidden by default, but may be referred to directly in Windows Explorer.