Inmanta Service Orchestrator 6.2 Release

April 11, 2023 by Bart Vanbrabant

The main feature of the Inmanta Service Orchestrator 6.2 release is the LSM expert mode. In addition, it contains many improvements, for example several compiler improvements that offer a smoother experience to the model developer. One such improvement stands out from the rest: namespace inference for constructors. See below for a more in-depth overview of the LSM expert mode and the compiler’s namespace inference. For a full changelog, see the documentation.

LSM Expert mode

LSM expert mode is a feature that was first introduced in the API in 6.1. This release completes it with some more functionality and, more importantly, integrates it into the Web Console.

Expert mode allows you to circumvent some of the protections we put in place when interacting with LSM service instances. This gives you more direct control over the instance. Example use cases are updating many service instances without triggering a recompile between each update, or forcefully setting an instance’s attributes or lifecycle state when the model lacks the appropriate modifiers or lifecycle edges. It is a powerful feature that requires a good understanding of the orchestrator to use safely (hence the name).

You can enable LSM expert mode from the environment’s settings page. When it is enabled, a red banner shows at the top of the Web Console as a warning. Only leave it enabled for as long as required.

With expert mode enabled, some new buttons appear on the service instance status and attributes tabs. These allow you to completely destroy the service instance, to force it to a different lifecycle state, or to update the attributes in any attribute set (e.g. the rollback attributes). None of these expert actions trigger any side effects such as a compile or an attribute set promotion. The user is responsible for triggering any appropriate follow-up actions by hand.

For more advanced scenarios, the orchestrator also exposes the expert mode operations directly as API endpoints. This allows expert mode changes at a large scale by using a script. The earlier example, updating the attributes of many service instances at once, becomes a simple script that performs the expert mode attribute patch for each instance, as sketched below. Afterwards, a single compile can be triggered to build the new version of the desired state (resources).
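As an illustration, the sketch below shows what such a script could look like in Python with the requests library. The endpoint paths, the payload shape, and names such as vlan_service and the instance ids are assumptions made for the sake of the example, not the documented API; consult the LSM API reference of your orchestrator version for the exact expert mode routes.

# Sketch only: the endpoint paths and payloads below are assumptions; check the
# LSM API reference of your orchestrator version for the exact expert mode routes.
import requests

BASE_URL = "https://orchestrator.example.com:8888"  # assumed orchestrator address
ENV_ID = "f2f9a3b1-0c1d-4e5f-9a7b-1234567890ab"     # target environment id
HEADERS = {"X-Inmanta-tid": ENV_ID}                 # header that selects the environment

SERVICE_ENTITY = "vlan_service"                     # hypothetical service entity name

# New attribute values per service instance id (hypothetical data).
updates = {
    "instance-id-1": {"vlan_id": 100},
    "instance-id-2": {"vlan_id": 200},
}

# 1. Patch each instance through the expert mode attribute endpoint (assumed
#    route). These calls do not trigger compiles or attribute set promotions.
for instance_id, attributes in updates.items():
    response = requests.patch(
        f"{BASE_URL}/lsm/v1/service_inventory/{SERVICE_ENTITY}/{instance_id}/expert",
        json={"attributes": attributes, "comment": "bulk expert mode update"},
        headers=HEADERS,
    )
    response.raise_for_status()

# 2. Trigger a single recompile afterwards (assumed route) so the orchestrator
#    builds the new desired state for all patched instances at once.
requests.post(f"{BASE_URL}/api/v1/notify/{ENV_ID}", headers=HEADERS).raise_for_status()

The important part is the pattern: many expert mode patches followed by exactly one compile, rather than a recompile per update.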

Compiler namespace inference

In an ongoing effort to improve the usability of our modelling language, we now support type inference. A first major building block in that effort was the support for constructor trees. This release adds namespace inference: constructors that are assigned to a relation can now be used with the entity’s name only, without having to explicitly specify the namespace. The compiler will then automatically find the appropriate type, provided that the namespace is imported.

This greatly improves ease of development and readability of large models. This is especially true for those using yang-generated models, which are known to have very deeply nested namespaces.

For example, consider the partial model below, taken from our networking quickstart. The support for constructor trees already allows us to model it intuitively as a tree, but it is still very verbose, and because of the long namespaces the entities involved are difficult to see at a glance.

import nokia_srlinux
import nokia_srlinux::network_instance
import nokia_srlinux::network_instance::protocols
import nokia_srlinux::network_instance::protocols::ospf
import nokia_srlinux::network_instance::protocols::ospf::instance
import nokia_srlinux::network_instance::protocols::ospf::instance::area
import yang

leaf1 = nokia_srlinux::GnmiDevice(
    auto_agent = true,
    name = "leaf1",
    mgmt_ip = "172.30.0.210",
    yang_credentials = yang::Credentials(
        username = "admin",
        password = "NokiaSrl1!"
    )
)

leaf1_net_instance = nokia_srlinux::NetworkInstance(
    device=leaf1,
    name="default",
    interface=[
        nokia_srlinux::network_instance::Interface(name="ethernet-1/1.0"),
    ],
    protocols=nokia_srlinux::network_instance::Protocols(
        ospf=nokia_srlinux::network_instance::protocols::Ospf(
            instance=nokia_srlinux::network_instance::protocols::ospf::Instance(
                name="1",
                router_id="10.20.30.210",
                admin_state="enable",
                version="ospf-v2",
                area=nokia_srlinux::network_instance::protocols::ospf::instance::Area(
                    area_id="0.0.0.0",
                    interface=nokia_srlinux::network_instance::protocols::ospf::instance::area::Interface(
                        interface_name="ethernet-1/1.0",
                    ),
                ),
            ),
        ),
    ),
)

Namespace inference allows us to drop the long namespaces. In this model the context is very clear, so the explicit namespaces are more clutter than added value. Dropping them makes the constructor tree much more readable.

While the namespace qualifiers are dropped from the constructor statements, the rest of the model, including the imports, remains the same. The imports bring the relevant constructors into scope, which is what allows the compiler to infer the correct types.

leaf1_net_instance = nokia_srlinux::NetworkInstance(
    device=leaf1,
    name="default",
    interface=[
        Interface(name="ethernet-1/1.0"),
    ],
    protocols=Protocols(
        ospf=Ospf(
            instance=Instance(
                name="1",
                router_id="10.20.30.210",
                admin_state="enable",
                version="ospf-v2",
                area=Area(
                    area_id="0.0.0.0",
                    interface=Interface(
                        interface_name="ethernet-1/1.0",
                    ),
                ),
            ),
        ),
    ),
)