Component-based Integration of Interconnected Vehicle Architectures
Paper accompanying website
Alexander David Hellwig, Stefan Kriebel, Evgeny Kusmenko, and Bernhard Rumpe

Contents for Supplementary Material according to Paper Outline:
I Intro
II Background
III Running Example and Problem Statement
IV Tag-based multi-platform code generation
VI Evaluation
Middleware Modeling Toolchain and Simulation Demo using CoinCar Simulator:
1 Intro
This website adds additional information for each section of the paper.

2 Background
The EmbeddedMontiArc project can be found here and the documentation is located here. Some referenced projects are:

3 Running Example and Problem Statement
The MontiCore Symbol Management Infrastructure (SMI) parses our EmbeddedMontiArc model as well as the tag file containing all RosConnections and creates a single consistent model representation:

EmbeddedMontiArc model:

Tag file containing RosConnections:

Graphical model representation:

4 Tag-based multi-platform code generation
The repositories of the referenced generators can be found under:

An overview of the generated files for the IntersectionController:

According to the tag file, no OpenDaVinci adapter needs to be generated here; the corresponding project is listed anyway to underline the multi-target capability of the approach.
The source code that is generated for the RosAdapter of the IntersectionController is structured as seen below:

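To give a rough idea of what such an adapter contains, the following is a minimal hand-written C++ sketch, not the actual generated code; all component, port, and topic names are hypothetical. It subscribes to the topics named in the tag file, copies the message payloads into the component's input ports, calls execute(), and publishes the resulting output ports:

#include <ros/ros.h>
#include <std_msgs/Float64.h>
#include <std_msgs/Bool.h>
#include "IntersectionController.h"          // hypothetical header of the generated component

static IntersectionController controller;    // assumed generated component instance

// Callback copies the incoming message payload into the component's input port.
void velocityCallback(const std_msgs::Float64::ConstPtr& msg) {
    controller.velocityIn = msg->data;       // hypothetical input port field
}

int main(int argc, char** argv) {
    ros::init(argc, argv, "intersection_controller_adapter");
    ros::NodeHandle nh;

    // Subscriptions and publications correspond to the RosConnection tags of the ports.
    ros::Subscriber sub = nh.subscribe("/sim/velocity", 1, velocityCallback);
    ros::Publisher  pub = nh.advertise<std_msgs::Bool>("/sim/stop", 1);

    ros::Rate rate(10);                       // the execution frequency is an assumption
    while (ros::ok()) {
        ros::spinOnce();                      // process incoming messages
        controller.execute();                 // run one computation step of the component
        std_msgs::Bool out;
        out.data = controller.stopOut;        // hypothetical output port field
        pub.publish(out);
        rate.sleep();
    }
    return 0;
}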
5 Evaluation


The experiments presented in this section are preconfigured in the following virtual machine with all needed software preinstalled. You are invited to see our tools in action: EmbeddedMontiArc Studio - preconfigured Ubuntu VM.
Instructions
Import the .ova file into your virtual machine software (we used VirtualBox), start the VM, and follow the instructions in the README_VM.pdf located on the Desktop.

Model
The actual model used in the evaluation can be found here. Its main file is System.emam, while the ROS middleware model is contained in RosConnections.tag, available here. The IntersectionController is located in this artifact.

Deep Driving Example (not covered in Paper)
Deep Driving Architectures using EmbeddedMontiArc, Deep Learning, and the presented Middleware Approach:

Detailed instructions on how to use EmbeddedMontiArc and MontiAnna to create deep-learning-based robot software are given here.
The workflow of the system design with EmbeddedMontiArc + MontiAnna + Middleware is depicted in the following diagram.

The generation workflow is depicted next:

MontiAnna is the umbrella term for a set of frameworks containing two languages and several code generators. The modules are:
- CNNArch: the language to define the architecture of a deep artificial neural network. As the framework was originally intended for CNNs, the name still contains the term; however, MontiAnna can handle any kind of deep layered network.
- CNNTrain: the training language to define the training hyperparameters, loss function, etc.
- CNNArch2MXNet: the MontiAnna-to-MXNet compiler. MXNet is a deep learning framework widely used in industry, e.g. by Amazon. Our compiler produces C++ code for deployment and Python code for training. Furthermore, we provide CMake files to facilitate the final compilation of the generated MXNet code.
The composed language governing the sub-languages is EmbeddedMontiArcDL.
The composed generator, EMADL2CPP, compiles the architecture model, the MontiMath behavior, and the MontiAnna deep neural networks to C++ code together with the corresponding CMake files for building the executable.
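As an illustration of the shape of such generated code (a sketch with hypothetical names, not the actual EMADL2CPP output), an atomic component typically becomes a C++ class whose ports are public fields and whose behavior is wrapped in an execute() method; Armadillo is assumed here as the underlying linear algebra library:

#include <armadillo>   // linear algebra backend assumed for this sketch

// Hypothetical sketch of a generated atomic component with one vector input and one scalar output.
class VelocityEstimator {
public:
    arma::colvec positionIn;   // input port
    double velocityOut;        // output port

    void init() {
        positionIn = arma::zeros<arma::colvec>(2);
        velocityOut = 0.0;
    }

    void execute() {
        // The body of execute() is derived from the MontiMath behavior model;
        // for a deep learning component it would instead invoke the trained network.
        velocityOut = arma::norm(positionIn);
    }
};

The generated CMake files then build such classes, together with the middleware adapters, into the final executable.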
To demonstrate our methodology in action, we developed a self-driving vehicle software based on the direct perception principle.
The complete project is available here. The model sources can be found under src/main/dp. The main component is stored as plain text in the Mastercomponent.emadl file. Subcomponents can be found in the subcomponents subdirectory. In particular, the deep learning component is stored in Dpnet.emadl and its training is specified in Dpnet.cnnt.
The system uses a deep neural network to extract a dozen scalar affordance indicators from a front camera image, e.g. the distance to the car ahead or the distance to the lane marking.
The affordance indicators extracted by the neural network are then fed into Kalman filters and a controller written in MontiMath, which in turn computes the actuator commands, i.e. steering, acceleration, and braking.
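For intuition, here is a minimal, self-contained C++ sketch of this filtering-and-control step for a single affordance indicator (a 1-D Kalman filter followed by a proportional acceleration law); all constants and names are illustrative and not taken from the actual model:

#include <cstdio>

// 1-D Kalman filter for one scalar affordance indicator (illustrative constants).
struct Kalman1D {
    double x;   // state estimate
    double p;   // estimate variance
    double q;   // process noise variance
    double r;   // measurement noise variance

    double update(double z) {
        p += q;                     // predict: variance grows by the process noise
        double k = p / (p + r);     // Kalman gain
        x += k * (z - x);           // correct the estimate with measurement z
        p *= (1.0 - k);             // shrink the variance accordingly
        return x;
    }
};

int main() {
    Kalman1D distFilter{20.0, 1.0, 0.01, 0.5};               // distance to the car ahead [m]
    const double desiredGap = 15.0;                           // illustrative target gap [m]
    double measurements[] = {20.3, 19.8, 20.9, 18.7, 19.5};   // noisy network outputs [m]
    for (double z : measurements) {
        double dist  = distFilter.update(z);
        double accel = 0.2 * (dist - desiredGap);             // proportional acceleration command
        std::printf("filtered = %.2f m, accel = %.2f m/s^2\n", dist, accel);
    }
    return 0;
}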
The graphical component and connector architecture of the autonomous vehicle is depicted below. The Deep Learning component is highlighted in violet. Other components are either implemented in MontiMath or are compositions of subcomponents. Of course, it is possible to have multiple deep learning components in an architecture:

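To make the component-and-connector idea concrete in code, the following C++ sketch (hand-written, with hypothetical port names and structure; only the file names Mastercomponent and Dpnet are taken from the project) shows how a composed component wires a deep learning subcomponent to a MontiMath controller subcomponent: connectors become simple field copies executed in data-flow order:

#include <armadillo>

struct Dpnet {                              // deep learning subcomponent (stub)
    arma::cube imageIn;                     // camera image input port
    arma::colvec affordanceOut;             // affordance indicator output port
    void execute() { /* forward pass of the trained network */ }
};

struct DrivingController {                  // MontiMath subcomponent (stub)
    arma::colvec affordanceIn;
    double steeringOut = 0.0;
    double brakesOut   = 0.0;
    void execute() { /* controller math */ }
};

struct Mastercomponent {                    // composed component
    arma::cube imageIn;                     // outer input port
    double steeringOut = 0.0;               // outer output ports
    double brakesOut   = 0.0;

    Dpnet dpnet;
    DrivingController controller;

    void execute() {
        dpnet.imageIn = imageIn;                        // connector: outer port -> subcomponent
        dpnet.execute();
        controller.affordanceIn = dpnet.affordanceOut;  // connector between subcomponents
        controller.execute();
        steeringOut = controller.steeringOut;           // connectors: subcomponent -> outer ports
        brakesOut   = controller.brakesOut;
    }
};

int main() {
    Mastercomponent master;
    master.imageIn = arma::zeros<arma::cube>(210, 280, 3);  // dummy camera frame (arbitrary size)
    master.execute();
    return 0;
}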