Magne Jørgensen, Simula Metropolitan Center for Digital Engineering, Oslo, Norway
In this paper, we propose improvements in how estimation bias, e.g., the tendency towards under-estimating the effort, is measured. The proposed approach emphasizes the need to know what the estimates are meant to represent, i.e., the type of estimate we evaluate, and the need for a match between the type of estimate given and the bias measure used. We show that even perfect estimates of the mean effort will not lead to an expectation of zero estimation bias when applying the frequently used bias measure: (actual effort – estimated effort)/actual effort. This measure will instead reward under-estimates of the mean effort. We also provide examples of bias measures that match estimates of the mean and the median effort, and argue that there are, in general, no practical bias measures for estimates of the most likely effort. The paper concludes with implications for the evaluation of bias of software development effort estimates.
Effort estimates, measurement of estimation overrun, proper measurement.
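The effect described in this abstract can be illustrated numerically. The following sketch (illustrative, not taken from the paper) applies the bias measure (actual − estimated)/actual to a right-skewed effort distribution in which every estimate equals the true mean effort:

```python
import random

random.seed(0)

# Simulate right-skewed actual efforts (typical for software tasks)
# with a lognormal distribution.
actuals = [random.lognormvariate(4.0, 0.8) for _ in range(100_000)]
mean_effort = sum(actuals) / len(actuals)

# A "perfect" estimate of the mean effort for every task.
estimate = mean_effort

# The frequently used bias measure: (actual - estimated) / actual.
biases = [(a - estimate) / a for a in actuals]
mean_bias = sum(biases) / len(biases)

# By Jensen's inequality, E[1/A] > 1/E[A] for a non-degenerate A, so
# E[(A - mu)/A] = 1 - mu*E[1/A] < 0: the measure reports systematic
# over-estimation even though the estimates are unbiased, i.e. a zero
# score would require under-estimating the mean.
print(f"mean bias: {mean_bias:.3f}")
```

Running this prints a clearly negative mean bias, matching the paper's claim that the measure rewards under-estimates of the mean effort.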
Ning Luo and Yue Xiong, Visual Computing Group, Intel Asia-Pacific Research & Development Ltd, Shanghai, China
Modern platform software delivery cost increases rapidly, as delivery usually needs to align with the time-to-market of many hardware and silicon products, keep pace with feature evolution, and involves hundreds of engineers. In this paper, citing one ultra-large-scale software product, the Intel Media Driver, as an example, we analyze the hotspots leading to delivery cost increases in continuous software development, the challenges they pose to our software design, and our experience in shrinking software delivery cost through the targeted design enhancements. We expect the identified hotspots to help more researchers form corresponding research agendas, and the experiences shared to help practitioners apply similar enhancements.
Software Delivery Cost Control, Predictable Software Evolution, Streamlined Parallel Development, Continuous Integration.
Linlin Zhang and Ning Luo, Visual Computing Group, Intel Asia-Pacific Research & Development Ltd, Shanghai, China
Grey-box fuzzing is one of the most successful methods for automatic vulnerability detection. However, conventional grey-box fuzzers like AFL can only perform fuzzing against the whole input and tend to spend more time on smaller seeds with lower execution time, which greatly impacts fuzzing efficiency for complicated input types. In this work, we introduce an intelligent grey-box fuzzer for the Intel media driver, MediaFuzzer, which can perform effective fuzzing based on selective fields of complicated inputs. Also, with a novel calling-depth-based power schedule biased toward seeds in the corpus that lead to deeper calling chains, it dramatically improves vulnerability exposure (~6.6 times more issues exposed) and fuzzing efficiency (~2.7 times more efficient) against the baseline AFL for the Intel media driver, with almost negligible overhead.
vulnerability detection, automated testing, fuzzing, Grey box fuzzer.
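A calling-depth-based power schedule of the kind this abstract describes can be sketched as follows. The function and field names are hypothetical, not MediaFuzzer's actual implementation; the idea is only that seeds whose executions reach deeper call chains receive more mutation energy:

```python
def assign_energy(seeds, base_energy=100):
    """Toy power schedule: seeds are dicts with a measured maximum
    'call_depth'; deeper-reaching seeds get proportionally more energy
    (i.e., more mutations per fuzzing round)."""
    max_depth = max(s["call_depth"] for s in seeds)
    schedule = {}
    for s in seeds:
        # Scale energy linearly with relative calling depth.
        weight = s["call_depth"] / max_depth
        schedule[s["name"]] = int(base_energy * (0.5 + weight))
    return schedule

seeds = [
    {"name": "shallow", "call_depth": 3},
    {"name": "deep", "call_depth": 12},
]
print(assign_energy(seeds))
```

In this sketch the deep-reaching seed receives roughly twice the energy of the shallow one, biasing the fuzzer toward code paths buried deeper in the driver.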
Mridula Prakash, Department of Chief Technology Officer, Mysore, India
As the pace of design and development of new software picks up, automated processes are playing an increasingly vital role in ensuring seamless and continuous integration. With the importance of software build automation tools taking centerstage, the present paper undertakes a comparative analysis of three available solutions: Maven, Gradle and Bazel, evaluating their efficiency and performance in terms of software build automation and deployment. The aim of this study is also to provide the reader with a complete overview of the selected build automation tools and the relevant features and capabilities of interest. In addition, the paper leads to a broader view on the future of the build automation tools ecosystem.
Automated process, Build automation tools, Maven, Gradle, Bazel.
Yew Kee Wong, School of Information Engineering, HuangHuai University, Henan, China
In the information era, enormous amounts of data have become available to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle and extract value and knowledge from these datasets. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Deep learning extends this capability by applying advanced multi-layered neural network techniques to big data. This paper aims to analyse some of the different machine learning and deep learning algorithms and methods, as well as the opportunities provided by AI applications in various decision-making domains.
Artificial Intelligence, Machine Learning, Deep Learning, Big Data.
Priyanka Addagudi and Wendy MacCaull, Department of Computer Science, St. Francis Xavier University, Canada
Question Answering (QA), a branch of Natural Language Processing (NLP), automates the retrieval of answers to natural language questions from databases or documents without human intervention. Motivated by the COVID-19 pandemic and the increasing awareness of Social Determinants of Health (SDoH), we built a prototype QA system that combines NLP, semantics, and information retrieval (IR) systems with a focus on SDoH and COVID-19. Our goal was to demonstrate how such technologies could be leveraged to allow decision-makers to retrieve answers to queries from very large databases of documents. We used documents from the CORD-19 and PubMed datasets, merged the COVID-19 (CODO) ontology with published ontologies for homelessness and gender, and used the mean average precision metric to evaluate the system. Given the interdisciplinary nature of this research, we provide details of the methodologies used. We anticipate that QA systems can play a significant role in providing information leading to improved health outcomes.
Question Answering, Ontology, Information Retrieval, Social Determinants of Health, COVID-19.
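The evaluation metric named in the abstract, mean average precision, can be computed as follows. This is a generic sketch with toy data, not the paper's code:

```python
def average_precision(ranked, relevant):
    """AP for one query: 'ranked' is the system's ordered list of doc
    ids, 'relevant' is the set of ids judged relevant."""
    hits, score = 0, 0.0
    for i, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            score += hits / i  # precision at this recall point
    return score / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """runs: list of (ranked, relevant) pairs, one per query."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

# Toy example with two queries.
runs = [
    (["d1", "d2", "d3"], {"d1", "d3"}),  # AP = (1/1 + 2/3)/2
    (["d4", "d5"], {"d5"}),              # AP = (1/2)/1
]
print(round(mean_average_precision(runs), 3))
```

Averaging the per-query average precisions rewards systems that rank relevant documents near the top across all queries, which is why MAP is a standard choice for evaluating retrieval-backed QA.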
Chun-Hsien Lin and Pu-Jen Cheng, Department of Computer Science & Information Engineering, National Taiwan University, Taipei, Taiwan
Currently, the most common approach to unsupervised word encoding in the form of vectors is the word embedding model. Converting the vocabulary in a legal document into a word embedding model facilitates subjecting legal documents to machine learning, deep learning, and other algorithms and subsequently performing the downstream tasks of natural language processing, such as document classification, contract review, and machine translation. The most common and practical approach to accuracy evaluation with the word embedding model uses a benchmark set with linguistic rules or the relationship between words to perform analogy reasoning via algebraic calculation. This paper proposes establishing a 1,134-question Legal Analogical Reasoning Questions Set (LARQS) from the 2,388 Chinese Codex corpus using five kinds of legal relations, which are then used to evaluate the accuracy of the Chinese word embedding model. Moreover, we discovered that legal relations might be ubiquitous in the word embedding model.
Legal Word Embedding, Chinese Word Embedding, Word Embedding Benchmark, Legal Term Categories.
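Analogy reasoning via algebraic calculation, as used in benchmarks like LARQS, follows the pattern a : b :: c : ?. A minimal sketch with invented 3-dimensional vectors for four Chinese legal terms (creditor, debtor, lessor, lessee); in practice the vectors would come from a trained embedding model:

```python
import math

# Toy embedding space; the vectors below are made up for illustration.
vecs = {
    "债权人": [0.9, 0.1, 0.0],   # creditor
    "债务人": [0.1, 0.9, 0.0],   # debtor
    "出租人": [0.8, 0.2, 0.3],   # lessor
    "承租人": [0.0, 1.0, 0.3],   # lessee
}

def cos(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def analogy(a, b, c):
    """Solve a : b :: c : ? by the vector algebra b - a + c and
    returning the nearest remaining vocabulary word."""
    target = [vb - va + vc for va, vb, vc in zip(vecs[a], vecs[b], vecs[c])]
    candidates = [w for w in vecs if w not in (a, b, c)]
    return max(candidates, key=lambda w: cos(vecs[w], target))

# creditor : debtor :: lessor : ?  -> expect lessee
print(analogy("债权人", "债务人", "出租人"))
```

A benchmark like LARQS scores an embedding model by the fraction of such analogy questions for which the algebraically nearest word is the expected answer.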
Kunhao Li, Bo Lang, Hongyu Liu and Shaojie Chen, State Key Laboratory of Software Development Environment, Beijing, China
Network traffic protocol and service classification are the foundations of network quality of service (QoS) and security technologies, and they have attracted increasing attention in recent years. At present, encryption technologies such as SSL/TLS are widely used in network transmission, so traditional traffic classification technologies cannot analyze encrypted packet payloads. This paper first proposes a two-level application-layer protocol classification model that combines packet and session information to address this problem. The first level extracts packet features, such as the entropy and randomness of the ciphertext, and then classifies the protocol. The second level regards the session as a unit and determines the final classification result by voting on the results of the first level. Many application-layer protocols correspond to only one specific service, but HTTPS is used for many services. For the HTTPS service classification problem, we combine session features and packet features and establish a service identification model based on CNN-LSTM. We constructed a dataset in a laboratory environment. The experimental results show that the proposed method achieves 99.679% and 96.27% accuracy in SSL/TLS application-layer protocol classification and HTTPS service classification, respectively. The service classification model thus performs better than other existing methods.
SSL/TLS, HTTPS, Protocol Classification, Service Classification.
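The first-level packet feature mentioned above, the entropy of the payload, is typically computed as Shannon entropy over byte frequencies. A generic sketch, not the paper's implementation:

```python
import math
from collections import Counter

def byte_entropy(payload: bytes) -> float:
    """Shannon entropy of a packet payload in bits per byte.
    Well-encrypted payloads approach the maximum of 8 bits/byte;
    plaintext protocols score noticeably lower."""
    if not payload:
        return 0.0
    counts = Counter(payload)
    n = len(payload)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

plaintext = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
ciphertext = bytes(range(256))  # stand-in for uniformly random bytes
print(round(byte_entropy(plaintext), 2), round(byte_entropy(ciphertext), 2))
```

The gap between the two values is what lets a classifier use entropy as a cheap signal that a payload is ciphertext rather than cleartext protocol data.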
Meghashyam Ashwathnarayan, Vaishnavi J, Ananth Kamath and Jayakrishna Guddeti, Infineon Technologies India Pvt Ltd, 11MG Road, Bengaluru, Karnataka
In automotive electronics, new technologies are being integrated into the basic framework, creating ways for new software-defined architectures. Virtualization is one of the most discussed technologies that will offer functionality growth in automobile architectures. This paper introduces the concept of validating testcases from multiple IPs on a virtualised framework and also investigates the feasibility of implementing a protection mechanism on the memory segment dedicated to a virtual machine (VM). We describe a proof-of-concept which can be used to promote the use of virtualisation to extend the coverage of post-silicon validation. Experimental results are presented as a quantitative evaluation of using virtualization for different testcase scenarios.
Virtualisation, Automotive, Multi-Core Systems, Hypervisor, Post Silicon Validation.
Mathieu Febvay and Ahmed Bounekkar, Université de Lyon, Lyon 2, ERIC UR 3083, F69676 Bron Cedex, France
Each new generation of smartphone gains capabilities that increase performance and power efficiency, allowing us to use them for increasingly complex calculations such as deep learning. In this paper, four Android deep learning inference frameworks (TFLite, MNN, NCNN and PyTorch) were implemented to evaluate the most recent generation of Systems on a Chip (SoC): the Samsung Exynos 2100 and the Qualcomm Snapdragon 865+ and 865. Our work focused on the image classification task using five state-of-the-art models. The 50,000 images of the ImageNet 2012 validation subset were inferred. Latency and accuracy were measured under various scenarios, such as CPU, OpenCL and Vulkan, with and without multi-threading. Power efficiency and a real-world use case were evaluated from these results, as we ran the same experiment on the devices' camera stream until they had consumed 3% of their battery. Our results show that low-level software optimizations, image pre-processing algorithms, the conversion process and cooling design have an impact on latency, accuracy and energy efficiency.
Deep Learning, On-device inference, Image classification, Mobile, Quantized Models.
Yosi Ben-Asher, Western Digital Tefen and The University of Haifa CS, Nidal Faour, Western Digital Tefen, Ofer Shinaar, Western Digital Tefen
We consider the problem of selecting an optimized subset of inlinings (replacing a call to a function by its body) that minimizes the resulting code size. Frequently, in embedded systems, the program's executable file must fit into a small memory. In such cases, the compiler should generate as small an executable as possible. In particular, we seek to improve the code size obtained by the LLVM inliner executed with the -Oz option. One important aspect is whether or not this problem requires a global solution that considers the full span of the call graph, or a local solution (as is the case with the LLVM inliner) that decides whether to apply inlining to each call separately, based on the expected code-size improvement. We have implemented a global type of inlining algorithm called Mutual Inlining that selects the next call-site (f() calls g()) to be inlined based on its global properties, namely:
Mridula Prakash, Department of Chief Technology Officer, Mysore, India
The aim of this paper is to provide details on the Open Asymmetric Multi-Processing (OpenAMP) framework in mixed critical systems. OpenAMP is an open-source software framework that provides software components for working with asymmetric multiprocessing (AMP) systems. The paper provides in-depth details on how to use OpenAMP in multicore systems while designing safety-critical projects.
OpenAMP, Multicore, Mixed Critical Systems, Embedded Systems.
Chidambaram Baskaran, Pawan Nayak, R.Manoj, Sampath Shantanu and Karuppiah Aravindhan, Texas Instruments India Ltd, Bangalore, India
Safety needs of real-time embedded devices are becoming a must in the automotive and industrial markets. The BootROM firmware, being part of the device, drives the need for the firmware to adhere to the required safety standards for these end markets. Most software practices for safety compliance assume that development is carried out once the devices are available. The BootROM firmware development discussed in this paper involves meeting safety compliance needs while the device on which it is to be executed is being designed concurrently. In this case, firmware development is done primarily on pre-silicon development environments, which are slow and to which developers have limited access. These aspects present a unique challenge to developing safety-compliant BootROM firmware. Hence, it is important to understand the challenges and identify the right methodology for ensuring that the firmware meets safety compliance with the right level of efficiency. The authors share their learnings from three safety-compliant BootROM firmware developments and propose an iterative development flow that includes generating safety artifacts iteratively. Concurrent firmware development alongside device design may sound risky for iterative development, and one may suspect it leads to more effort, but our learnings suggest that iterative development is ideal. None of the three BootROM firmware developments has so far resulted in any critical bug that required another update of the firmware and refabrication of the device.
Concurrent development, Firmware development, Safety compliance, Pre-silicon software development.
Prasang Gupta, Antoinette Young and Anand Rao, AI and Emerging Technologies, PwC
Cargo loss/damage is a very common problem faced by almost any business with a supply chain arm, leading to major problems like revenue loss and reputation tarnishing. This problem can be solved by employing an asset and impact tracking solution. This would be more practical and effective for high-cost cargo in comparison to low-cost cargo due to the high costs associated with the sensors and overall solution. In this study, we propose a low-cost solution architecture that is scalable, user-friendly, easy to adopt and is viable for a large range of cargo and logistics systems. Taking inspiration from a real-life use case we solved for a client, we also provide insights into the architecture as well as the design decisions that make this a reality.
Asset tracking, Logistics, Cargo loss, Cargo damage, Impact sensor, Accelerometer sensor, Low-cost solution, No code AEP (Application Enablement Platform).
Rishabh Garg, Department of Electrical & Electronics Engineering, Birla Institute of Technology & Science, K.K. Birla Goa Campus, India
Blockchain offers the possibility of ousting outdated identity systems and eliminating intermediaries. Identity management through blockchain can allow individuals to take ownership of their identity by creating a global ID that serves multiple purposes. For user security and ledger consistency, asymmetric cryptography and distributed consensus algorithms can be employed. Blockchain technology, by virtue of its key features of decentralization, persistency, anonymity and auditability, would save cost and increase efficiency. Further, a digital identity platform would benefit citizens by allowing them to save time when accessing or providing their personal data and records. Instead of being required to show up to services in person to produce a physical form of ID, users could be provided with a digital ID through a personal device, such as a smartphone, that can be shared with services conveniently and securely through a DLT.
Blockchain, Decentralized Apps, Data Portability, Decentralized Public Key Infrastructure (DPKI), DID, Ethereum, Hash, IAM framework, Identity Management System (IMS), IPFS, Private Key, Public Key, Revocation, SSI, Storage Variables, Validation, Zero Knowledge Proof.
João Victor Barcellos Machado Correia, Law Department, Vale do Cricaré University Center (UVC), São Mateus, Brazil
When we analyze the bitcoin mining process, it is first necessary to understand whether the bitcoin protocol truly guarantees, against everyone, ownership of the cryptocurrency. In this sense, starting from Kantian theory, it is clear that the factual arrangement of the technology does not guarantee ownership, which can only be guaranteed through the state. Moreover, there are practical cases of protocol failure. Thus, in the face of the immutability of the blockchain protocol, it remains to legally solve the problem of original appropriation in the bitcoin mining process, which is done through the theory of cryptographic fruits. Broadly speaking, the mined bitcoin should be viewed as a new kind of civil fruit of the mining device. As a result, by owning the mining machine, one ends up owning what it mines. For all that, the theoretical structure of cryptographic fruits resolves the legal flaws of the bitcoin protocol.
Original Appropriation, Bitcoin Mining Process, Cryptocurrency, Blockchain, Cryptographic Fruits.
Riccardo Occa and Francesco Bertolotti, LIUC – Università Cattaneo, Corso G. Matteotti 22, Castellanza (VA), Italy
In the context of the increasing diffusion of technologies related to the world of Industry 4.0 and the Internet of Things in particular, we have developed an agent-based model to simulate the effect of IoT diffusion in companies and verify potential benefits and risks. The model shows how IoT diffusion has the potential to influence the market by supporting both quality and cost improvements. The results of the model also confirm the potential for significant benefits for businesses, suggesting the opportunity to support the introduction and application of IoT, and clearly show how the use of IoT can be a key strategic choice in competitive market contexts focused on cost strategies to increase business performance and prospects.
IoT, agent-based modelling, simulation, adoption, risk, blockchain.
Manga, I.1, Garba, E. J.2 and Ahmadu, A. S.1, 1Department of Computer Science, Adamawa State University, Mubi, Nigeria, 2Department of Computer Science, Modibbo Adama University, Yola, Nigeria
The growth and development of modern information and communication technologies has led the demand for data compression to increase rapidly. Recent developments in computer science and information technology have led to the constant generation of large amounts of data. Data compression is an important aspect of information processing. Data that can be compressed include image, video, textual and audio data. Image compression refers to the process of representing an image using a smaller number of bits. Basically, two types of data compression exist. The major aim of lossless image compression is to reduce the redundancy and irrelevance of image data for better storage and transmission. Lossy compression schemes lead to high compression ratios while the image loses quality. However, there are many cases where loss of image quality or information due to compression needs to be avoided, such as medical, artistic and scientific images. Efficient lossless compression therefore becomes paramount, although lossy compressed images are usually satisfactory in diverse cases. The objectives of the research were to explore existing lossless image compression algorithms; to design an efficient and effective lossless image compression technique based on LZW-BCH lossless image compression to reduce redundancies in the image; and to demonstrate image enhancement using a Gaussian filter algorithm. A secondary method of data collection was used to collect the data, and standard research images were used to validate the new scheme. To achieve these objectives, the Java programming language (JDK 8.0) was used to develop the compression scheme, and MATLAB was used to analyze the space and time complexity of the existing compression scheme against the enhanced scheme.
From the findings, it was revealed that the average compression ratio of the enhanced lossless image compression scheme was 1.6489 and the average bit per pixel was 5.416667.
Lossless, Image, Compression, Processing.
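The LZW half of the LZW-BCH scheme builds a dictionary of recurring byte sequences as it scans the data. A textbook sketch of the compression step (illustrative Python, not the authors' Java implementation):

```python
def lzw_compress(data: bytes):
    """Classic LZW: grow a dictionary of byte sequences while scanning
    the input and emit one code per longest known sequence."""
    table = {bytes([i]): i for i in range(256)}  # all single bytes
    next_code = 256
    w = b""
    out = []
    for b in data:
        wc = w + bytes([b])
        if wc in table:
            w = wc  # keep extending the current match
        else:
            out.append(table[w])      # emit code for the known prefix
            table[wc] = next_code     # register the new sequence
            next_code += 1
            w = bytes([b])
    if w:
        out.append(table[w])
    return out

codes = lzw_compress(b"ABABABAB")
print(codes)  # repeated patterns collapse into dictionary codes
```

Eight input bytes compress to five codes here; the more repetition the input contains, the more the dictionary codes pay off, which is why LZW is a natural building block for reducing image redundancy.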
Yang Liu1, Evan Gunnell2, Yu Sun2, Hao Zheng3, 1Department of Mechanical and Aerospace Engineering, George Washington University, Washington, DC 20052, 2California State Polytechnic University, Pomona, CA, 91768, 3ASML, Wilton, CT 06897
Autonomous driving is one of the most popular technologies in artificial intelligence. Collision detection is an important issue in autonomous driving, as it is directly related to driving safety. Many collision detection methods have been proposed, but they all have certain limitations and cannot fully meet the requirements of autonomous driving. The camera is one of the most popular sensors for detecting objects. Obstacle detection with current cameras is mostly accomplished with two or more cameras (binocular technology) or in conjunction with other sensors (such as a depth camera) to achieve distance detection. In this paper, we propose an algorithm to detect obstacle distances from photos or videos taken by a single camera.
Autonomous driving, computer vision, machine learning, artificial intelligence, distance detection, collision detection.
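Single-camera distance estimation commonly rests on the pinhole-camera model with an assumed real-world object size. The sketch below shows that underlying geometry; the numbers are illustrative assumptions, not the paper's calibration or algorithm:

```python
def estimate_distance(focal_px: float, real_height_m: float,
                      pixel_height: float) -> float:
    """Pinhole-camera similar-triangles estimate:
    distance = f * H / h, where f is the focal length in pixels,
    H the assumed real object height in metres, and h the object's
    height in the image in pixels."""
    return focal_px * real_height_m / pixel_height

# Example: a detected car (assumed real height 1.5 m) spans 100 px
# in an image from a camera with a 700 px focal length.
print(estimate_distance(700, 1.5, 100))  # 10.5 metres
```

The closer the obstacle, the taller it appears in pixels, so the estimated distance shrinks; the weak point of any such monocular scheme is the assumed real-world size, which is where learned object detection helps.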
Yuting Xue, Heng Zhou, Yuxuan Ding, Xiao Shan, School of Electronic Engineering, Xidian University, Xi’an, China
In this paper, we propose to boost text-to-image synthesis through Adaptive Learning and Generating Generative Adversarial Networks (ALG-GANs). First, we propose an adaptive forgetting mechanism in the generator to reduce error accumulation and learn knowledge flexibly in the cascade structure. Besides, to avoid the mode collapse caused by strongly biased supervision, we propose a multi-task discriminator that uses weak-supervision information to guide the generator more comprehensively and maintain semantic consistency in the cascade generation process. To avoid the refinement difficulty caused by bad initialization, we judge the quality of the initialization before further processing: the generator re-samples the noise and re-initializes bad initializations to obtain good ones. All the above contributions have been integrated in a unified framework, which is an adaptive forgetting, drafting and comprehensive guiding based text-to-image synthesis method with hierarchical generative adversarial networks. The model is evaluated on the Caltech-UCSD Birds 200 (CUB) dataset and the Oxford 102 Category Flowers (Oxford) dataset with standard metrics. The results on Inception Score (IS) and Fréchet Inception Distance (FID) show that our model outperforms the previous methods.
Text-to-Image Synthesis, Generative Adversarial Network, Forgetting Mechanism, Semantic Consistency.
Hazirah Bee Yusof Ali and Lili Marziana Abdullah, Kulliyyah of Information and Communication Technology (KICT), International Islamic University Malaysia, Kuala Lumpur, Malaysia
The emergence of big data and the cloud has changed the way companies conduct their businesses. Business and leisure activities are performed in the cloud extensively by the day. The Internet connection is too appealing for companies not to use it. Likewise, many companies have emerged to provide solutions for analysing massive datasets, thus helping to deliver meaningful information to the public. Powerful data analysis tools, coupled with the big storage of the cloud, enable companies to understand their business data and consequently allow them to proceed with proper solutions. Regrettably, the benefits of big data and cloud usage come hand in hand with an abundance of security vulnerabilities. Companies are being hacked, and data are stolen. The terrifying thought of being hacked and vandalized makes companies extra careful before they start storing or trusting their data in the cloud; companies must trust the cloud before putting their data in the hands of a cloud provider. Thus, this research is needed, in which the researcher validated the factors of trust obtained from qualitative data analysis against quantitative data analysis. The study focuses on the trust of big data and the cloud for companies in Malaysia.
Trust, Big Data, Cloud.
Lucas Salvador Bernardo and Robertas Damaševičius, Department of Software Engineering, Kaunas University of Technology, Kaunas, Lithuania
Parkinson's disease is the second most widespread neural impairment in the world. It affects approximately 2 to 3% of the world's population over 65 years of age. Part of the progress of Parkinson's disease happens due to the loss of cells in a brain region called the Substantia Nigra (SN). Nerve cells in this region are responsible for the control of movement and coordination, and the loss of such cells results in the emergence of the motor symptoms characteristic of the disease. However, motor symptoms appear only when brain cells are already damaged, whereas voice impairments appear before the brain cells are affected. This study aims to recognize Parkinson's disease using 22 attributes extracted from 195 voice records, 147 from Parkinson's disease patients and 48 from healthy individuals. The data is passed through a series of pre-processing steps: balancing, where we applied the Synthetic Minority Oversampling Technique (SMOTE) to make the number of records per class equal; train/test segmentation, where the data was divided into 70% for training and 30% for testing; scaling of the data into the interval 0 to 1; amplification, where the values are converted into the interval 0 to 100; and image generation, converting the numerical dataset into an image dataset. The resulting image dataset was then used to train a Visual Geometry Group 11 (VGG11) Convolutional Neural Network (CNN). The proposed solution achieved 93.1% accuracy, 92.31% F1-score, 96% recall and 88.89% precision on the testing dataset. These metrics were compared with other convolutional neural network solutions, such as ResNet34 and MobileNet, showing improved performance compared with those solutions.
Parkinson, VGG11, SMOTE
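Two of the pre-processing steps above, SMOTE balancing and min-max scaling, reduce to simple vector operations. A minimal sketch of their core ideas (illustrative, not the authors' pipeline):

```python
import random

random.seed(1)

def smote_sample(x, neighbor):
    """Core SMOTE step: a synthetic minority sample is a random
    interpolation between a real sample and one of its neighbors."""
    g = random.random()  # gap in [0, 1)
    return [a + g * (b - a) for a, b in zip(x, neighbor)]

def min_max_scale(rows):
    """Scale each feature column into [0, 1]; the paper's pipeline
    then amplifies values into [0, 100] before image generation."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [[(v - l) / (h - l) if h > l else 0.0
             for v, l, h in zip(r, lo, hi)] for r in rows]

minority = [[1.0, 10.0], [2.0, 14.0]]
synthetic = smote_sample(minority[0], minority[1])
scaled = min_max_scale(minority + [synthetic])
print(synthetic, scaled)
```

Because the synthetic point lies on the segment between two real minority samples, it stays inside the minority class's feature region, which is what makes SMOTE safer than naive duplication.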
Iftikhar U. Sikder1 and James J. Ribero2, 1Department of Information Systems, Cleveland State University, USA, 2IBA, University of Dhaka, Bangladesh
The paper examines the bivariate relationship between COVID-19 and temperature time series using Singular Value Decomposition (SVD) and cross-wavelet analysis. The COVID-19 incidence data and the temperature data of the corresponding period were transformed using SVD into significant eigenstates. Wavelet transformation was performed to analyze and compare the frequency structure of the single and bivariate time series. The result provides synchronicity and coherence measures over a range of time periods. Additionally, the wavelet power spectrum, paired wavelet coherence statistics and phase differences were computed. The results suggest statistically significant coherence at various frequencies and indicate complex conjugate dynamic relationships in terms of phases and phase differences.
COVID-19, SVD, Wavelet analysis, Cross-wavelet power, Wavelet coherence.
Mohamed Khalefa, SUNY College at Old Westbury Old Westbury, NY, USA
The proposed framework generates efficient C code for SQL functions by optimizing the memory layout, utilizing compiler optimizations and exploiting function properties. These function properties may be supplied manually or extracted automatically. Our experiments show that our approach yields efficient algorithms that are faster than their counterparts from the state of the art. The ultimate goal of our work is to smoothly integrate imperative and declarative code and to generate efficient code based on function properties and data distributions.
Database, UDF, SQL.
Kekun HU, Gang Dong, Yaqian Zhao, Rengang Li, Jian Zhao, Qichun Cao, Hongbin Yang and Hongzhi Shi, Inspur Electronic Information Industry Co., Ltd., Jinan 250014, China & State Key Laboratory of High-end Server & Storage Technology, Inspur Group Co., Ltd., Jinan 250014, China
Graph partitioning is one of the key technologies for the parallel processing of big graphs. Existing offline partitioning algorithms are too time-consuming to partition big graphs while online ones cannot offer high-quality partitions. To this end, we propose a distributed streaming partitioning method with buffer support. It adopts a multi-loader-multi-partitioner architecture, where multiple loaders read graph data in parallel to accelerate data loading. Each partitioner first buffers and sorts the vertex stream read by the corresponding loader and then assigns vertices in the buffer by using one of our proposed four streaming heuristics with different goals. To further improve the quality of graph partitions, we design a restreaming mechanism. Experimental results on real and synthetic big graphs show that the proposed distributed streaming partitioning algorithm outperforms the state-of-the-art online ones in terms of partition quality and scalability.
Big graph, streaming partitioning, distributed, buffering, restreaming.
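One well-known streaming heuristic of the kind this abstract describes is Linear Deterministic Greedy (LDG), which places each arriving vertex in the partition holding most of its neighbors, damped by how full that partition already is. The paper's own four heuristics are not specified here, so this is an illustrative sketch only:

```python
def ldg_assign(vertex, neighbors, partitions, capacity):
    """LDG: score each partition by shared neighbors times a
    fullness penalty, then place the streamed vertex greedily."""
    def score(p):
        members = partitions[p]
        common = len(members & neighbors)
        return common * (1 - len(members) / capacity)
    best = max(partitions, key=score)
    partitions[best].add(vertex)
    return best

# Stream three vertices of a small path graph into two partitions
# of capacity 4 each.
parts = {0: set(), 1: set()}
ldg_assign("a", set(), parts, 4)
ldg_assign("b", {"a"}, parts, 4)
ldg_assign("c", {"a", "b"}, parts, 4)
print(parts)
```

The fullness factor (1 − |P|/capacity) is what keeps the greedy neighbor-chasing from piling every vertex into one partition, trading a little edge locality for balance, the same tension the restreaming mechanism in the paper revisits with a second pass.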
Lea Matlekovic and Peter Schneider-Kamp, Department of Mathematics and Computer Science, University of Southern Denmark, Odense, Denmark
Linear-infrastructure Mission Control (LiMiC) is an application for autonomous Unmanned Aerial Vehicle (UAV) infrastructure-inspection mission planning, originally developed in a monolithic software architecture. The application calculates routes along the infrastructure based on the user's inputs, the number of UAVs participating in the mission, and the UAVs' locations. The user selects inspection targets on a 2D map, and the application calculates inspection routes as a set of waypoints for each UAV. LiMiC1.0 is the latest application version, migrated from a monolith to microservices and continuously integrated and deployed using DevOps tools like GitLab, Docker, and Kubernetes, in order to facilitate future feature development, enable better traffic management, and improve route calculation processing time. In this paper, we discuss the differences between monolith and microservice architectures to justify our decision to migrate. We describe the methodology for the application's migration and implementation, the technologies we use for continuous integration and deployment, and the microservices' improved performance compared with the monolithic application.
autonomous UAV, mission planning, microservices, Docker, Kubernetes, CI/CD.