Accepted Papers
Measurement of Software Development Effort Estimation Bias: Avoiding Biased Measures of Estimation Bias

Magne Jørgensen, Simula Metropolitan Center for Digital Engineering, Oslo, Norway

ABSTRACT

In this paper, we propose improvements in how estimation bias, e.g., the tendency towards under-estimating the effort, is measured. The proposed approach emphasizes the need to know what the estimates are meant to represent, i.e., the type of estimate we evaluate, and the need for a match between the type of estimate given and the bias measure used. We show that even perfect estimates of the mean effort will not lead to an expectation of zero estimation bias when applying the frequently used bias measure: (actual effort – estimated effort)/actual effort. This measure will instead reward under-estimates of the mean effort. We also provide examples of bias measures that match estimates of the mean and the median effort, and argue that there are, in general, no practical bias measures for estimates of the most likely effort. The paper concludes with implications for the evaluation of bias of software development effort estimates.
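
A minimal simulation sketch (not from the paper) illustrating the central claim, assuming right-skewed actual efforts:

```python
# Simulate skewed actual efforts and evaluate a "perfect" estimate of the
# mean effort under two bias measures.
import numpy as np

rng = np.random.default_rng(0)
actual = rng.lognormal(mean=4.0, sigma=0.8, size=200_000)  # skewed efforts
est = actual.mean()  # a perfect estimate of the mean effort

# The common measure from the abstract: (actual - estimated) / actual.
# Its expectation is 1 - est * E[1/actual] < 0 by Jensen's inequality, so
# even a perfect mean estimate registers as "over-estimation"; only an
# under-estimate of the mean would drive it toward zero.
print(np.mean((actual - est) / actual))   # clearly negative

# A measure that matches mean estimates: (actual - estimated) / estimated.
# Its expectation is zero exactly when the estimate equals the mean effort.
print(np.mean((actual - est) / est))      # approximately zero
```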

KEYWORDS

Effort estimates, measurement of estimation overrun, proper measurement.


Towards Maintainable Platform Software - Delivery Cost Control in Continuous Software Development

Ning Luo and Yue Xiong, Visual Computing Group, Intel Asia-Pacific Research & Development Ltd, Shanghai, China

ABSTRACT

The delivery cost of modern platform software increases rapidly, as development usually has to align with the time-to-market schedules of many hardware and silicon products, keep up with feature evolution, and involve hundreds of engineers. In this paper, citing one ultra-large-scale software product, the Intel Media Driver, as an example, we analyze the hotspots that drive delivery cost up in continuous software development, the challenges they pose to our software design, and our experience in shrinking software delivery cost through targeted design enhancements. We expect the identified hotspots to help more researchers form corresponding research agendas, and the experiences shared to help practitioners who follow apply similar enhancements.

KEYWORDS

Software Delivery Cost Control, Predictable Software Evolvement, Streamlined Parallel Development, Continuous Integration.


Enhanced Grey Box Fuzzing for Intel Media Driver

Linlin Zhang and Ning Luo, Visual Computing Group, Intel Asia-Pacific Research & Development Ltd, Shanghai, China

ABSTRACT

Grey box fuzzing is one of the most successful methods for automatic vulnerability detection. However, conventional grey box fuzzers like AFL can only perform fuzzing against the whole input and tend to spend more time on smaller seeds with lower execution time, which greatly impacts fuzzing efficiency for complicated input types. In this work, we introduce an intelligent grey box fuzzer for the Intel Media Driver, MediaFuzzer, which can perform effective fuzzing based on selective fields of complicated inputs. Also, with a novel calling-depth-based power schedule biased toward seeds that lead to deeper call chains, it dramatically improves vulnerability exposure (~6.6 times more issues exposed) and fuzzing efficiency (~2.7 times more efficient) over the baseline AFL for the Intel media driver, with almost negligible overhead.
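
A minimal sketch of a calling-depth-biased power schedule of the kind described (our reading of the abstract, not MediaFuzzer's code):

```python
# Assign more fuzzing energy (mutations per round) to seeds whose
# executions reach deeper call chains, instead of favouring small, fast
# seeds as a conventional schedule does.
from dataclasses import dataclass

@dataclass
class Seed:
    data: bytes
    call_depth: int        # max calling depth observed executing this seed
    exec_time_ms: float

BASE_ENERGY = 16

def assign_energy(seed: Seed, max_depth_seen: int) -> int:
    """Scale energy by relative calling depth: deeper seeds get more mutations."""
    depth_factor = 1.0 + seed.call_depth / max(1, max_depth_seen)
    return int(BASE_ENERGY * depth_factor)

corpus = [Seed(b"A" * 8, 3, 0.4), Seed(b"B" * 64, 11, 2.1)]
deepest = max(s.call_depth for s in corpus)
for s in corpus:
    print(len(s.data), assign_energy(s, deepest))  # deeper seed gets ~2x energy
```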

KEYWORDS

vulnerability detection, automated testing, fuzzing, Grey box fuzzer.


Know-How of AI, Machine Learning, Deep Learning and Big Data Analysis

Yew Kee Wong, School of Information Engineering, HuangHuai University, Henan, China

ABSTRACT

In the information era, enormous amounts of data have become available to decision makers. Big data refers to datasets that are not only big, but also high in variety and velocity, which makes them difficult to handle using traditional tools and techniques. Due to the rapid growth of such data, solutions need to be studied and provided in order to handle these datasets and extract value and knowledge from them. Machine learning is a method of data analysis that automates analytical model building. It is a branch of artificial intelligence based on the idea that systems can learn from data, identify patterns and make decisions with minimal human intervention. Deep learning, the application of advanced machine learning techniques to big data, can deliver such minimal-intervention analysis at scale. This paper aims to analyse some of the different machine learning and deep learning algorithms and methods, as well as the opportunities provided by AI applications in various decision-making domains.

KEYWORDS

Artificial Intelligence, Machine Learning, Deep Learning, Big Data.


An IR-based QA System for Impact of Social Determinants of Health on COVID-19

Priyanka Addagudi and Wendy MacCaull, Department of Computer Science, St. Francis Xavier University, Canada

ABSTRACT

Question Answering (QA), a branch of Natural Language Processing (NLP), automates the retrieval of answers to natural language questions from databases or documents without human intervention. Motivated by the COVID-19 pandemic and the increasing awareness of Social Determinants of Health (SDoH), we built a prototype QA system that combines NLP, semantics, and IR systems with a focus on SDoH and COVID-19. Our goal was to demonstrate how such technologies could be leveraged to allow decision-makers to retrieve answers to queries from very large databases of documents. We used documents from the CORD-19 and PubMed datasets, merged the COVID-19 (CODO) ontology with published ontologies for homelessness and gender, and used the mean average precision metric to evaluate the system. Given the interdisciplinary nature of this research, we provide details of the methodologies used. We anticipate that QA systems can play a significant role in providing information leading to improved health outcomes.
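
For reference, a minimal sketch of the evaluation metric named above, mean average precision (MAP) over ranked retrieval results (toy data, hypothetical document ids):

```python
# MAP: the mean, over queries, of average precision computed from each
# query's ranked result list and its set of relevant document ids.
def average_precision(ranked_ids, relevant_ids):
    hits, precision_sum = 0, 0.0
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant_ids:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / max(1, len(relevant_ids))

def mean_average_precision(runs):
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

runs = [
    (["d3", "d1", "d7"], {"d3", "d7"}),   # AP = (1/1 + 2/3) / 2
    (["d2", "d5", "d4"], {"d5"}),         # AP = (1/2) / 1
]
print(mean_average_precision(runs))  # ~0.6667
```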

KEYWORDS

Question Answering, Ontology, Information Retrieval, Social Determinants of Health, COVID-19.


SSL/TLS Encrypted Traffic Application Layer Protocol and Service Classification

Kunhao Li, Bo Lang, Hongyu Liu and Shaojie Chen, State Key Laboratory of Software Development Environment, Beijing, China

ABSTRACT

Network traffic protocol and service classification are the foundations of network quality of service (QoS) and security technologies, and they have attracted increasing attention in recent years. At present, encryption technologies such as SSL/TLS are widely used in network transmission, so traditional traffic classification technologies cannot analyze the encrypted packet payload. To address this problem, this paper first proposes a two-level application layer protocol classification model that combines packet and session information. The first level extracts packet features, such as the entropy and randomness of the ciphertext, and then classifies the protocol. The second level regards the session as a unit and determines the final classification result by voting on the results of the first level. Many application layer protocols correspond to only one specific service, but HTTPS is used for many services. For the HTTPS service classification problem, we combine session features and packet features and establish a service identification model based on CNN-LSTM. We construct a dataset in a laboratory environment. The experimental results show that the proposed method achieves 99.679% and 96.27% accuracy in SSL/TLS application layer protocol classification and HTTPS service classification, respectively. The service classification model also performs better than other existing methods.
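
A small sketch of two ingredients described above (simplified, not the authors' code): a ciphertext-entropy packet feature for the first level, and per-session majority voting over packet-level predictions for the second:

```python
import math
from collections import Counter

def byte_entropy(payload: bytes) -> float:
    """Shannon entropy of the payload's byte distribution (bits/byte).
    Ciphertext tends toward the 8.0 maximum; plaintext sits well below."""
    counts = Counter(payload)
    n = len(payload)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def session_vote(packet_predictions: list[str]) -> str:
    """Second level: the session label is the majority packet-level label."""
    return Counter(packet_predictions).most_common(1)[0][0]

print(byte_entropy(bytes(range(256))))           # 8.0, ciphertext-like
print(byte_entropy(b"GET / HTTP/1.1\r\n" * 4))   # ~3.3, plaintext-like
print(session_vote(["tls", "tls", "http"]))      # 'tls'
```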

KEYWORDS

SSL/TLS, HTTPS, Protocol Classification, Service Classification.


Virtualised Ecosystem to Envisage Numerous Applications on an Automotive Microcontroller

Meghashyam Ashwathnarayan, Vaishnavi J, Ananth Kamath and Jayakrishna Guddeti, Infineon Technologies India Pvt Ltd, 11 MG Road, Bengaluru, Karnataka

ABSTRACT

In automotive electronics, new technologies are being integrated into the basic framework, paving the way for new software-defined architectures. Virtualization is one of the most discussed of these technologies, promising functionality growth in automobile architectures. This paper introduces the concept of validating test cases from multiple IPs on a virtualised framework and also investigates the feasibility of implementing a protection mechanism on the memory segment dedicated to a virtual machine (VM). We describe a proof of concept that can be used to promote the use of virtualisation to extend the coverage of post-silicon validation. Experimental results are presented as a quantitative evaluation of using virtualization for different test case scenarios.

KEYWORDS

Virtualisation, Automotive, Multi-Core Systems, Hypervisor, Post Silicon Validation.


Deep Learning Frameworks Evaluation for Image Classification on Resource Constrained Device

Mathieu Febvay and Ahmed Bounekkar, Université de Lyon, Lyon 2, ERIC UR 3083, F69676 Bron Cedex, France

ABSTRACT

Each new generation of smartphone gains capabilities that increase performance and power efficiency, allowing us to use them for increasingly complex calculations such as deep learning. In this paper, four Android deep learning inference frameworks (TFLite, MNN, NCNN and PyTorch) were implemented to evaluate the most recent generation of Systems on a Chip (SoC): the Samsung Exynos 2100 and the Qualcomm Snapdragon 865+ and 865. Our work focused on the image classification task using five state-of-the-art models. Inference was run on the 50,000 images of the ImageNet 2012 validation subset. Latency and accuracy were measured under various scenarios (CPU, OpenCL, Vulkan), with and without multi-threading. Power efficiency and a real-world use case were evaluated from these results, as we ran the same experiment on the devices' camera stream until they had consumed 3% of their battery. Our results show that low-level software optimizations, image pre-processing algorithms, the conversion process and cooling design have an impact on latency, accuracy and energy efficiency.
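
A generic sketch of such a latency measurement (our simplification, not the authors' harness; `run_inference` is a hypothetical placeholder for any of the four frameworks' invoke calls):

```python
# Time repeated single-image inferences after a warm-up phase and report
# the median latency in milliseconds.
import time
import statistics

def benchmark(run_inference, images, warmup=10):
    for img in images[:warmup]:           # warm up caches / delegate setup
        run_inference(img)
    latencies = []
    for img in images:
        t0 = time.perf_counter()
        run_inference(img)
        latencies.append((time.perf_counter() - t0) * 1000.0)
    return statistics.median(latencies)

# usage: median_ms = benchmark(interpreter_invoke, validation_images)
```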

KEYWORDS

Deep Learning, On-device inference, Image classification, Mobile, Quantized Models.


Mutual Inlining: An Inlining Algorithm to Reduce the Executable Size

Yosi Ben-Asher, Western Digital Tefen and The University of Haifa (CS); Nidal Faour, Western Digital Tefen; Ofer Shinaar, Western Digital Tefen

ABSTRACT

We consider the problem of selecting an optimized subset of inlinings (replacing a call to a function by its body) that minimizes the resulting code size. Frequently, in embedded systems, the program's executable file must fit into a small memory. In such cases, the compiler should generate executables that are as small as possible. In particular, we seek to improve the code size obtained by the LLVM inliner executed with the -Oz option. One important aspect is whether or not this problem requires a global solution that considers the full span of the call graph, or a local solution (as is the case with the LLVM inliner) that decides whether to apply inlining to each call separately based on the expected code-size improvement. We have implemented a global type of inlining algorithm called Mutual Inlining that selects the next call-site (f() calls g()) to be inlined based on its global properties.
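
As a rough illustration of a global, size-driven inlining selection of this kind (a hypothetical sketch, not the authors' Mutual Inlining algorithm):

```python
# Greedily inline the callee with the best estimated whole-program size
# saving until no candidate saves any bytes. Simplifying assumptions: a
# fixed per-call overhead, and the out-of-line body can be dropped once a
# function has no remaining callers.
from dataclasses import dataclass, field

@dataclass
class Function:
    name: str
    body_size: int
    call_overhead: int = 5                         # bytes saved per removed call
    callers: list = field(default_factory=list)    # call sites into this function

def size_delta(callee: Function) -> int:
    """Estimated global size change if ALL remaining calls to `callee` are
    inlined: each copy adds body_size minus call overhead, and the
    out-of-line body is dropped once no callers remain."""
    n = len(callee.callers)
    return n * (callee.body_size - callee.call_overhead) - callee.body_size

def inline_globally(functions):
    while True:
        candidates = [f for f in functions if f.callers and size_delta(f) < 0]
        if not candidates:
            return
        best = min(candidates, key=size_delta)
        best.callers.clear()          # "inline" every call to it, drop the body

main = Function("main", 40)
helper = Function("helper", 6, callers=["main:1", "main:2"])
inline_globally([main, helper])       # helper: 2*(6-5) - 6 = -4, so inlined
print(helper.callers)                 # []
```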


Leveraging OpenAMP in Embedded Mixed-Critical Systems

Mridula Prakash, Department of Chief Technology Officer, Mysore, India

ABSTRACT

The aim of this paper is to provide details on the Open Asymmetric Multi-Processing (OpenAMP) framework in mixed critical systems. OpenAMP is an open-source software framework that provides software components for working with asymmetric multiprocessing (AMP) systems. The paper provides in-depth details on how to use OpenAMP in multicore systems when designing safety-critical projects.

KEYWORDS

OpenAMP, Multicore, Mixed Critical, Embedded Systems.


Investigating Cargo Loss in Logistics Systems using Low-Cost Impact Sensors

Prasang Gupta, Antoinette Young and Anand Rao, AI and Emerging Technologies, PwC

ABSTRACT

Cargo loss/damage is a very common problem faced by almost any business with a supply chain arm, leading to major problems like revenue loss and reputational damage. This problem can be solved by employing an asset and impact tracking solution, which is more practical and effective for high-cost cargo than for low-cost cargo because of the high costs associated with the sensors and the overall solution. In this study, we propose a low-cost solution architecture that is scalable, user-friendly, easy to adopt and viable for a large range of cargo and logistics systems. Taking inspiration from a real-life use case we solved for a client, we also provide insights into the architecture as well as the design decisions that make this a reality.

KEYWORDS

Asset tracking, Logistics, Cargo loss, Cargo damage, Impact sensor, Accelerometer sensor, Low-cost solution, No code AEP (Application Enablement Platform).


Decentralized Transaction Mechanism Based on Smart Contracts

Rishabh Garg, Department of Electrical & Electronics Engineering, Birla Institute of Technology & Science, K.K. Birla Goa Campus, India

ABSTRACT

Blockchain offers the possibility of replacing outdated identity systems and eliminating intermediaries. Identity management through blockchain can give individuals ownership of their identity by creating a global ID that serves multiple purposes. For user security and ledger consistency, asymmetric cryptography and distributed consensus algorithms can be employed. Blockchain technology, by virtue of its key features of decentralization, persistency, anonymity and auditability, would save cost and increase efficiency. Further, a digital identity platform would benefit citizens by allowing them to save time when accessing or providing their personal data and records. Instead of being required to show up to services in person to produce a physical form of ID, users could be provided with a digital ID on a personal device, such as a smartphone, that can be shared with services conveniently and securely through a DLT.
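
As a minimal illustration of the asymmetric-cryptography building block mentioned above (a generic sketch, not any specific identity platform's API):

```python
# A user device holds a private key, publishes the public key as part of a
# digital ID, and proves control of the ID by signing a service challenge.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()   # stays on the personal device
public_key = private_key.public_key()        # published with the digital ID

challenge = b"service-login-nonce-42"        # hypothetical service challenge
signature = private_key.sign(challenge)

public_key.verify(signature, challenge)      # raises InvalidSignature if forged
print("identity proof verified")
```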

KEYWORDS

Blockchain, Decentralized Apps, Data Portability, Decentralized Public Key Infrastructure (DPKI), DID, Ethereum, Hash, IAM framework, Identity Management System (IMS), IPFS, Private Key, Public Key, Revocation, SSI, Storage Variables, Validation, Zero Knowledge Proof.


Original Appropriation in the Bitcoin Mining Process

João Victor Barcellos Machado Correia, Law Department, Vale do Cricaré University Center (UVC), São Mateus, Brazil

ABSTRACT

When we analyze the bitcoin mining process, it is first necessary to understand whether the bitcoin protocol truly guarantees ownership of the cryptocurrency against everyone. In this sense, starting from Kantian theory, it is clear that the de facto arrangement of the technology does not guarantee ownership, which can only be guaranteed through the state. Moreover, there are practical cases of protocol failure. Thus, in the face of the immutability of the blockchain protocol, the problem of original appropriation in the bitcoin mining process remains to be solved legally, which is done here through the theory of cryptographic fruits. Broadly speaking, the mined bitcoin should be viewed as a new kind of civil fruit of the mining device. As a result, by owning the mining machine, one ends up owning what it mines. For all that, the theoretical structure of cryptographic fruits addresses the legal flaws of the bitcoin protocol.

KEYWORDS

Original Appropriation, Bitcoin Mining Process, Cryptocurrency, Blockchain, Cryptographic Fruits.


Understanding the Effect of IoT Adoption on the Behavior of Firms: An Agent-Based Model

Riccardo Occa and Francesco Bertolotti, LIUC – Università Cattaneo, Corso G. Matteotti 22, Castellanza (VA), Italy

ABSTRACT

In the context of the increasing diffusion of technologies related to Industry 4.0, and the Internet of Things in particular, we have developed an agent-based model to simulate the effect of IoT diffusion in companies and to assess its potential benefits and risks. The model shows how IoT diffusion has the potential to influence the market by supporting both quality improvements and cost reductions. The results of the model also confirm the potential for significant benefits for businesses, suggesting the opportunity to support the introduction and application of IoT, and clearly show how the use of IoT can be a key strategic choice in competitive market contexts focused on cost strategies to increase business performance and prospects.
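
A toy sketch of the kind of firm-level dynamics such a model can encode (our hypothetical simplification; all parameters are illustrative, not the authors'):

```python
# Firms that adopt IoT gradually lower unit cost and raise quality; market
# share then shifts toward them in a cost-driven market.
class Firm:
    def __init__(self, adopts_iot: bool):
        self.adopts_iot = adopts_iot
        self.cost = 100.0
        self.quality = 1.0

    def step(self):
        if self.adopts_iot:
            self.cost *= 0.99        # assumed incremental cost saving
            self.quality *= 1.005    # assumed incremental quality gain

firms = [Firm(adopts_iot=(i < 5)) for i in range(10)]
for _ in range(50):                  # 50 simulated periods
    for f in firms:
        f.step()

# Market share inversely proportional to cost (cost-strategy market).
weights = [1 / f.cost for f in firms]
total = sum(weights)
for i, f in enumerate(firms):
    print(i, f.adopts_iot, round(weights[i] / total, 3))
```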

KEYWORDS

IoT, agent-based modelling, simulation, adoption, risk, blockchain.


Improved Lossless Image Compression and Processing

Manga, I.1, Garba, E. J.2 and Ahmadu, A. S., 1Department of Computer Science, Adamawa State University, Mubi, Nigeria, 2Department of Computer Science, Modibbo Adama University, Yola, Nigeria

ABSTRACT

The growth and development of modern information and communication technologies has led the demand for data compression to increase rapidly. Recent developments in the fields of computer science and information technology constantly generate large amounts of data. Data compression is an important aspect of information processing. Data that can be compressed include image data, video data, textual data and audio data. Image compression refers to the process of representing an image using fewer bits. Basically, two types of data compression exist: lossless and lossy. The major aim of lossless image compression is to reduce the redundancy and irrelevance of image data for better storage and transmission. The lossy compression scheme leads to a high compression ratio while the image experiences a loss in quality. However, there are many cases where the loss of image quality or information due to compression needs to be avoided, such as medical, artistic and scientific images. Efficient lossless compression therefore becomes paramount, although lossy compressed images are usually satisfactory in diverse cases. The objectives of the research are to explore existing lossless image compression algorithms, to design an efficient and effective lossless image compression technique based on LZW-BCH lossless image compression to reduce redundancies in the image, and to demonstrate image enhancement using the Gaussian filter algorithm. A secondary method of data collection was used to collect the data, and standard research images were used to validate the new scheme. To achieve these objectives, the Java programming language (JDK 8.0) was used to develop the compression scheme, and MATLAB was used to analyze the space and time complexity of the existing compression scheme against the enhanced scheme. The findings revealed that the average compression ratio of the enhanced lossless image compression scheme was 1.6489 and the average bits per pixel was 5.416667.
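
For reference, the two reported metrics can be computed as follows (generic formulas, not the authors' code; the image size below is hypothetical):

```python
# Compression ratio and bits per pixel (bpp) for a lossless image codec.
def compression_ratio(original_bytes: int, compressed_bytes: int) -> float:
    return original_bytes / compressed_bytes

def bits_per_pixel(compressed_bytes: int, width: int, height: int) -> float:
    return compressed_bytes * 8 / (width * height)

# e.g. a 512x512 8-bit greyscale image compressed to 159 KiB:
orig = 512 * 512          # one byte per pixel
comp = 159 * 1024
print(round(compression_ratio(orig, comp), 4))   # ~1.61, near the 1.6489 average
print(round(bits_per_pixel(comp, 512, 512), 4))  # ~4.97 bits per pixel
```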

KEYWORDS

Lossless, Image, Compression, Processing.


Adaptive Forgetting, Drafting and Comprehensive Guiding: Text-to-image Synthesis with Hierarchical Generative Adversarial Networks

School of Electronic Engineering, Xidian University, Xi’an, China

ABSTRACT

In this paper, we propose to boost text-to-image synthesis through Adaptive Learning and Generating Generative Adversarial Networks (ALG-GANs). First, we propose an adaptive forgetting mechanism in the generator to reduce error accumulation and learn knowledge flexibly in the cascade structure. Besides, to evade the mode collapse caused by strongly biased supervision, we propose a multi-task discriminator using weak-supervision information to guide the generator more comprehensively and maintain semantic consistency in the cascade generation process. To avoid the refinement difficulty caused by bad initializations, we judge the quality of each initialization before further processing: the generator re-samples the noise and re-initializes bad initializations to obtain good ones. All the above contributions are integrated in a unified framework, an adaptive forgetting, drafting and comprehensive guiding based text-to-image synthesis method with hierarchical generative adversarial networks. The model is evaluated on the Caltech-UCSD Birds 200 (CUB) dataset and the Oxford 102 Category Flowers (Oxford) dataset with standard metrics. The results on Inception Score (IS) and Fréchet Inception Distance (FID) show that our model outperforms previous methods.
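
A schematic sketch of the re-initialization step described above (our reading of the abstract; `quality_score` is a hypothetical stand-in for the paper's initialization-quality judge):

```python
# Re-sample the noise vector until the stage-1 draft passes a quality
# check, then hand the draft to the later refinement stages.
import numpy as np

rng = np.random.default_rng(0)

def quality_score(image: np.ndarray) -> float:
    # placeholder: any scalar judge of the stage-1 draft's quality
    return float(image.std())

def generate_with_resampling(stage1, threshold=0.1, max_tries=5):
    """Re-initialize bad drafts by re-sampling the noise, as the abstract
    describes, instead of refining a poor initialization."""
    for _ in range(max_tries):
        z = rng.standard_normal(128)     # noise vector
        draft = stage1(z)                # low-resolution initialization
        if quality_score(draft) >= threshold:
            return draft                 # good enough to refine further
    return draft                         # fall back to the last draft
```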

KEYWORDS

Text-to-Image Synthesis, Generative Adversarial Network, Forgetting Mechanism, Semantic Consistency.


Reach Us

aifu@ccsea2022.org


aifuconf@yahoo.com
