2020. 2nd Issue
Volume XII, Number 2
Full issue (10.4 MB)
MESSAGE FROM THE GUEST EDITORS
Special Issue on Quality Achievements at BME-VIK with Student Contributions in EFOP-3.6.2-16-013 – Guest Editorial
The project EFOP-3.6.2-16-013, "Thematic Research Collaborations for Innovative Informatics and Infocommunication Solutions" (abbreviated as 3IN), started in September 2017. The abbreviation refers to the three participating institutions, Eötvös Loránd University (ELTE), Budapest University of Technology and Economics (BME), and Pázmány Péter Catholic University (PPKE), and to the three innovation areas in focus: Software Development and Information Security (Pillar A), Infocommunication Networks and Cyberphysical Systems (Pillar B), and Intelligent Data Analysis (Pillar C).
Csaba Simon, Markosz Maliosz, Miklós Máté, Dávid Balla and Kristóf Torma
Sidecar based resource estimation method for virtualized environments
The widespread use of virtualization technologies in telecommunication systems has resulted in a series of benefits, such as flexibility, agility and increased resource usage efficiency. Nevertheless, the use of Virtualized Network Functions (VNFs) in virtualized modules (e.g., containers, virtual machines) also means that some legacy mechanisms that are crucial for telco-grade operation are no longer efficient. Specifically, the monitoring of the resource sets (e.g., CPU power, memory capacity) allocated to VNFs can no longer rely on the methods developed for earlier deployment scenarios. Even recent monitoring solutions designed for cloud environments are rendered useless if the VNF vendor or the telco solution supplier has to deploy its product into a virtualized environment, since it does not have access to the host-level monitoring tools. In this paper we propose a sidecar-based solution to evaluate the resources available to a virtualized process. We evaluated the accuracy of our proposal in a proof-of-concept deployment using the KVM, Docker and Kubernetes virtualization technologies. We show that our proposal can provide real monitoring data and discuss its applicability.
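The core observation of the abstract, that a sidecar sharing the VNF's resource limits can measure what the VNF itself experiences, can be sketched roughly as follows. This is an illustrative assumption, not the authors' implementation: a calibrated busy-loop benchmark run inside the same container, whose throughput drops in proportion to the cgroup CPU share actually granted.

```python
import time

def benchmark_iterations(duration=0.1):
    """Count fixed-work iterations completed in a wall-clock window.

    A sidecar inside the container is subject to the same cgroup CPU
    limits as the main process, so the achieved count reflects the CPU
    share actually available to the pod, without host-level access.
    """
    count = 0
    deadline = time.perf_counter() + duration
    while time.perf_counter() < deadline:
        x = 0
        for i in range(1000):  # one fixed unit of work
            x += i * i
        count += 1
    return count

def estimate_cpu_share(baseline):
    """Estimate available CPU fraction relative to an unthrottled baseline run."""
    observed = benchmark_iterations()
    return min(observed / baseline, 1.0)
```

The baseline would be measured once on an unconstrained host; the sidecar then reports the ratio periodically.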
Ádám Marosits, Ágoston Schranz and Eszter Udvary
Amplified spontaneous emission based quantum random number generator
There is an increasing need for true random bits, for which true random number generators (TRNGs) are absolutely necessary, because the output of pseudo-random number generators is deterministically calculated from the previous states. We introduce our quantum random number generator (QRNG) based on amplified spontaneous emission (ASE), a truly random quantum physical process. The experimental setup utilizes the randomness of this process, in which optical amplifiers (based on ASE) play the major role. A suitable sampling rate is selected in order to build the fastest possible generator while avoiding correlation between consecutive bits. Furthermore, the applied post-processing increases the quality of the random bits. As a result, our system generated random bits which successfully passed the NIST tests. Our real-time generation system, currently a trial version implemented with inexpensive equipment, will be available for public use, generating real-time random bits via a web page.
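The abstract does not specify which post-processing is applied; a classic choice for removing bias from independent raw bits is the von Neumann extractor, sketched below purely as an illustration of the kind of step involved:

```python
def von_neumann_extract(bits):
    """Von Neumann debiasing: map pairs 01 -> 0, 10 -> 1, discard 00 and 11.

    Removes bias from independent but unevenly distributed raw bits at
    the cost of throughput (on average at most 25% of the input length
    survives), which is one reason sampling rate and post-processing
    must be balanced in a fast generator.
    """
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out
```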
David Kobor and Eszter Udvary
Optimisation of Optical Network for Continuous-Variable Quantum Key Distribution by Means of Simulation
The unprecedented breakthroughs in the field of quantum computing in the last several years threaten to render our current communication systems exploitable. To address this issue, researchers are increasingly involved in finding methods to protect these systems. Amongst other tools, quantum key distribution could be a potentially applicable way to achieve the desired level of protection. In this paper we evaluate, by means of simulation, the physical layer of an optical system realising continuous-variable quantum key distribution (CVQKD) to determine its weak points and suggest methods to improve them. We found that polarisation-dependent devices are crucial for proper operation; we therefore determined their most defining parameters from an operational point of view and suggested extra optical devices to largely improve transmission quality. We also paid attention to polarisation control in these sorts of systems. Our findings could be valuable as practical considerations for constructing reliable CVQKD optical transmission links.
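To illustrate why polarisation-dependent devices matter, the transmitted power fraction through a polarising element follows Malus's law, limited by the device's finite extinction ratio. The sketch below is a textbook model with assumed parameter names, not the authors' simulation:

```python
import math

def polarisation_transmission(theta_deg, extinction_ratio_db=30.0):
    """Transmitted power fraction through a polariser at angle theta.

    Malus's law (cos^2 theta) with a finite extinction ratio: even at
    90 degrees a real device leaks a small fraction of the power,
    which degrades the quadrature measurements a CVQKD receiver needs.
    """
    leak = 10 ** (-extinction_ratio_db / 10)  # residual transmission floor
    t = math.cos(math.radians(theta_deg)) ** 2
    return t + leak * (1 - t)
```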
Gergő Ládi, Levente Buttyán and Tamás Holczer
GrAMeFFSI: Graph Analysis Based Message Format and Field Semantics Inference For Binary Protocols, Using Recorded Network Traffic
Protocol specifications describe the interaction between different entities by defining message formats and message processing rules. Having access to such protocol specifications is highly desirable for many tasks, including the analysis of botnets, building honeypots, defining network intrusion detection rules, and fuzz testing protocol implementations. Unfortunately, many protocols of interest are proprietary, and their specifications are not publicly available. Protocol reverse engineering is an approach to reconstruct the specifications of such closed protocols. Protocol reverse engineering can be tedious work if done manually, so prior research focused on automating the reverse engineering process as much as possible. Some approaches rely on access to the protocol implementation, but in many cases, the protocol implementation itself is not available or its license does not permit its use for reverse engineering purposes. Hence, in this paper, we focus on reverse engineering protocol specifications relying solely on recorded network traffic. More specifically, we propose GrAMeFFSI, a method based on graph analysis that can infer protocol message formats as well as certain field semantics for binary protocols from network traces. We demonstrate the usability of our approach by running it on packet captures of two known protocols, Modbus and MQTT, then comparing the inferred specifications to the official specifications of these protocols.
Dávid Papp, Zsolt Knoll and Gábor Szűcs
Graph construction with condition-based weights for spectral clustering of hierarchical datasets
Most unsupervised machine learning algorithms focus on clustering the data based on similarity metrics, while ignoring other attributes, or perhaps other types of connections between the data points. In the case of hierarchical datasets, groups of points (point-sets) can be defined according to the hierarchy system. Our goal was to develop a spectral clustering approach that preserves the structure of the dataset throughout the clustering procedure. The main contribution of this paper is a set of conditions for the weighted graph construction used in spectral clustering. Following the requirements given by the set of conditions ensures that the hierarchical formation of the dataset remains unchanged, and therefore the clustering of data points implies the clustering of point-sets as well. The proposed spectral clustering algorithm was tested on three datasets; the results were compared to baseline methods, and it can be concluded that the algorithm with the proposed conditions always preserves the hierarchy structure.
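The paper's actual conditions are not reproduced here, but the general idea of condition-based graph construction can be sketched as follows: ordinary Gaussian similarity weights, boosted for pairs belonging to the same point-set so that intra-set edges dominate and the hierarchy survives clustering. The boost factor and function names are illustrative assumptions:

```python
import math

def hierarchical_weights(X, groups, intra_boost=10.0, sigma=1.0):
    """Build a symmetric weighted adjacency matrix for spectral clustering.

    X: list of points (tuples of floats); groups: point-set label per point.
    Pairs within the same point-set get their Gaussian similarity weight
    multiplied by intra_boost, so cutting inside a point-set is expensive
    and the hierarchical formation is preserved.
    """
    n = len(X)
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
            w = math.exp(-d2 / (2 * sigma ** 2))
            if groups[i] == groups[j]:
                w *= intra_boost  # condition: intra-set edges dominate
            W[i][j] = W[j][i] = w
    return W
```

The resulting matrix would then feed a standard spectral clustering pipeline (graph Laplacian, eigenvectors, k-means).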
Csongor Pilinszki-Nagy and Bálint Gyires-Tóth
Performance Analysis of Sparse Matrix Representation in Hierarchical Temporal Memory for Sequence Modeling
Hierarchical Temporal Memory (HTM) is a special type of artificial neural network (ANN) that differs from the widely used approaches. It is suited to efficiently model sequential data (including time series). The network implements a variable-order sequence memory, it is trained by Hebbian learning, and all of the network's activations are binary and sparse. The network consists of four separable units. First, the encoder layer translates the numerical input into sparse binary vectors. The Spatial Pooler performs normalization and models the spatial features of the encoded input. The Temporal Memory is responsible for learning the Spatial Pooler's normalized output sequence. Finally, the decoder takes the Temporal Memory's outputs and translates them into the target. The connections in the network are also sparse, which requires prudent design and implementation. In this paper a sparse matrix implementation is elaborated and compared to the dense implementation. Furthermore, the HTM's performance is evaluated in terms of accuracy, speed and memory complexity, and compared to the deep neural network-based LSTM (Long Short-Term Memory).
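Since HTM activations are binary with only a small fraction of bits active, storing just the active indices instead of a dense 0/1 array is the natural sparse representation. The minimal sketch below (class and method names are illustrative, not the paper's implementation) shows the idea and the overlap operation that HTM matching relies on:

```python
class SparseBinaryVector:
    """Sparse representation of an HTM-style binary activation.

    Only the indices of active (1) bits are stored, so memory scales
    with the number of active bits rather than with the vector size.
    """
    def __init__(self, size, active):
        self.size = size
        self.active = frozenset(active)

    def overlap(self, other):
        """Number of shared active bits -- the core HTM matching operation,
        a set intersection instead of a full dense dot product."""
        return len(self.active & other.active)

    def to_dense(self):
        """Expand back to a dense 0/1 list (e.g., for a decoder stage)."""
        return [1 if i in self.active else 0 for i in range(self.size)]
```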
István Fábián and Gábor György Gulyás
De-anonymizing Facial Recognition Embeddings
Advances in machine learning and ever-cheaper hardware have made smart cameras equipped with facial recognition unprecedentedly widespread worldwide. While this undeniably has great potential for a wide spectrum of uses, it also bears novel risks. In our work, we consider a specific related risk, one related to face embeddings: metric values, produced by machine learning, that describe the face of a person. While embeddings seem like arbitrary numbers to the naked eye and are hard for humans to interpret, we argue that some basic demographic attributes can be estimated from them, and these values can then be used to look up the original person on social networking sites. We propose an approach for creating synthetic, life-like datasets consisting of the embeddings and demographic data of several people. We show over these ground-truth datasets that the aforementioned re-identification attacks do not require expert skills in machine learning in order to be executed. In our experiments, we find that even with simple machine learning models the proportion of successfully re-identified people varies between 6.04% and 28.90%, depending on the population size of the simulation.
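To make concrete what "simple machine learning models" can mean here, the sketch below estimates a demographic attribute for an unknown embedding by copying the label of its nearest labeled neighbour. This is a generic 1-nearest-neighbour illustration under assumed names, not the attack pipeline of the paper:

```python
import math

def nearest_neighbor_attribute(embedding, labeled):
    """Estimate a demographic attribute for an unknown face embedding.

    labeled: list of (vector, attribute_label) pairs with known attributes.
    Returns the label of the closest vector by Euclidean distance --
    about the simplest model an attacker could use.
    """
    best_label, best_dist = None, math.inf
    for vec, label in labeled:
        d = math.dist(embedding, vec)
        if d < best_dist:
            best_dist, best_label = d, label
    return best_label
```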
Levente Alekszejenkó and Tadeusz Dobrowiecki
Adapting IT Algorithms and Protocols to an Intelligent Urban Traffic Control
Autonomous vehicles, communicating with each other and with the urban infrastructure as well, open the opportunity to introduce new, complex and effective behaviours into intelligent traffic systems. Such systems can be perceived quite naturally as hierarchically built intelligent multi-agent systems, with decision making based upon well-defined and profoundly tested mathematical algorithms, borrowed e.g. from the field of information technology. In this article, two examples of how to adapt such algorithms to intelligent urban traffic are presented. Since the optimal and fair timing of the traffic lights is crucial in traffic control, we show how a simple Round-Robin scheduler and Minimal Destination Distance First scheduling (an adaptation of the theoretically optimal Shortest Job First scheduler) were implemented and tested for traffic light control. Another example is the mitigation of congested traffic using the analogy of the Explicit Congestion Notification (ECN) protocol of computer networks. We show that the optimal-scheduling-based traffic light control can handle roughly the same complexity of traffic as the traditional light programs in the nominal case. However, in extraordinary and especially rapidly evolving situations, the intelligent solutions can clearly outperform the traditional ones. The ECN-based method can successfully limit the traffic flowing through bounded areas. That way the number of passing-through vehicles in e.g. residential areas may be reduced, making them more comfortable, congestion-free zones in a city.
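The Round-Robin idea borrowed from IT scheduling is simple enough to sketch: each approach of an intersection receives the green phase in turn for a fixed quantum, regardless of queue length. The class and parameter names below are illustrative assumptions, not the article's implementation:

```python
from collections import deque

class RoundRobinLights:
    """Minimal Round-Robin traffic light controller sketch.

    Approaches (e.g., the four arms of an intersection) are served
    cyclically, each getting a fixed green-time quantum -- fair but
    oblivious to demand, exactly like the CPU scheduler it mirrors.
    """
    def __init__(self, approaches, quantum=30):
        self.queue = deque(approaches)
        self.quantum = quantum  # green time in seconds

    def next_phase(self):
        """Return (approach, green_seconds) and rotate the service order."""
        approach = self.queue[0]
        self.queue.rotate(-1)
        return approach, self.quantum
```

A Minimal Destination Distance First variant would instead pick the approach whose lead vehicle is closest to its destination, analogously to Shortest Job First.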
Dóra Varnyú and László Szirmay-Kalos
Comparison of Non-Linear Filtering Methods for Positron Emission Tomography
As a result of the limited radiotracer dose, acquisition time and scanner sensitivity, positron emission tomography (PET) images suffer from high noise. In the current clinical practice, post-reconstruction filtering has become one of the most common noise reduction techniques. However, the range of existing filters is very wide, and choosing the most suitable filter for a given measurement is far from simple. This paper aims to provide assistance in this choice by comparing the most powerful image denoising filters, covering both image quality and execution time. Emphasis is placed on non-linear techniques due to their ability to preserve edges and fine details more accurately than linear filters. The compared methods include the Gaussian, the bilateral, the guided, the anisotropic diffusion and the non-local means filters, which are examined in both static and dynamic PET reconstructions.
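Among the compared methods, the bilateral filter is the archetype of the edge-preserving behaviour the abstract emphasises: each sample is averaged with its neighbours, weighted by both spatial distance and intensity difference. A one-dimensional sketch (the paper works on 2D/3D PET images; this simplification is ours):

```python
import math

def bilateral_filter_1d(signal, radius=2, sigma_s=1.0, sigma_r=0.5):
    """1D bilateral filter: weights fall off with spatial distance
    (sigma_s) AND intensity difference (sigma_r), so neighbours across
    a sharp edge contribute almost nothing and the edge is preserved."""
    out = []
    for i, v in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)) \
              * math.exp(-((v - signal[j]) ** 2) / (2 * sigma_r ** 2))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out
```

On a flat region this reduces to plain Gaussian smoothing; across a step edge the range term suppresses mixing of the two sides.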
CALL FOR PAPERS
17th IFIP/IEEE International Symposium on Integrated Network and Service Management
IEEE IM 2021, Bordeaux, France
IEEE International Conference on Communications
IEEE ICC 2021, Montreal, QC, Canada
2020. 1st Issue
Volume XII, Number 1
Full issue (11.8 MB)
MESSAGE FROM THE GUEST EDITORS
Special Issue on Cognitive Infocommunications Theory and Applications – Guest Editorial
COGNITIVE infocommunications (CogInfoCom) investigates the link between the research areas of infocommunications and cognitive sciences, as well as the various engineering applications which have emerged as the synergic combination of these sciences. The primary goal of CogInfoCom is to provide a systematic view of how cognitive processes can co-evolve with infocommunications devices so that the capabilities of the human brain may not only be extended through these devices, irrespective of geographical distance, but may also be blended with the capabilities of any artificially cognitive system. This merging and extension of cognitive capabilities is targeted towards engineering applications in which artificial and/or natural cognitive systems are enabled to work together more effectively. The special issue presents the latest results in this scientific field.
Carl Vogel and Anna Esposito
Interaction Analysis and Cognitive Infocommunications
Cognitive infocommunications encompasses both scientific and engineering oriented approaches to examining extensions of human cognitive capabilities that may be assimilated within the concept of humanity. Necessary (but not sufficient) conditions for the success of any candidate technology include solving problems within private and public spheres of existence, in thought and communication. Exemplar cognitive infocommunication technologies that have been assimilated into the concept of humanity are examined: emotion, gesture, and language. Implications for research programmes conducted within the cognitive infocommunications discipline are outlined.
Nelson Mauro Maldonato, Benedetta Muzii, Mario Bottone, Raffaele Sperandeo, Donatella Di Corrado, Grazia Isabella Continisio, Teresa Rea, Anna Esposito
Unitas Multiplex. Biological architectures of consciousness
The so-called Posthuman question, the birth of organisms generated by the encounter of biological and artificial entities (humanoid robots, cyborgs and so on), is now on the agenda of science and, more generally, of contemporary society. This is an issue of enormous importance, which not only poses ethical questions but also, and above all, methodological questions about how it will be achieved on a scientific plane. How will such entities be born, and what will their functions be? For example, what kind of consciousness will they be equipped with, in view of the function of consciousness in distinguishing the Self from others, which is the foundation of the interactive life of relationships? Many scholars believe that rapid technological progress will lead to the emergence of organisms that will simulate the functions of the mind, learn from their experiences, decode real-world information, and plan their actions and choices based on their own values elaborated from vast amounts of data and metadata. In the not-too-distant future, it is believed that these entities will acquire awareness and, consequently, decisional freedom, and perhaps even their own unique morals. In this paper, we try to show that the path towards this goal cannot avoid clarification of the problems that neuroscience has ahead of it. These problems concern: a) the way in which consciousness comes about on the basis of well-defined brain processes; b) how it represents its own organization and is not a simple brain function; c) how it simultaneously contains multiple distinct contents, each with its own intentionality; d) how it expresses dynamic evolutionary relations and not a set of phenomena that may be isolated; e) finally, how its order is not rigidly hierarchical, but is supported by a multiplicity of horizontal levels, each of which is in a structural and functional continuum with different phenomenal events.
The empirical and theoretical research effort on this topic makes an important contribution to the development of IC technologies.
Masakazu Kanazawa, Atsushi Ito, Kazuyuki Yamasawa, Takehiko Kasahara, Yuya Kiryu and Fubito Toyama
Method to Predict Confidential Words in Japanese Judicial Precedents Using Neural Networks With Part-of-Speech Tags
Cognitive infocommunications involves a combination of informatics and telecommunications. In the future, infocommunication is expected to become more intelligent and life-supportive. Privacy is one of the most critical concerns in infocommunications. Encryption is a well-recognized technology that ensures privacy; however, it is not easy to completely hide personal information. One technique to protect privacy is to find confidential words in a file or a website and change them into meaningless words. In this paper, we investigate a technology used to hide confidential words taken from judicial precedents. In the Japanese judicial field, details of most precedents are not made available to the public on the Japanese court web pages, in order to protect the persons involved. To ensure privacy, confidential words, such as personal names, are replaced by other meaningless words. This operation takes time and effort because it is done manually. Therefore, it is desirable to predict confidential words automatically. We propose a method for predicting confidential words in Japanese judicial precedents by using part-of-speech (POS) tagging with neural networks. As a result, we obtained an 88% accuracy improvement over a previous model. In this paper, we describe the mechanism of our proposed model and the prediction results using perplexity. We then evaluated how useful our proposed model was for actual precedents by using recall and precision. As a result, our proposed model could detect confidential words in certain Japanese precedents.
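The anonymization step the paper automates can be caricatured in a few lines: given (word, POS-tag) pairs, replace words whose tag marks them as likely confidential with a placeholder. The paper's neural model learns which words to mask; this rule-based toy (all names are our assumptions) only illustrates the input/output shape of the task:

```python
def mask_confidential(tagged, confidential_pos=("PROPN",)):
    """Toy masking pass over POS-tagged text.

    tagged: list of (word, pos_tag) pairs. Words whose tag is listed in
    confidential_pos (here: proper nouns, e.g. personal names) are
    replaced by the placeholder 'X', mimicking the manual redaction of
    Japanese judicial precedents described in the abstract.
    """
    return ["X" if pos in confidential_pos else word for word, pos in tagged]
```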
Tibor Ujbányi, Attila Kővári, Gergely Sziládi and József Katona
Examination of the eye-hand coordination related to computer mouse movement
Eye-hand coordination is the ability to combine seeing with hand movement. It is a complex process consisting of a series of conscious actions. The fine motor skills of the hand are not innate but learned; the development of eye-hand coordination begins in infancy through various ball games, construction games and puzzle games. The coordinated work of eye and hand movement is the basis for many activities. The proper functioning of eye-hand coordination is necessary for many everyday activities such as writing, reading or driving, and the joint work of the eyes and hands is vital for certain forms of movement (ball catching, kicking). The eye plays an essential role in regulating fine movements. In this paper a general eye-hand coordination task is examined in relation to mouse cursor movement on a computer screen. An eye-hand tracking system was used to observe the gaze and hand paths during mouse cursor movement, and the acquired data were analyzed using a statistical t-test.
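The statistical comparison mentioned at the end would typically be a paired t-test over per-trial measurements (e.g., gaze path length vs. cursor path length, an assumed pairing for illustration). The statistic itself is short enough to spell out:

```python
import math

def paired_t_statistic(x, y):
    """Paired t-test statistic for two equal-length samples.

    Computes t = mean(d) / sqrt(var(d)/n) over the per-trial differences
    d_i = x_i - y_i, with the unbiased (n-1) variance estimate. The
    resulting t is compared against the t-distribution with n-1 degrees
    of freedom to judge significance.
    """
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n)
```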
Emőke Kiss, Marianna Zichar, István Fazekas, Gergő Karancsi and Dániel Balla
Categorization and geovisualization of climate change strategies using an open-access WebGIS tool
The focus of our paper is to present the power of collaborating databases in a web environment, where the data contain or are related to different types of social geography spatial data. Implementing data gained from the Climate Change Laws of the World, the United Nations Treaty Collection, the World Bank and The World Factbook, we developed a database of the Climate Change Strategies of the world's countries (called CCS). Our purpose is to publish and demonstrate the spatial visualization and categorization of these climate change strategies, and also to highlight the power of geovisualization in terms of cognitive InfoCommunications, using open-access WebGIS tools and geoinformatics software. The resulting geographic database is able to provide information for users about the different types of climate change strategies of the world's countries in a visual way, and can also be extended by uploading new data.
Mohammad Moghadasi and Gabor Fazekas
Multiple sclerosis Lesion Detection via Machine Learning Algorithm based on converting 3D to 2D MRI images
In the twenty-first century, there have been various scientific discoveries which have helped address some fundamental health issues. Specifically, the development of machines which are able to assess the internal conditions of individuals has been a significant boost to the medical field. This paper is the continuation of previous research which aimed to create artificial models using support vector machines (SVMs) to classify MS and normal brain MRI images, to analyze the effectiveness of these models, and to assess their potential use in Multiple Sclerosis (MS) diagnosis. In the previous study, presented at the Cognitive InfoCommunications (CogInfoCom 2019) conference, we showed that 3D images can be converted into 2D and classified by means of machine learning techniques and SVM tools. The previous paper concluded that SVM is a potential method which can be involved in MS diagnosis; however, in order to confirm this statement, more research and other potentially effective methods should be included and tested. This study therefore first continues the research on SVM used for classification and on Cellular Learning Automata (CLA), then expands the research to other methods such as Artificial Neural Networks (ANNs) and k-Nearest Neighbors (k-NN), and finally compares their results.
PAPERS FROM OPEN CALL
Roman N. Ipanov
Phase-Code Shift Keyed Probing Signals with Discrete Linear Frequency Modulation and Zero Autocorrelation Zone
Modern synthesized aperture radars (SARs), e.g. space SARs for remote sensing of the Earth, use signals with linear frequency modulation and signals with phase-code shift keying (PCSK) coded by M-sequences (MS) as probing signals. The utilization of PCSK signals permits an essential improvement of radar image quality at the stage of its compression along the azimuthal coordinate. In this paper, probing signals with a zero autocorrelation zone (ZACZ) are synthesized, which represent a sequence of two PCSK pulses with additional linear frequency modulation of the sub-pulses within the pulses. A comparative analysis of the correlation characteristics of the synthesized signal and the PCSK signal coded by an MS has been performed. It is shown that in the ZACZ, at a mismatch in the Doppler frequency, the level of all side lobes (SLs) of the autocorrelation function (ACF) of the synthesized signal is lower than the ACF SL level of the PCSK signal coded by an MS. The total ACF of the ensemble of four signals has zero SLs along the whole time axis τ, and at a mismatch in frequency in the ZACZ, it has a lower SL level than the total ACF SLs of the ensemble of four PCSK signals coded by MSs.
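For readers unfamiliar with the terminology: the side lobes in question are the off-zero values of the aperiodic autocorrelation of the phase code, and a zero autocorrelation zone means these vanish for small shifts. The generic computation (not the paper's synthesized signals) is:

```python
def acf(seq):
    """Aperiodic autocorrelation of a +/-1 phase-code sequence.

    acf(seq)[0] is the main lobe (the sequence energy); the remaining
    entries are the side lobes. A zero autocorrelation zone (ZACZ) of
    width Z means acf(seq)[tau] == 0 for 0 < tau <= Z.
    """
    n = len(seq)
    return [sum(seq[i] * seq[i + tau] for i in range(n - tau))
            for tau in range(n)]
```

For the length-4 Barker-like code [1, 1, 1, -1], the main lobe is 4 and every side lobe has magnitude at most 1, which is the kind of behaviour good probing codes aim for.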
Balazs Solymos and Laszlo Bacsardi
Real-time Processing System for a Quantum Random Number Generator
Quantum random number generators (QRNGs) provide quality random numbers, which are essential for cryptography, by utilizing the unpredictable nature of quantum mechanics. Advancements in quantum optics have made multiple different architectures for these possible. As part of a project aiming to realize a QRNG service, we developed a system capable of providing real-time monitoring and long-term data collection while still fulfilling regular processing duties for these devices. In most cases, hardware validation is done by simply running a battery of statistical tests on the final output. Our goal, however, was to create a system allowing more flexible use of these tests, realizing a tool that can also prove useful during the construction of our entropy source for detecting and correcting unique imperfections. We tested this flexibility and the system's ability to adequately perform the required tasks with simulated sources, while further examining the usability of available verification tools within this new custom framework.
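The simplest member of such a statistical test battery is the frequency (monobit) test, which a real-time monitoring pipeline can run continuously on the raw stream; the sketch below is the textbook statistic, not the authors' framework:

```python
import math

def monobit_statistic(bits):
    """Frequency (monobit) test statistic over a stream of 0/1 bits.

    Maps bits to +/-1, sums them, and normalizes by sqrt(n). For an
    unbiased source the statistic stays small; a drifting or biased
    entropy source pushes it up, flagging a hardware imperfection.
    """
    n = len(bits)
    s = abs(sum(1 if b else -1 for b in bits))
    return s / math.sqrt(n)
```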