International Science Index

International Journal of Computer and Information Engineering

2304
78160
Privacy as a Key Factor of Information and Communication Technologies Development in Healthcare
Abstract:
The transfer of a large part of human activity to cyberspace has made society heavily dependent on ICT. The appearance of smart devices has given rise to the Internet of Things (IoT), which adds a new dimension to the network: its use is no longer the exclusive domain of humans. Communication between devices alone has become possible, posing new challenges directly related to security threats and opening many new opportunities for unauthorized data use. While there is common awareness of the potential risks of using computers or networks, the use of intelligent things is wrongly seen as merely making life easier and, paradoxically, more secure. It is extremely important to notice behavior that seems unimportant but is likely to cause harm. In a world of ‘smart’ things, there are threats such as permanent surveillance, incessant and uncontrolled data leaks, or identity theft. The challenge is to formulate norms and enable legislative processes that keep pace with technological advancement. The use of ICT, especially in science and industry, is already changing everyday life. Societal aging and increasing healthcare expenditure make it imperative to expand the use of ICT in healthcare, where a revolution is expected through intelligent diagnostic support systems, continuous monitoring of health status, and specialized technologies enabling the remote adaptation of medical procedures. But there are dangers associated with the use of IoT in healthcare, requiring clearly defined criteria. Patients must be able to expect privacy and the safety of their medical data. As long as the gathered data is used only under the control of the people concerned, IoT is not a threat. Any unauthorized access to such data is a violation of the right to privacy, and therefore of the security of the individual. The main goal of the research is to make a comparative analysis of the legal conditions that characterize the use of ICT in healthcare in Poland in the context of EU legislation, while pointing out the postulated directions of change and trying to answer whether, and to what extent, the legal system is ready for the challenges of modern technologies. The pace of ICT introduction in healthcare differs between Poland and other EU countries, but is still inadequate. Even the seemingly indispensable necessity of ensuring the safety of medical data is not obvious. Sensitive data gets into the wrong hands. Providers are not prepared to share information about patients in real time because systems for processing electronic health records are not compatible. Utilizing ICT in the health sector will ensure a change in the approach towards patients and increased productivity. The need for privacy should be considered at the stage of technology design and implementation. The success of digitization in the health sector depends largely on how ready the legislation is for these revolutionary changes. Ensuring medical data security is paramount. Otherwise, social resistance and the costs resulting from, e.g., leakage of medical data and the use of such data in an unlawful or even threatening manner will be very high.
2303
78439
Nature of the Prohibition of Discrimination on Grounds of Sexual Orientation in EU Law
Abstract:
EU law encompasses many supranational legal systems (EU law, the ECHR, international public law, and constitutional traditions common to the Member States) which guarantee the protection of fundamental rights, with partly overlapping scopes of applicability, various principles of interpretation of legal norms, and a different hierarchy. In EU law, the prohibition of discrimination on grounds of sexual orientation originates from both primary and secondary EU legislation. At present, the prohibition is considered to be a fundamental right in pursuance of Article 21 of the Charter, but the Court has not yet determined whether it is a right or a principle within the meaning of the Charter. Similarly, the Court has not deemed this criterion to be a general principle of EU law. The personal and material scope of the prohibition of discrimination on grounds of sexual orientation based on Article 21 of the Charter must in each case be specified in another legal act of the EU in accordance with Article 51 of the Charter. The effect of the prohibition of discrimination on grounds of sexual orientation understood as above is two-fold, for the States and for the Union. On the one hand, one may refer to the legal instruments of review of EU law enforcement by a Member State laid down in the Treaties. On the other hand, EU law does not provide for a right to individual petition. Therefore, it is the duty of the domestic courts to protect the right of a person not to be discriminated against on grounds of sexual orientation in line with national procedural rules, within the limits and in accordance with the principles set out in EU law, in particular in Directive 2000/78. The development of the principle of non-discrimination in the Court’s case-law gives rise to certain doubts as to its applicability, namely whether the principle, as a general principle of EU law, may be granted an autonomous character with respect to its applicability to matters not included in the personal or material scope of the Directives, although within the EU’s competence. Moreover, both the doctrine and the opinions of the Advocates-General have called for a general competence of the CJEU with regard to fundamental rights, which, however, might lead to a violation of the principle of separation of competences. The aim of this paper is to answer the question of what the nature of the prohibition of discrimination on grounds of sexual orientation in EU law is (a general principle of EU law, or a principle or right under the Charter’s terminology). Therefore, the paper focuses on the nature of Article 21 of the Charter (a right or a principle) and on the scope (personal and material) of the prohibition of discrimination based on sexual orientation in EU law, as well as its effect (vertical or horizontal). The study has included the provisions of EU law together with the relevant CJEU case-law.
2302
78461
A World Map of Seabed Sediment Based on 50 Years Knowledge
Abstract:
SHOM initiated the production of a global sedimentological seabed map in 1995 to provide the tool needed for searches for aircraft and boats lost at sea, for sedimentary information on nautical charts, and as input data for acoustic propagation modeling. This project is original but had already been initiated in 1912, when the French hydrographic service and the University of Nancy produced a series of maps of the distribution of marine sediments along all the French coasts. During the following decade, this association led to the publication of the sediment map of the continental shelves of Europe and North America. The ocean sediment map presented here was initiated from a map of the deep ocean floor produced by UNESCO in the 1970s. That map was adapted using a single sediment classification in order to present all types of sediments: from beaches to the deep seabed and from glacial deposits to tropical sediments. In order to allow good visualization and to suit the different applications, only the grain size of the sediments is represented. Currently, more than 200 source maps have been integrated. These maps are sometimes published seabed maps; in such cases the work consists simply of validating the interest of the document, standardizing the sediment classification, and integrating it into the world map. Data also come from interpretations of Multibeam Echo Sounder (MES) imagery from large deep-ocean hydrographic surveys. These allow very high-quality mapping of areas that until then were represented as homogeneous. The third and principal source of data comes from the integration of regional maps produced specifically for this project. These regional maps are produced using all the bathymetric and sedimentary data of a region. This step makes it possible to produce a regional synthesis map, with generalizations applied where the data are over-precise. Eighty-six regional maps of the Atlantic, the Mediterranean, and the Indian Ocean have been produced and integrated into the world sedimentary map. This work is ongoing, and a new digital version, integrating new maps, is released every two years. This is the first and only article describing this global seabed map and its production. It describes the choices made in terms of sediment classification, the scale of the source data, and the zonation of quality variability. This map is the final step in a system comprising the Shom Sedimentary Database, enriched by more than one million point and surface data items, and four series of coastal seabed maps at 1:10,000, 1:50,000, 1:200,000 and 1:1,000,000. This approach makes it possible to take into account the progress made in seabed characterization during the last decades. In this way, the multiplication of sediment measurement systems and the compilation of all published data have gradually enriched a map that still relies, for many regions, on data acquired more than half a century ago.
2301
78786
A Modular Framework for Enabling Analysis for Educators with Different Levels of Data Mining Skills
Abstract:
Enabling data mining analysis among a wider audience of educators is an active area of research within the educational data mining (EDM) community. This paper proposes a framework for developing an environment that caters both for educators who have little technical data mining skill and for more advanced users with some data mining expertise. The framework architecture was developed through a review of the strengths and weaknesses of existing models in the literature. The proposed framework provides a modular architecture that allows future researchers to focus on the development of specific areas within the EDM process. Finally, the paper highlights a strategy of enabling analysis either through predefined questions or through a guided data mining process, and shows how the developed questions and the analyses conducted can be reused and extended over time.
2300
78465
A Software Engineering Methodology for Developing Secure Obfuscated Software
Abstract:
We propose a methodology to reconcile two apparently contradictory processes: the development of secure obfuscated software and the development of well-engineered software. Our methodology starts with the system designers defining the security level required for the software. We consider four types of attackers: casual attackers, hackers, institutions, and governments. Depending on the level of threat, the proposed methodology uses five or six teams to accomplish the task. One Software Engineering Team, one or two Software Obfuscation Teams, and a Compiler Team develop and compile the secure obfuscated software; a Code Breakers Team then tests the results of the previous teams to verify that the software cannot be broken at the required security level; and an Intrusion Analysis Team analyzes the results of the Code Breakers Team and proposes solutions to the development teams to prevent the detected intrusions. We also present an analytical model to prove that our methodology is not only easier to use but also provides an economical way of producing secure obfuscated software.
2299
77443
Machine Learning Approach for Stress Detection Using Wireless Physical Activity Tracker
Abstract:
Stress is a psychological condition that reduces the quality of sleep and affects every facet of life. Constant exposure to stress is detrimental not only to the mind but also to the body. Nevertheless, to cope with stress, one should first identify it. This paper provides an effective method for detecting cognitive stress levels using data provided by a physical activity tracker device, Fitbit. The device gathers daily data on food, weight, sleep, heart rate, and physical activity. In this paper, four major stressors, namely physical activity, sleep patterns, working hours, and change in heart rate, are used to assess the stress levels of individuals. The main motive of this system is to use a machine learning approach for stress detection with the help of smartphone sensor technology. The effect of each stressor is first evaluated individually using logistic regression; a combined model is then built and assessed using variants of ordinal logistic regression, namely the logit, probit, and complementary log-log models. The quality of each model is evaluated using the Akaike Information Criterion (AIC), and probit is assessed as the most suitable model for our dataset. The system is experimented with and evaluated in a real-time environment using data from adults working in IT and other sectors in India. The novelty of this work lies in the fact that the stress detection system should be as non-invasive as possible for the users.
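A minimal sketch of the model-comparison step described above, assuming the Fitbit-derived features have already been aggregated into a table: it fits ordinal regression models with logit and probit links using statsmodels and compares them by AIC. The file name and column names (sleep_minutes, active_minutes, heart_rate_delta, work_hours, stress_level) are hypothetical stand-ins, and the complementary log-log link is omitted because statsmodels' OrderedModel does not ship it as a built-in distribution.

import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical aggregated Fitbit features, one row per person-day.
df = pd.read_csv("fitbit_features.csv")
X = df[["sleep_minutes", "active_minutes", "heart_rate_delta", "work_hours"]]
y = df["stress_level"].astype(pd.CategoricalDtype(ordered=True))  # ordered stress levels

results = {}
for link in ("logit", "probit"):
    fit = OrderedModel(y, X, distr=link).fit(method="bfgs", disp=False)
    results[link] = fit.aic          # lower AIC = better fit/complexity trade-off
print(results, "-> best:", min(results, key=results.get))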
2298
78350
Fast Schroedinger Eigenmaps Algorithm for Hyperspectral Imagery Classification
Abstract:
Schroedinger Eigenmaps (SE) has shown high efficiency in hyperspectral dimensionality reduction tasks; it is based on the Laplacian Eigenmaps (LE) algorithm and a spatial-spectral potential matrix. In practice, SE suffers from high computational complexity, which may limit its exploitation in the remote sensing field. In this paper, we propose a fast variant of the Schroedinger Eigenmaps (SE) method called Fast Schroedinger Eigenmaps (Fast SE). The proposed approach is based on a fast variant of the Laplacian Eigenmaps (LE) algorithm and a spatial-spectral potential matrix; it replaces the quadratic constraint used in the quadratic optimization problem of the Laplacian Eigenmaps (LE) approach with a new linear constraint. This modification helps to preserve the data manifold structure in a way similar to SE, but with greater computational efficiency. A real hyperspectral scene was employed for the experimental analysis. The experimental results show effective classification accuracies with reduced computing time compared to SE and other well-known dimensionality reduction methods.
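For orientation, the sketch below shows the standard Schroedinger-Eigenmaps-style embedding that Fast SE builds on: a kNN graph Laplacian plus a diagonal potential, solved as a generalized eigenproblem with SciPy. The heat-kernel weighting, the random potential, and the parameter values are illustrative assumptions; the paper's linear-constraint reformulation is not reproduced here.

import numpy as np
from scipy.sparse import csgraph, diags
from scipy.sparse.linalg import eigsh
from sklearn.neighbors import kneighbors_graph

def schroedinger_eigenmaps(X, potential, n_components=2, k=10, alpha=0.1):
    # X: (n_samples, n_bands) spectral vectors; potential: nonnegative per-sample weights.
    W = kneighbors_graph(X, k, mode="distance", include_self=False)
    W = 0.5 * (W + W.T)                                   # symmetrize the kNN graph
    W.data = np.exp(-W.data ** 2 / (W.data.mean() ** 2))  # heat-kernel edge weights
    L = csgraph.laplacian(W, normed=False)
    D = diags(np.asarray(W.sum(axis=1)).ravel())
    V = diags(np.asarray(potential, dtype=float))
    vals, vecs = eigsh(L + alpha * V, k=n_components + 1, M=D, sigma=0, which="LM")
    return vecs[:, 1:]                                    # skip the first (near-constant) eigenvector

X = np.random.rand(500, 50)                               # stand-in for hyperspectral pixels
print(schroedinger_eigenmaps(X, potential=np.random.rand(500)).shape)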
2297
77739
Autonomic Management for Mobile Robot Battery Degradation
Abstract:
The majority of today’s mobile robots are heavily dependent on battery power. Mobile robots can operate untethered for a number of hours, but eventually they will need to recharge their batteries in order to continue to function. While computer processing and sensors have become cheaper and more powerful each year, battery development has progressed very little. Batteries are slow to recharge, inefficient, and lag behind the general progression of robotic development we see today. However, batteries are relatively cheap and, when fully charged, can supply the high power output necessary for operating heavy mobile robots. As there are no cheap alternatives to batteries, we need to find efficient ways to manage the power that batteries provide during their operational lifetime. This paper proposes the use of the autonomic principle of self-adaptation to address the behavioral changes a battery experiences as it gets older. In life, as we get older, we cannot perform tasks in the same way as we did in our youth; these tasks generally take longer to perform and require more of our energy to complete. Batteries suffer from a similar form of degradation: as a battery gets older, it loses the ability to retain the charge capacity it had when brand new. This paper investigates how the current state of battery charge and the cycle count can be used to adapt the way a mobile robot performs its tasks.
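As a toy illustration of the self-adaptation idea, the sketch below estimates usable energy from state of charge and cycle count with a linear capacity-fade model and defers tasks that no longer fit the budget. The fade rate, reserve margin, and task energy figures are made-up assumptions, not measurements from the paper.

NOMINAL_CAPACITY_WH = 200.0
FADE_PER_CYCLE = 0.0004            # assumed 0.04% capacity loss per charge cycle

def usable_energy(charge_fraction, cycle_count):
    # Estimate currently usable energy from state of charge and ageing.
    health = max(0.0, 1.0 - FADE_PER_CYCLE * cycle_count)
    return NOMINAL_CAPACITY_WH * health * charge_fraction

def plan_tasks(tasks, charge_fraction, cycle_count, reserve_wh=20.0):
    # Greedily keep tasks that fit the remaining energy budget; defer the rest.
    budget = usable_energy(charge_fraction, cycle_count) - reserve_wh
    accepted, deferred = [], []
    for name, cost_wh in tasks:
        if cost_wh <= budget:
            accepted.append(name)
            budget -= cost_wh
        else:
            deferred.append(name)
    return accepted, deferred

tasks = [("patrol", 60.0), ("mapping", 80.0), ("delivery", 50.0)]
print(plan_tasks(tasks, charge_fraction=0.8, cycle_count=900))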
2296
75439
Evaluating 8D Reports Using Text-Mining
Abstract:
Increasing quality requirements make reliable and effective quality management indispensable. This includes complaint handling, in which the 8D method is widely used. The 8D report, as the written documentation of the 8D method, is one of the key quality documents, as it internally secures the quality standards and acts as a communication medium to the customer. In practice, however, 8D reports are often faulty and of poor quality, and there is no quality control of 8D reports today. This paper describes the use of natural language processing for the automated evaluation of 8D reports. Based on semantic analysis and text-mining algorithms, the presented system is able to uncover content-related and formal quality deficiencies and thus increases the quality of complaint processing in the long term.
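A minimal sketch of such an automated check, assuming the 8D report is available as plain text with "D1:" through "D8:" section markers (an assumed layout, not a standard one): it flags missing or near-empty disciplines and reports the TF-IDF cosine similarity between the problem description (D2) and the root cause (D4) as a crude content signal. This illustrates the idea only; it is not the authors' system.

import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def split_sections(report_text):
    # Split on "D1:" .. "D8:" markers (assumed report layout).
    parts = re.split(r"\bD([1-8])\s*[:.]", report_text)
    return {f"D{parts[i]}": parts[i + 1].strip() for i in range(1, len(parts) - 1, 2)}

def evaluate(report_text, min_words=5):
    sections = split_sections(report_text)
    findings = []
    for d in [f"D{i}" for i in range(1, 9)]:
        if len(sections.get(d, "").split()) < min_words:
            findings.append(f"{d} missing or too short")
    if sections.get("D2") and sections.get("D4"):
        tfidf = TfidfVectorizer().fit_transform([sections["D2"], sections["D4"]])
        sim = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
        findings.append(f"D2/D4 lexical similarity: {sim:.2f}")
    return findings

sample = ("D1: team A. D2: leaking valve found on pump housing after assembly. "
          "D3: containment done. D4: seal damaged by wrong torque on valve housing. "
          "D5: ... D6: ... D7: ... D8: ...")
print(evaluate(sample))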
2295
77694
Empirical Exploration of Correlations between Software Design Measures: A Replication Study
Abstract:
Software engineers apply different measures to quantify the quality of software design. These measures consider artifacts developed in low- or high-level software design phases. The results are used to point to design weaknesses and to indicate design points that have to be restructured. Understanding the relationships among the quality measures and among the design quality aspects they consider is important for interpreting the impact of a measure for one quality aspect on other, potentially related aspects. In addition, exploring the relationships between quality measures helps explain the impact of different quality measures on external quality aspects, such as reliability and maintainability. In this paper, we report a replication study that empirically explores the correlations between six well-known and commonly applied design quality measures. These measures cover several quality aspects, including complexity, cohesion, coupling, and inheritance. The results indicate that the inheritance measures are weakly correlated with the other measures, whereas the complexity, coupling, and cohesion measures are most strongly correlated with each other.
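A minimal sketch of the correlation analysis, assuming the measures have been exported to a per-class table: it computes a Spearman rank-correlation matrix with pandas and flags weakly correlated pairs. The column names (wmc, cbo, lcom, dit, noc, rfc) are common object-oriented metrics used here as hypothetical stand-ins for the six measures studied.

import pandas as pd

metrics = pd.read_csv("class_metrics.csv")      # hypothetical export, one row per class
cols = ["wmc", "cbo", "lcom", "dit", "noc", "rfc"]
corr = metrics[cols].corr(method="spearman")    # rank correlation is robust to skewed metric distributions
print(corr.round(2))

# Flag weakly correlated pairs (|rho| < 0.3), e.g. inheritance measures vs. the rest.
weak = [(a, b, round(corr.loc[a, b], 2))
        for a in cols for b in cols if a < b and abs(corr.loc[a, b]) < 0.3]
print(weak)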
2294
78341
3D Object Retrieval Based on Similarity Calculation in 3D Computer Aided Design Systems
Abstract:
Recent technological advances in the acquisition, modeling, and processing of three-dimensional (3D) object data have led to the creation of models stored in huge databases, which are used in various domains such as computer vision, augmented reality, the game industry, medicine, CAD (computer-aided design), 3D printing, etc. At the same time, industry currently benefits from powerful modeling tools enabling designers to produce 3D models easily and quickly. The great ease of acquisition and modeling of 3D objects makes it possible to create large 3D model databases, which then become difficult to navigate. Therefore, the indexing of 3D objects appears as a necessary and promising solution for managing this type of data, extracting model information, retrieving an existing model, or calculating similarity between 3D objects. The objective of the proposed research is to develop a framework allowing easy and fast access to 3D objects in a CAD model database, with a specific indexing algorithm to find objects similar to a reference model. Our main objectives are to study existing methods for similarity calculation of 3D objects (essentially shape-based methods), specifying the characteristics of each method as well as the differences between them, and then to propose a new approach for indexing and comparing 3D models that is suitable for our case study and based on some of the previously studied methods. Our proposed approach is finally illustrated by an implementation and evaluated in a professional context.
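As a baseline illustration of shape-based similarity calculation, the sketch below computes a D2-style descriptor (a histogram of distances between randomly sampled vertex pairs) and compares two models with an L1 histogram distance. This is a generic textbook method, not the indexing approach proposed in the paper.

import numpy as np

def d2_descriptor(vertices, n_pairs=20000, bins=64, rng=np.random.default_rng(0)):
    v = np.asarray(vertices, dtype=float)
    v = v - v.mean(axis=0)
    v = v / (np.linalg.norm(v, axis=1).max() + 1e-12)          # scale to unit radius
    i = rng.integers(0, len(v), n_pairs)
    j = rng.integers(0, len(v), n_pairs)
    d = np.linalg.norm(v[i] - v[j], axis=1)                     # pairwise distances in [0, 2]
    hist, _ = np.histogram(d, bins=bins, range=(0.0, 2.0))
    return hist / (hist.sum() + 1e-12)

def similarity(desc_a, desc_b):
    return 1.0 - 0.5 * np.abs(desc_a - desc_b).sum()            # 1 = identical histograms

cube = np.array(np.meshgrid([0, 1], [0, 1], [0, 1])).reshape(3, -1).T
rng = np.random.default_rng(1)
sphere = rng.normal(size=(500, 3))
sphere /= np.linalg.norm(sphere, axis=1, keepdims=True)
print(similarity(d2_descriptor(cube), d2_descriptor(sphere)))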
2293
77417
Field Programmable Gate Array Placement Improvement Using a Genetic Algorithm and the Routing Algorithm as a Cost Function
Abstract:
In this paper, the simulated annealing cost function used for Field-Programmable Gate Array (FPGA) placement is investigated. It is found that minimizing the traditional cost function does not ensure minimization of the critical path. This opens an opportunity to investigate different cost functions. The paper shows that using the routing algorithm as a cost function improves the placement with respect to the final critical path. It is also found that, with this new cost function, a genetic algorithm has advantages over the traditional simulated annealing algorithm.
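A minimal sketch of the idea, with the router replaced by a placeholder: a genetic algorithm evolves placements and its fitness function delegates to route_critical_path, which here simply uses the worst Manhattan net length as a stand-in for a routed critical-path delay. Grid size, netlist, and GA parameters are illustrative assumptions.

import random

GRID, N_BLOCKS = 8, 16
NETS = [(random.randrange(N_BLOCKS), random.randrange(N_BLOCKS)) for _ in range(30)]

def random_placement():
    # placement[i] = (x, y) location of block i on the FPGA grid
    return random.sample([(x, y) for x in range(GRID) for y in range(GRID)], N_BLOCKS)

def route_critical_path(placement):
    # Placeholder cost: worst net Manhattan distance stands in for a routed critical path.
    return max(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1]) for a, b in NETS)

def crossover(p1, p2):
    child, used = [], set()
    for s1, s2 in zip(p1, p2):
        pick = s1 if s1 not in used else s2
        if pick in used:                                     # repair duplicates with a free site
            pick = next((x, y) for x in range(GRID) for y in range(GRID) if (x, y) not in used)
        child.append(pick)
        used.add(pick)
    return child

def mutate(p, rate=0.1):
    p = list(p)
    if random.random() < rate:
        i, j = random.sample(range(N_BLOCKS), 2)
        p[i], p[j] = p[j], p[i]                              # swap two block locations
    return p

def evolve(pop_size=30, generations=50):
    pop = [random_placement() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=route_critical_path)
        survivors = pop[:pop_size // 2]
        children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=route_critical_path)

print("estimated critical path:", route_critical_path(evolve()))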
2292
78281
Towards an Approach for Personalization of Web Services Composition
Abstract:
A web service (WS) is a ‘software application’ that is nowadays gaining increasing attention due to its ability to meet user needs efficiently. In some cases, the available WSs cannot meet the complex needs of users, and their adaptation to an environment in perpetual change remains a major problem for information system (IS) design. In this respect, service composition helps mitigate this problem. It represents a big challenge for systems and has received much attention in the literature. However, the satisfaction of these informational needs requires a dynamic and reusable environment. We therefore believe that incorporating a personalization aspect will be very useful for this composition process. In this work, our goals are to: i) propose a new approach that allows dynamic service composition and aims to meet the different needs of users; and ii) propose a personalization approach based on external resources such as ontologies and the user profile. This proposal allows service reuse in order to provide a relevant response to each user.
2291
77481
Optimized Preprocessing for Accurate and Efficient Bioassay Prediction with Machine Learning Algorithms
Abstract:
A bioassay is the measurement of the potency of a chemical substance by its effect on living animal or plant tissue. Bioassay data and chemical structures from pharmacokinetic and drug metabolism screening are mined from, and housed in, multiple databases. Bioassay predictions are then calculated to decide on further advancement. This paper proposes a four-step preprocessing of datasets for improving bioassay predictions. The first step is instance selection, in which the dataset is divided into training, testing, and validation sets. The second step is discretization, which partitions the data in consideration of accuracy vs. precision. The third step is normalization, where the data are normalized between 0 and 1 for subsequent machine learning processing. The fourth step is feature selection, where key chemical properties and attributes are generated. The streamlined results are then analyzed for the prediction of effectiveness by various machine learning algorithms implemented in Pipeline Pilot, R, Weka, and Excel. Experiments and evaluations reveal the effectiveness of various combinations of preprocessing steps and machine learning algorithms in producing more consistent and accurate predictions.
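A minimal sketch of the four preprocessing steps chained with scikit-learn, followed by a single classifier as a stand-in for the tools listed above. The file name, the 'active' label column, and the parameter choices are hypothetical assumptions.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import KBinsDiscretizer, MinMaxScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier

data = pd.read_csv("bioassay.csv")                       # hypothetical descriptor table
X, y = data.drop(columns=["active"]), data["active"]

# Step 1: instance selection into train / test / validation sets.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_test, X_val, y_test, y_val = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

model = Pipeline([
    ("discretize", KBinsDiscretizer(n_bins=10, encode="ordinal", strategy="quantile")),  # step 2
    ("normalize", MinMaxScaler(feature_range=(0, 1))),                                   # step 3
    ("select", SelectKBest(f_classif, k=20)),                                            # step 4
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
model.fit(X_train, y_train)
print("validation accuracy:", model.score(X_val, y_val))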
2290
75444
An Approach to Secure Mobile Agent Communication in Multi-Agent Systems
Abstract:
An inter-agent communication manager facilitates communication among mobile agents via a message passing mechanism. To date, all Foundation for Intelligent Physical Agents (FIPA) compliant agent systems are capable of exchanging messages following the standard format for sending and receiving messages. Previous works tend to secure the messages exchanged among a community of collaborative agents commissioned to perform specific tasks by using cryptosystems. However, this approach is characterized by computational complexity due to the encryption and decryption processes required at the two ends. The proposed approach to secure agent communication allows only agents that are created by the host agent server to communicate via the agent communication channel provided by the host agent platform. These agents are assumed to be harmless. Therefore, to secure the communication of legitimate agents from intrusion by external agents, a two-phase policy enforcement system was developed. The first phase constrains an external agent to run only on the network server, while the second phase confines the activities of the external agent to its execution environment. To implement the proposed policy, a controller agent is charged with screening any external agent entering the local area network and preventing it from migrating to the agent execution host where the legitimate agents are running. On arrival of the external agent at the host network server, an introspector agent is charged with monitoring and restraining its activities. This approach secures legitimate agent communication from Man-in-the-Middle and replay attacks.
2289
75446
Comparative Advantage of Mobile Agent Application in Procuring Software Products on the Internet
Abstract:
This paper brings to the fore the inherent advantages of applying mobile agents to procure software products rather than downloading software content over the internet. It proposes a system whereby the products come on a compact disk with a mobile agent as a deliverable. The client/user purchases a software product but must connect to the remote server of the software developer before installation. The user provides an activation code that activates the mobile agent, which is part of the software product on the compact disk. The validity of the activation code is checked on connection at the developer’s end to ascertain authenticity and prevent piracy. The system is evaluated by downloading two different software products and comparing this with installing the same products from compact disk using the mobile agent application. Downloading software content from the developer’s database, as in the traditional method, requires a continuously open connection between the client and the developer’s end, and such a fixed network is not always economically or technically feasible. A mobile agent, after being dispatched into the network, becomes independent of the creating process and can operate asynchronously and autonomously. It can reconnect later, after completing its task, and return to deliver its results. Response time and network load are minimal with the application of mobile agents.
2288
77466
Performance Assessment of Multi-Level Ensemble for Multi-Class Problems
Abstract:
Many supervised machine learning tasks require decision making across numerous different classes. Multi-class classification has several applications, such as face recognition, text recognition, and medical diagnostics. The objective of this article is to analyze an adapted method of Stacking for multi-class problems, which combines ensembles within the ensemble itself. For this purpose, a training scheme similar to Stacking was used, but with three levels, in which the final decision-maker (level 2) is trained by combining the outputs of the tree-based pair of meta-classifiers (level 1) from Bayesian families, which are in turn trained by pairs of base classifiers (level 0) of the same family. This strategy seeks to promote diversity among the ensembles forming the level-2 meta-classifier. Three performance measures were used: (1) accuracy, (2) area under the ROC curve, and (3) time, for three factors: (a) datasets, (b) experiments, and (c) levels. To compare the factors, a three-way ANOVA test was executed for each performance measure, considering 5 datasets by 25 experiments by 3 levels. A triple interaction between factors was observed only for time. Accuracy and area under the ROC curve presented similar results, showing a double interaction between level and experiment, as well as with the dataset factor. It was concluded that level 2 has an average performance above the other levels and that the proposed method is especially efficient for multi-class problems when compared to binary problems.
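A minimal sketch of a three-level stacking layout built by nesting scikit-learn's StackingClassifier: pairs of same-family base classifiers (level 0) feed two meta-classifiers (level 1), whose outputs are combined by a final decision-maker (level 2). The particular tree-based and Bayesian estimators and the iris dataset are illustrative choices, not the authors' exact configuration.

from sklearn.datasets import load_iris
from sklearn.ensemble import StackingClassifier, RandomForestClassifier, ExtraTreesClassifier
from sklearn.naive_bayes import GaussianNB, MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Level 1: meta-classifiers, each stacked over a pair of same-family base classifiers (level 0).
tree_meta = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("et", ExtraTreesClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000))
bayes_meta = StackingClassifier(
    estimators=[("gnb", GaussianNB()), ("mnb", MultinomialNB())],
    final_estimator=LogisticRegression(max_iter=1000))

# Level 2: final decision-maker combining the two level-1 meta-classifiers.
level2 = StackingClassifier(
    estimators=[("tree_meta", tree_meta), ("bayes_meta", bayes_meta)],
    final_estimator=LogisticRegression(max_iter=1000))

X, y = load_iris(return_X_y=True)                        # stand-in multi-class dataset
print(cross_val_score(level2, X, y, cv=5, scoring="accuracy").mean())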
2287
77493
Empirical Study of Discretization Methods on the Performance of Swarm Intelligence Based Classifiers
Abstract:
Discretization techniques are used with classifiers to handle continuous data in various classification problems. The selection of an appropriate discretization technique significantly impacts classification accuracy. In the contemporary literature, various discretization techniques have been proposed and incorporated in different classification scenarios to improve accuracy. However, the availability of different discretization techniques necessitates investigating their effect on classifier accuracy. The work presented in this paper investigates the impact of discretization techniques on classifier accuracy. Based on a comprehensive literature survey, eleven discretization techniques were selected for the purpose of this study, namely CAIM-D, Chi2-D, ChiMerge-D, ClusterAnalysis-D, DIBD-D, 1R-D, ID3-D, Ameva-D, Bayesian-D, CACC-D, and Class Attribute Dependent Discretization (CADD-D). In order to reveal the impact of these discretization techniques on the accuracy of a particular classifier, an empirical study was conducted using twelve publicly available standard datasets. A swarm intelligence based classification technique (AntMiner-Plus) was customized and evaluated with the selected discretization techniques on the stated datasets.
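A minimal sketch of such an evaluation harness: loop discretization schemes over datasets and record cross-validated accuracy. Since CAIM, Chi2, ChiMerge, and the other supervised discretizers are not available in scikit-learn, unsupervised KBinsDiscretizer strategies stand in for them, and a decision tree stands in for AntMiner-Plus.

from sklearn.datasets import load_iris, load_wine
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.tree import DecisionTreeClassifier

datasets = {"iris": load_iris(return_X_y=True), "wine": load_wine(return_X_y=True)}
schemes = {
    "equal-width": KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="uniform"),
    "equal-freq": KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="quantile"),
    "k-means": KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="kmeans"),
}

for dname, (X, y) in datasets.items():
    for sname, disc in schemes.items():
        clf = make_pipeline(disc, DecisionTreeClassifier(random_state=0))
        acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy").mean()
        print(f"{dname:5s} {sname:12s} accuracy={acc:.3f}")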
2286
76680
Secure Optimized Ingress Filtering in Future Internet Communication
Abstract:
Information-centric networking (ICN), using architectures such as the Publish-Subscribe Internet Technology (PURSUIT), has been proposed as a new networking model that aims at replacing the currently used end-centric networking model of the Internet. This emerging model focuses on what is being exchanged rather than on which network entities are exchanging information, which allows control plane functions such as routing and host location to be specified according to the content items. The forwarding plane of the PURSUIT ICN architecture uses a simple and lightweight mechanism based on Bloom filter technologies to forward packets. Although this forwarding scheme solves many problems of today’s Internet, such as the growth of the routing table and scalability issues, it is vulnerable to brute force attacks, which are a starting point for distributed denial-of-service (DDoS) attacks. In this work, we design and analyze a novel source-routing and information delivery technique that keeps the simplicity of Bloom filter-based forwarding while being able to deter different attacks, such as denial-of-service attacks, at the ingress of the network. To achieve this, special forwarding nodes called Edge-FW are directly attached to end user nodes and used to perform a security test for maliciously injected random packets at the ingress of the path, to prevent possible brute force attacks at an early stage. In this technique, a core entity of the PURSUIT ICN architecture called the topology manager, which is responsible for finding the shortest path and creating the forwarding identifiers (FIds), uses a cryptographically secure hash function to create a 64-bit hash, h, over the formed FId, to be included in the packet for authentication purposes. Our proposal restricts the attacker from injecting packets carrying random FIds with a high filling factor ρ by optimizing and reducing the maximum allowed filling factor ρm in the network. We optimize the FId to the minimum possible filling factor, where ρ ≤ ρm, while still supporting longer delivery trees, so that network scalability is not affected by the chosen ρm. With this scheme, the filling factor of any legitimate FId never exceeds ρm, while the filling factor of illegitimate FIds cannot exceed the chosen small value of ρm. Therefore, injecting a packet containing an FId with a large filling factor, to achieve a higher attack probability, is no longer possible. The preliminary analysis of this proposal indicates that, with the designed scheme, the forwarding function can detect and prevent malicious activities such as DDoS attacks at an early stage and with very high probability.
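A toy sketch of the Edge-FW check described above: an in-packet Bloom filter (FId) is built from link identifiers, a 64-bit keyed hash over the FId serves as the authentication tag, and packets are rejected when the tag does not verify or the filling factor exceeds ρm. The filter size, the HMAC-based construction, and the chosen ρm are illustrative assumptions, not the paper's exact design.

import hashlib, hmac

FID_BITS = 256
RHO_MAX = 0.35                       # maximum allowed filling factor rho_m (assumed)
SECRET = b"topology-manager-key"     # key shared by topology manager and Edge-FW (assumed)

def link_bits(link_id, k=3):
    digest = hashlib.sha256(link_id.encode()).digest()
    return [int.from_bytes(digest[4 * i:4 * i + 4], "big") % FID_BITS for i in range(k)]

def build_fid(path_links):
    fid = 0
    for link in path_links:
        for b in link_bits(link):
            fid |= 1 << b                                  # set k bits per link in the Bloom filter
    return fid

def sign(fid):
    return hmac.new(SECRET, fid.to_bytes(FID_BITS // 8, "big"), hashlib.sha256).digest()[:8]

def edge_fw_accept(fid, tag):
    rho = bin(fid).count("1") / FID_BITS                   # filling factor of the FId
    return rho <= RHO_MAX and hmac.compare_digest(tag, sign(fid))

fid = build_fid(["A-B", "B-C", "C-D"])
print("legitimate packet accepted:", edge_fw_accept(fid, sign(fid)))
forged = (1 << FID_BITS) - 1                               # attacker fills the FId to match many links
print("forged packet accepted:", edge_fw_accept(forged, b"\x00" * 8))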
2285
77011
Parallelization of Random Accessible Progressive Streaming of Compressed 3D Models over Web
Abstract:
Three-dimensional (3D) meshes are data structures that store the geometric information of an object or scene, generally in the form of vertices and edges. Current laser scanning and other geometric data acquisition technologies produce high-resolution samples, which lead to high-resolution meshes. While high-resolution meshes give better quality rendering and are therefore often used, the processing as well as the storage of 3D meshes is currently resource-intensive. At the same time, web applications for data processing have become ubiquitous owing to their accessibility. For 3D meshes, the advancement of 3D web technologies, such as WebGL and WebVR, has enabled high-fidelity rendering of huge meshes. However, there exists a gap in the ability to stream huge meshes to native client and browser applications due to high network latency. There is also an inherent delay in loading WebGL pages due to large and complex models. The focus of our work is to identify the challenges faced when such meshes are streamed into, and processed on, hand-held devices with their limited resources. One of the solutions conventionally used in the graphics community to alleviate resource limitations is mesh compression. Our approach consists of two steps: random accessible progressive compression and its parallel implementation. The first step is the partitioning of the original mesh into multiple sub-meshes, on which we then invoke data parallelism for compression. The corresponding threaded decompression logic is implemented inside the web browser engine by modifying the WebGL implementation in the open-source Chromium engine. This concept can be used to fundamentally change the way e-commerce and virtual reality technology work on consumer electronic devices. Objects can be compressed on the server and transmitted over the network; progressive decompression can then be performed on the client device and the result rendered. The multiple views currently used on e-commerce sites for viewing the same product from different angles can be replaced by a single progressive model for a better and smoother user experience. The approach can also be used in WebVR for common and widely used activities such as virtual reality shopping, watching movies, and playing games. Our experiments and comparison with existing techniques show encouraging results in terms of latency (the compressed size is ~10-15% of the original mesh), processing time (a 20-22% increase over the serial implementation), and quality of user experience in the web browser.
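A minimal sketch of the first step, partitioning a mesh and compressing the sub-meshes in parallel: zlib stands in for the actual progressive mesh codec, and splitting by vertex index ranges is a naive placeholder for a proper spatial partitioner.

import zlib
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def partition(vertices, n_parts):
    return np.array_split(vertices, n_parts)             # naive index-range partition

def compress_part(part):
    return zlib.compress(part.astype(np.float32).tobytes(), level=6)

if __name__ == "__main__":
    vertices = np.random.rand(400_000, 3)                 # stand-in for a scanned mesh
    parts = partition(vertices, n_parts=8)
    with ProcessPoolExecutor() as pool:                   # data-parallel compression of sub-meshes
        blobs = list(pool.map(compress_part, parts))
    raw = vertices.astype(np.float32).nbytes
    packed = sum(len(b) for b in blobs)
    print(f"compressed to {100 * packed / raw:.1f}% of raw size across {len(blobs)} sub-meshes")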
2284
77065
An Integrated Lightweight Naïve Bayes Based Webpage Classification Service for Smartphone Browsers
Abstract:
The internet world and its priorities have changed considerably in the last decade. Browsing on smartphones has increased manifold and is set to grow much more. Users spend considerable time browsing different websites, which gives a great deal of insight into their preferences. Instead of presenting plain information, classifying different aspects of browsing, such as Bookmarks, History, and Download Manager entries, into useful categories would improve and enhance the user’s experience. Most classification solutions are server-side, which involves maintaining servers and other heavy resources; they have security constraints and may miss contextual data during classification. On-device classification solves many of these problems, but the challenge is to achieve classification accuracy under resource constraints. On-device classification can be much more useful for personalization, reducing dependency on cloud connectivity, and providing better privacy and security. This approach provides more relevant results than current standalone solutions because it uses the content rendered by the browser, which is customized by the content provider based on the user’s profile. This paper proposes a Naive Bayes based lightweight classification engine targeted at resource-constrained devices. Our solution integrates with the web browser, which in turn triggers the classification algorithm. Whenever a user browses a webpage, the solution extracts DOM tree data from the browser’s rendering engine. This DOM data is dynamic, contextual, and secure data that cannot be replicated. The proposal extracts different features of the webpage, which are fed to an algorithm to classify the page into multiple categories. A Naive Bayes based engine is chosen in this solution for its inherent advantages in using limited resources compared to other classification algorithms such as Support Vector Machines, Neural Networks, etc. Naive Bayes classification requires a small memory footprint and little computation, making it suitable for the smartphone environment. The solution can partition the model into multiple chunks, which in turn reduces memory usage compared to loading the complete model. Classification of webpages through the integrated engine is faster, more relevant, and more energy-efficient than standalone on-device solutions. This classification engine has been tested on Samsung Z3 Tizen hardware. The engine is integrated into the Tizen Browser, which uses the Chromium rendering engine. For this solution, an extensive dataset was sourced from dmoztools.net and cleaned. The cleaned dataset has 227.5K webpages, which are divided into 8 generic categories ('education', 'games', 'health', 'entertainment', 'news', 'shopping', 'sports', 'travel'). Our browser-integrated solution resulted in 15% lower memory usage (due to the partition method) and 24% lower power consumption in comparison with the standalone solution. The solution used 70% of the dataset for training the data model and the remaining 30% for testing. An average accuracy of ~96.3% is achieved across the above-mentioned 8 categories. The engine can be further extended to suggest dynamic tags and to apply the classification to other use cases to enhance the browsing experience.
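A minimal sketch of the core classifier: a Multinomial Naive Bayes model over bag-of-words page text, targeting the eight categories listed above. The CSV layout is a hypothetical stand-in for the dmoztools.net-derived dataset; DOM feature extraction and the model-partitioning scheme are not shown.

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

CATEGORIES = ["education", "games", "health", "entertainment",
              "news", "shopping", "sports", "travel"]

data = pd.read_csv("webpages.csv")                     # columns assumed: text, category
X_train, X_test, y_train, y_test = train_test_split(
    data["text"], data["category"], test_size=0.3, random_state=0, stratify=data["category"])

model = make_pipeline(
    TfidfVectorizer(max_features=20000, stop_words="english"),  # bounded vocabulary for a small memory footprint
    MultinomialNB(alpha=0.1))
model.fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))
print(model.predict(["latest football transfer news and match highlights"]))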
2283
78219
i2kit: A Tool for Immutable Infrastructure Deployments
Abstract:
Microservice architectures are increasingly used in distributed cloud applications due to their advantages in software composition, development speed, release cycle frequency, and the time to market of business logic. On the other hand, these architectures also introduce some challenges in the testing and release phases of applications. Container technology solves some of these issues by providing reproducible environments, ease of software distribution, and isolation of processes. However, other issues remain unsolved in current container technology when dealing with multiple machines, such as networking for multi-host communication, service discovery, load balancing, or data persistency (even though some of these challenges are already solved by traditional cloud vendors in a very mature and widespread manner). Container cluster management tools, such as Kubernetes, Mesos, or Docker Swarm, attempt to solve these problems by introducing a new control layer where the unit of deployment is the container (or the pod, a set of strongly related containers that must be deployed on the same machine). These tools are complex to configure and manage, and they do not follow a pure immutable infrastructure approach, since servers are reused between deployments. Indeed, these tools introduce dependencies at execution time for solving networking or service discovery problems. If an error occurs in the control layer, which would affect running applications, specific expertise is required to perform ad-hoc troubleshooting. As a consequence, it is not surprising that container cluster support is becoming a source of revenue for consulting services. This paper presents i2kit, a deployment tool based on the immutable infrastructure pattern, where the virtual machine is the unit of deployment. The input for i2kit is a declarative definition of a set of microservices, where each microservice is defined as a pod of containers. Microservices are built into machine images using linuxkit, a tool for creating minimal Linux distributions specialized in running containers. These machine images are then deployed to one or more virtual machines, which are exposed through a cloud vendor load balancer. Finally, the load balancer endpoint is set in other microservices via an environment variable, providing service discovery. The i2kit toolkit reuses the best ideas from container technology to solve problems like reproducible environments, process isolation, and software distribution, while at the same time relying on mature, proven cloud vendor technology for networking, load balancing, and persistency. The result is a more robust system with no learning curve for troubleshooting running applications. We have implemented an open source prototype that transforms i2kit definitions into AWS CloudFormation templates, where each microservice AMI (Amazon Machine Image) is created on the fly using linuxkit. Even though container cluster management tools have more flexibility for resource allocation optimization, we argue that adding a new control layer brings more significant disadvantages. Resource allocation is greatly improved by using linuxkit, which has a very small footprint (around 35MB). The system is also more secure, since linuxkit installs the minimum set of dependencies needed to run containers. The i2kit toolkit is currently under development at the IMDEA Software Institute.
2282
78224
Automatic Aggregation and Embedding of Microservices for Optimized Deployments
Abstract:
Microservices are a software development methodology in which applications are built by composing a set of independently deployable, small, modular services. Each service runs a unique process and is instantiated and deployed on one or more machines (we assume that different microservices are deployed on different machines). Microservices are becoming the de facto standard for developing distributed cloud applications due to their reduced release cycles. In principle, the responsibility of a microservice can be as simple as implementing a single function, which can lead to the following issues: resource fragmentation due to the virtual machine boundary, and poor communication performance between microservices. Two composition techniques can be used to optimize resource fragmentation and communication performance: aggregation and embedding of microservices. Aggregation allows the deployment of a set of microservices on the same machine using a proxy server. Aggregation helps to reduce resource fragmentation and is particularly useful when the aggregated services have similar scalability behavior. Embedding deals with communication performance by deploying on the same virtual machine those microservices that require a communication channel (localhost bandwidth is reported to be about 40 times faster than cloud vendor local networks, and it offers better reliability). Embedding can also reduce dependencies on load balancer services, since the communication takes place on a single virtual machine. For example, assume that microservice A has two instances, a1 and a2, and it communicates with microservice B, which also has two instances, b1 and b2. One embedding can deploy a1 and b1 on machine m1, and a2 and b2 on a different machine m2. This deployment configuration allows each pair (a1-b1), (a2-b2) to communicate using the localhost interface without the need for a load balancer between microservices A and B. Aggregation and embedding techniques are complex, since different microservices might have incompatible runtime dependencies which forbid them from being installed on the same machine. There is also a security concern, since the attack surface between microservices can be larger. Luckily, container technology allows several processes to run on the same machine in an isolated manner, solving the incompatibility of runtime dependencies and the previous security concern, and thus greatly simplifying aggregation/embedding implementations: a microservice container is simply deployed on the same machine as the aggregated/embedded microservice container. Therefore, a wide variety of deployment configurations can be described by combining aggregation and embedding to create an efficient and robust microservice architecture. This paper presents a formal method that receives a declarative definition of a microservice architecture and proposes different optimized deployment configurations by aggregating/embedding microservices. The first prototype is based on i2kit, a deployment tool also submitted to ICWS 2018. The proposed prototype optimizes the following parameters: network/system performance, resource usage, resource costs, and failure tolerance.
2281
78802
Improved Super-Resolution Using Deep Denoising Convolutional Neural Network
Abstract:
Super-resolution is a technique used in computer vision to construct high-resolution images from low-resolution images. It is used to increase the high-frequency components, recover lost details, and remove the downsampling artifacts and noise introduced by the camera during the image acquisition process. High-resolution images or videos are desired in all image processing tasks and their analysis in most digital imaging applications. The aim of super-resolution is to combine non-redundant information within single or multiple low-resolution frames to generate a high-resolution image. Many methods have been proposed in which multiple images of the same scene, with different variations in transformation, are used as low-resolution inputs; this is called multi-image super-resolution. Another family of methods is single-image super-resolution, which tries to learn the redundancy present in an image and reconstruct the lost information from a single low-resolution image. The use of deep learning is one of the current state-of-the-art methods for reconstructing high-resolution images. In this research, we propose Deep Denoising Super Resolution (DDSR), a deep neural network that effectively reconstructs a high-resolution image from a low-resolution image.
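For orientation, the sketch below is a small SRCNN-style Keras network that maps bicubically upscaled low-resolution patches to high-resolution patches, trained here on random stand-in data. It is a generic single-image super-resolution baseline, not the DDSR architecture proposed in the paper.

import numpy as np
import tensorflow as tf

def build_sr_model(channels=1):
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(64, 9, padding="same", activation="relu",
                               input_shape=(None, None, channels)),  # feature extraction
        tf.keras.layers.Conv2D(32, 1, padding="same", activation="relu"),  # non-linear mapping
        tf.keras.layers.Conv2D(channels, 5, padding="same"),               # reconstruction
    ])

model = build_sr_model()
model.compile(optimizer="adam", loss="mse")
# Stand-in training pairs: (upscaled low-resolution patch, original high-resolution patch).
lr_up = np.random.rand(16, 33, 33, 1).astype("float32")
hr = np.random.rand(16, 33, 33, 1).astype("float32")
model.fit(lr_up, hr, epochs=1, batch_size=8, verbose=0)
print(model.predict(lr_up[:1]).shape)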
2280
77928
A Hierarchical Method for Multi-Class Probabilistic Classification Vector Machines
Abstract:
The Support Vector Machine (SVM) has become widely recognised as one of the leading algorithms in machine learning for both regression and binary classification. It expresses predictions in terms of a linear combination of kernel functions, referred to as support vectors. Despite its popularity amongst practitioners, SVM has some limitations, the most significant being the generation of point predictions as opposed to predictive distributions. Stemming from this issue, a probabilistic model, namely Probabilistic Classification Vector Machines (PCVM), has been proposed, which respects the original functional form of SVM whilst also providing a predictive distribution. As physical system designs become more complex, an increasing number of classification tasks involving industrial applications consist of more than two classes. Consequently, this research proposes a framework which allows for the extension of PCVM to a multi-class setting. Additionally, the original PCVM framework relies on the use of type II maximum likelihood to provide estimates for both the kernel hyperparameters and the model evidence. In a high-dimensional multi-class setting, however, this approach has been shown to be ineffective due to poor scaling as the number of classes increases. Accordingly, we propose the application of Markov Chain Monte Carlo (MCMC) based methods to provide a posterior distribution over both parameters and hyperparameters. The proposed framework will be validated against current multi-class classifiers through synthetic and real-life implementations.
2279
77535
Segmentation of Liver Using Random Forest Classifier
Abstract:
Nowadays, medical imaging has become an integral part of modern healthcare. Abdominal CT images are an invaluable means for abdominal organ investigation and have been widely studied in recent years. The diagnosis of liver pathologies is one of the major areas of current interest in the field of medical image processing and is still an open problem. To study and diagnose the liver in depth, segmentation of the liver is performed to identify which part of the liver is most affected. Manual segmentation of the liver in CT images is time-consuming and suffers from inter- and intra-observer differences. However, automatic or semi-automatic computer-aided segmentation of the liver is a challenging task due to inter-patient variability in liver shape and size. In this paper, we present a technique for automatically segmenting the liver from CT images using a random forest classifier. Random forests, or random decision forests, are an ensemble learning method for classification that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes of the individual trees. After comparison with various other techniques, it was found that the random forest classifier provides better segmentation results with respect to accuracy and speed. We have validated our results using various techniques, and the method shows above 89% accuracy in all cases.
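A minimal sketch of per-pixel random forest segmentation: simple intensity and neighborhood features are computed for each pixel of a CT slice and fed to scikit-learn's RandomForestClassifier. The synthetic slice, the feature set, and the pixel-wise split are illustrative stand-ins, not the paper's pipeline.

import numpy as np
from scipy import ndimage
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

def pixel_features(slice_hu):
    # Per-pixel features: raw intensity, local mean, gradient magnitude, local median.
    return np.stack([
        slice_hu,
        ndimage.uniform_filter(slice_hu, size=5),
        ndimage.gaussian_gradient_magnitude(slice_hu, sigma=1.5),
        ndimage.median_filter(slice_hu, size=5),
    ], axis=-1).reshape(-1, 4)

# Synthetic stand-in: a brighter elliptical "liver" region on a noisy background slice.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:128, 0:128]
mask = (((yy - 60) / 35.0) ** 2 + ((xx - 70) / 45.0) ** 2) < 1.0
slice_hu = rng.normal(40, 15, (128, 128)) + mask * 60

X = pixel_features(slice_hu)
y = mask.ravel().astype(int)
train = rng.random(len(y)) < 0.5                       # pixel-wise train/test split (sketch only)

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
clf.fit(X[train], y[train])
print("pixel accuracy:", accuracy_score(y[~train], clf.predict(X[~train])))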
2278
77479
Artificial Intelligence and Robotics in the Eye of Private Law with Special Regards to Intellectual Property and Liability Issues
Abstract:
In the last few years (what many scholars call the big data era), artificial intelligence (hereinafter AI) has received more and more attention from the public and from the different branches of science. What was previously mere science fiction is now starting to become reality. AI and robotics often walk hand in hand, which changes not only business and industrial life but also has a serious impact on the legal system. The main research of the author focuses on these impacts in the field of private law, with special regard to liability and intellectual property issues. Many questions arise in these areas in connection with AI and robotics, where the boundaries are not sufficiently clear and different needs are articulated by the different stakeholders. Recognizing the urgent need for reflection, the Committee on Legal Affairs of the European Parliament adopted a Motion for a European Parliament Resolution, A8-0005/2017 (of January 27th, 2017), in order to make recommendations to the Commission on civil law rules on robotics and AI. This document defines some crucial uses of AI and/or robotics, e.g., autonomous vehicles, the replacement of human jobs in industry, or smart applications and machines. It aims to give recommendations for the safe and beneficial use of AI and robotics. However, as the document states, there are no legal provisions that specifically apply to robotics or AI in IP law; existing legal regimes and doctrines can be readily applied to robotics, although some aspects appear to call for specific consideration, and it calls on the Commission to support a horizontal and technologically neutral approach to intellectual property applicable to the various sectors in which robotics could be employed. AI can generate content that is worth copyright protection, but the question arises: who is the author, and who is the owner of the copyright? The AI itself cannot be deemed the author, because that would mean it is legally equal to human persons. But there is the programmer who created the basic code of the AI, the undertaking that sells the AI as a product, and the user who gives the inputs to the AI in order to create something new. Alternatively, AI-generated content may be so far removed from humans that there is no human author, so such content belongs to the public domain. The same questions can be asked in connection with patents. The research aims to answer these questions within the current legal framework and tries to highlight future possibilities for adapting this framework to socio-economic needs. Here, proper license agreements along the multilevel chain from the programmer to the end-user become very important, because AI is intellectual property in itself which creates further intellectual property. This could collide with data protection and property rules as well. The problems are similar in the field of liability. We can use different existing forms of liability in cases when AI or AI-led robotics causes damage, but it is unclear whether the result complies with economic and developmental interests.
2277
77527
The Human Rights Code: Fundamental Rights as the Basis of Human-Robot Coexistence
Abstract:
Fundamental rights are the result of thousands of years of progress in legislation, adjudication, and legal practice. They serve as the framework of peaceful cohabitation of people, protecting the individual from any abuse by the government or violation by other people. Artificial intelligence, however, is a development of the very recent past and one of the most important prospects for the future. Artificial intelligence is now capable of communicating and performing actions in the same way as humans; such acts are sometimes impossible to distinguish from actions performed by flesh-and-blood people. In a world where human-robot interactions are more and more common, a new framework of peaceful cohabitation is to be found. Artificial intelligence, being able to take part in almost any kind of interaction where personal presence is not necessary without being recognized as a non-human actor, is now able to break the law, violate people’s rights, and disturb social peace in many other ways. Therefore, a code of peaceful coexistence is to be found or created. We should consider whether human rights can serve as the code of ethical and rightful conduct in the new era of artificial intelligence and human coexistence. In this paper, we examine the applicability of fundamental rights to human-robot interactions as well as to actions of artificial intelligence performed without any human interaction whatsoever. Robot ethics has been a topic of discussion and debate in philosophy, ethics, computing, legal sciences, and science fiction writing since long before the first functional artificial intelligence was introduced. Legal science and legislation have approached artificial intelligence from different angles, regulating different areas (e.g., data protection, telecommunications, copyright issues), but they are only chipping away at the mountain of legal issues concerning robotics. For a widely acceptable and permanent solution, a more general set of rules would be preferable to the detailed regulation of specific issues. We argue that human rights, as recognized worldwide, can be adapted to serve as a guideline and a common basis for the coexistence of robots and humans. This solution has many virtues: people do not need to adjust to a completely unknown set of standards, the system has proved able to withstand the trials of time, legislation is easier, and the actions of non-human entities are more easily adjudicated within their own framework. In this paper, we examine the system of fundamental rights (as defined in the most widely accepted source, the 1966 UN Convention on Human Rights), and try to adapt each individual right to the actions of artificial intelligence actors; in each case we examine the possible effects of such an approach on the legal system and on society, and finally we also examine its effect on the IT industry.
2276
77555
BB-Graph: A Branch and Bound Subgraph Isomorphism Algorithm for Efficiently Querying Big Graph Databases
Abstract:
With the emergence of the big data concept, the big graph database model has become very popular, since it provides strong modeling for complex applications and fast querying, especially for cases that would require costly join operations in RDBMSs. However, it is a big challenge to find all exact matches of a query graph in a big graph database, which is known as the subgraph isomorphism problem. Although a number of related studies exist in the literature, there is a need for a better algorithm that works efficiently for all types of queries, since the subgraph isomorphism problem is NP-hard. Current subgraph isomorphism approaches have been built on Ullmann's idea of pruning out irrelevant candidates. Nevertheless, for some graph databases and queries, the existing pruning techniques are not adequate to handle complex queries. Moreover, many of the existing algorithms need large indices that cause extra memory consumption. Motivated by these observations, we introduce a new subgraph isomorphism algorithm, namely BB-Graph, for querying big graph databases efficiently without requiring a large data structure to be stored in main memory. We test and compare our proposed BB-Graph algorithm with two popular existing approaches, GraphQL and the Cypher query language of Neo4j. Our experiments are conducted on a very big graph database application (Population Database) and the publicly available World Cup graph database application. We show that our algorithm performs better than those used for comparison in this study for most of the query types.
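A minimal sketch of backtracking subgraph isomorphism with label and degree pruning over adjacency-dict graphs, in the Ullmann tradition mentioned above; it is a generic baseline for illustration, not the BB-Graph algorithm itself.

def subgraph_matches(query, data, q_labels, d_labels):
    # Yield dicts mapping query vertices to data vertices (exact, edge-preserving matches).
    q_order = sorted(query, key=lambda v: -len(query[v]))        # most constrained vertex first

    def candidates(qv, mapping):
        for dv in data:
            if dv in mapping.values() or d_labels[dv] != q_labels[qv]:
                continue
            if len(data[dv]) < len(query[qv]):                   # degree-based pruning bound
                continue
            # Every already-mapped query neighbor must map to a data neighbor of dv.
            if all(mapping[qn] in data[dv] for qn in query[qv] if qn in mapping):
                yield dv

    def backtrack(i, mapping):
        if i == len(q_order):
            yield dict(mapping)
            return
        qv = q_order[i]
        for dv in candidates(qv, mapping):
            mapping[qv] = dv
            yield from backtrack(i + 1, mapping)
            del mapping[qv]

    yield from backtrack(0, {})

data = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
d_labels = {1: "A", 2: "B", 3: "A", 4: "B"}
query = {"x": {"y"}, "y": {"x"}}                                 # a single A-B edge pattern
q_labels = {"x": "A", "y": "B"}
print(list(subgraph_matches(query, data, q_labels, d_labels)))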
2275
75218
Key Frame Based Video Summarization via Dependency Optimization
Abstract:
With the rapid growth of digital video and data communications, video summarization, which provides a shorter version of a video for fast browsing and retrieval, has become necessary. Key frame extraction is one of the mechanisms for generating a video summary. In general, the extracted key frames should both represent the entire video content and contain minimum redundancy. However, most existing approaches select key frames heuristically; hence, the selected key frames may not be the most distinct frames and/or may not cover the entire content of the video. In this paper, we propose a video summarization method that provides well-founded objective functions for selecting key frames. In particular, we apply a statistical dependency measure called quadratic mutual information as our objective function for maximizing the coverage of the entire video content while minimizing the redundancy among the selected key frames. The proposed key frame extraction algorithm formulates key frame selection as an optimization problem. Through experiments, we demonstrate the success of the proposed video summarization approach, which produces video summaries with better coverage of the entire video content and less redundancy among key frames compared to state-of-the-art approaches.
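A minimal sketch of key frame selection as greedy optimization of a coverage-minus-redundancy objective over frame similarities. A Gaussian kernel on stand-in feature vectors replaces the quadratic mutual information measure used in the paper, so this only illustrates the optimization viewpoint.

import numpy as np

def frame_similarities(features, sigma=1.0):
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))                         # pairwise kernel similarities

def select_keyframes(features, k=5, lam=0.5):
    S = frame_similarities(features)
    selected = []
    for _ in range(k):
        best, best_gain = None, -np.inf
        for i in range(len(features)):
            if i in selected:
                continue
            coverage = S[i].mean()                                    # how well frame i represents all frames
            redundancy = max((S[i, j] for j in selected), default=0)  # overlap with already-chosen frames
            gain = coverage - lam * redundancy
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
    return sorted(selected)

rng = np.random.default_rng(0)
# Stand-in "video": 3 visually distinct segments of 40 frames each (e.g. color-histogram features).
features = np.concatenate([rng.normal(c, 0.1, (40, 16)) for c in (0.0, 1.0, 2.0)])
print(select_keyframes(features, k=3))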