Keynotes (co-located with FDSE 2018)
Keynotes will be delivered by world-class academics for both the ACOMP 2018 and the FDSE 2018 conferences.
Prof. Dirk Draheim
Tallinn University of Technology, Estonia
Dirk Draheim is a full professor of information society technologies at Tallinn University of Technology. He holds a Diploma in computer science from Technische Universität Berlin, a PhD from Freie Universität Berlin and a Habilitation from the University of Mannheim. From 1990 to 2006 he worked as an IT project manager, IT consultant and IT author in Berlin. In summer 2006 he was a Lecturer at the University of Auckland, and from 2006 to 2008 he was area manager for database systems at the Software Competence Center Hagenberg as well as an Adjunct Lecturer in information systems at the Johannes-Kepler-University Linz. From 2008 to 2016 he was head of the data center of the University of Innsbruck and, in parallel, from 2010 to 2016, an Adjunct Reader at the Faculty of Information Systems of the University of Mannheim. Dirk is a member of the ACM.
F.P. conditionalization (frequentist partial conditionalization) allows for combining partial knowledge in arbitrarily many dimensions and without any restrictions on events such as independence or partitioning. In this talk, we provide a primer to F.P. conditionalization and its most important results. As an example, we prove that Jeffrey conditionalization is an instance of F.P. conditionalization for the special case that the events form a partition. We also discuss the logic and data science perspectives on the matter.
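As background for the talk's example: given a partition {E_i} whose updated probabilities P_new(E_i) are prescribed, Jeffrey conditionalization updates the probability of any event A as

```latex
P_{\mathrm{new}}(A) \;=\; \sum_{i} P(A \mid E_i)\, P_{\mathrm{new}}(E_i)
```

Classical conditionalization on evidence E_k is recovered as the special case P_new(E_k) = 1.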
Prof. Artur Andrzejak
Heidelberg University, Germany
Artur Andrzejak received a PhD degree in computer science from ETH Zurich in 2000 and a habilitation degree from FU Berlin in 2009. He was a postdoctoral researcher at HP Labs Palo Alto from 2001 to 2002 and a researcher at ZIB Berlin from 2003 to 2009. He led the CoreGRID Institute on System Architecture (2004 to 2006) and acted as a Deputy Head of the Data Mining Department at I2R Singapore in 2010. Since 2010 he has been a professor at Ruprecht-Karls-University of Heidelberg, where he leads the Parallel and Distributed Systems group. His research interests include reliability of complex software systems, scalable data analysis, and cloud computing.
Non-trivial data analysis studies in science, engineering and business require programming of reusable and flexible "processing workflows". While a majority of the workflow operations are already provided by frameworks and libraries (e.g. Python's SciPy, Pandas, scikit-learn, R packages, Apache Spark, …), the challenge for the users is to identify them, and subsequently to adapt and "glue" them into a coherent workflow. The associated effort is considerable, especially if a data analyst has only limited knowledge of programming. This difficulty is even aggravated if scalable processing is needed, as in the case of massive data sets.
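A minimal sketch of the kind of "glue" code meant here, using pandas on invented data (the column names and cleaning steps are illustrative assumptions, not from the talk): individual operations exist in the library, but the analyst must still compose them into a coherent preparation workflow.

```python
import pandas as pd

# Synthetic raw data standing in for a real data set (invented for illustration).
raw = pd.DataFrame({
    "name": ["  Alice ", "BOB", "carol", None],
    "amount": ["12.5", "7", "not available", "3.25"],
})

# A small, reusable preparation workflow glued together from pandas operations:
# drop incomplete rows, normalize strings, coerce numeric columns.
prepared = (
    raw.dropna(subset=["name"])
       .assign(
           name=lambda df: df["name"].str.strip().str.lower(),
           amount=lambda df: pd.to_numeric(df["amount"], errors="coerce"),
       )
       .dropna(subset=["amount"])
       .reset_index(drop=True)
)

print(prepared)
```

Each step is a one-liner from the library; identifying the right operations (`dropna`, `to_numeric` with `errors="coerce"`, etc.) and chaining them correctly is exactly the effort the abstract describes.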
In the main part of this talk we discuss some promising emerging approaches to address this problem.
Among others, we take a look at novel tools like Google's Cloud Dataprep service. It combines a Domain-Specific Language, code recommendations, and immediate result previews to implement the so-called Predictive Interaction. With its help, end-users are able to implement relatively complex and scalable workflows for data preparation without deep knowledge of the framework's API. Another example is our extension of the popular tool OpenRefine, where we use Apache Spark as a new backend to run arbitrarily large data preprocessing jobs defined in a user-friendly way via GUI and simple expressions.
Program synthesis is a more disruptive approach. We describe in this talk some recent advances in this field targeted at automated generation of (short) scripts for data processing, derived only from input/output examples. However, the limitations of these technologies are still considerable, including the difficulty of integrating the synthesized code into mainstream languages/frameworks, and the rather cumbersome way of specifying goals.
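To make the idea of generating scripts from input/output examples concrete, here is a deliberately naive sketch: brute-force enumeration over a tiny invented DSL of string operations. Real synthesizers (e.g. the FlashFill line of work) use far richer languages and much smarter search; everything below is an illustrative assumption, not the talk's method.

```python
import itertools

# A tiny DSL of string operations (invented for illustration).
OPS = {
    "lower": str.lower,
    "upper": str.upper,
    "strip": str.strip,
    "first_word": lambda s: s.split()[0] if s.split() else s,
}

def synthesize(examples, max_depth=2):
    """Enumerate compositions of DSL ops until one is consistent
    with every given (input, output) example."""
    for depth in range(1, max_depth + 1):
        for names in itertools.product(OPS, repeat=depth):
            def program(s, names=names):
                for n in names:
                    s = OPS[n](s)
                return s
            if all(program(i) == o for i, o in examples):
                return names, program
    return None

# Two I/O examples are enough to pin down a two-step program here.
examples = [("  Hello World ", "hello"), (" Foo Bar", "foo")]
result = synthesize(examples)
print(result[0])
```

The synthesized program generalizes to unseen inputs, but the example also hints at the limitations named above: the output is a sequence of DSL op names, not code that drops cleanly into a mainstream framework.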
We conclude this talk by looking at a complementary problem of large-scale data analysis, namely the reduction of the computational cost of cloud-based data processing. We present an approach which trades the cost of computational resources for availability guarantees. Here a mixture of dedicated (and so highly available) hosts and non-dedicated (and so highly volatile) hosts is used to provision a large-scale processing service. The resulting monetary savings can be substantial given some tolerance for failures of participating resources.
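The cost/availability trade-off can be sketched with back-of-the-envelope arithmetic. All prices and failure rates below are invented for illustration; the talk's actual cost model is not reproduced here.

```python
# Assumed figures, invented for illustration.
DEDICATED_PRICE = 0.10     # $/host-hour for highly available hosts
VOLATILE_PRICE = 0.03      # $/host-hour for non-dedicated, volatile hosts
VOLATILE_LOSS_RATE = 0.05  # expected fraction of volatile hosts failing per hour

def hourly_cost_and_losses(n_hosts, volatile_fraction):
    """Cost of provisioning n_hosts of capacity from a dedicated/volatile mix,
    plus the expected number of host failures per hour to be tolerated."""
    n_volatile = int(n_hosts * volatile_fraction)
    n_dedicated = n_hosts - n_volatile
    cost = n_dedicated * DEDICATED_PRICE + n_volatile * VOLATILE_PRICE
    expected_losses = n_volatile * VOLATILE_LOSS_RATE
    return cost, expected_losses

baseline, _ = hourly_cost_and_losses(100, 0.0)   # all dedicated
mixed, losses = hourly_cost_and_losses(100, 0.7)  # 70% volatile hosts
print(baseline, mixed, losses)
```

Under these assumed prices the mixed fleet roughly halves the hourly cost, at the price of a few expected host failures per hour that the processing service must mask.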
Prof. Dinh Nho Hao
Institute of Mathematics, Vietnam
Dinh Nho Hào is affiliated with the Department of Differential Equations, Hanoi Institute of Mathematics, Vietnam Academy of Science and Technology. His main interests are partial differential equations, inverse and ill-posed problems for partial differential equations, optimal control, numerical analysis and image processing. Dinh Nho Hào obtained his Master's degree from Baku State University in 1983, his PhD degree from the Free University of Berlin in 1991 and his Habilitation from the University of Siegen in 1996. His work has been funded by DAAD, DFG (German Research Foundation), the Alexander von Humboldt Stiftung, CNR (Italy), PASS (France), the Royal Society, the Marie Curie Foundation and others, and he has held several visiting professorships. He serves on the editorial boards of Acta Mathematica Vietnamica, Applicable Analysis, Applied Numerical Mathematics, Inverse Problems in Science and Engineering, Journal of Inverse and Ill-Posed Problems, Journal of Nonlinear Evolution Equations and Applications, and the Vietnam Journal of Applied Mathematics.
In this tutorial talk we present a theoretical framework for supervised learning. We consider supervised learning as the problem of approximating a multivariate function from sparse data. This problem can be regarded as an inverse problem, which is ill-posed. We will show how to apply the theory of ill-posed problems to supervised learning, how to reduce the dimension of the problem, and how to derive error estimates.
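A standard remedy for such ill-posedness is Tikhonov regularization; in the finite-dimensional linear case this is ridge regression. A minimal NumPy sketch on synthetic data (the data and the choice of regularization parameter are illustrative assumptions, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse, noisy samples of an unknown multivariate linear function:
# 10 samples in 50 dimensions, so the problem is underdetermined (ill-posed).
X = rng.normal(size=(10, 50))
w_true = rng.normal(size=50)
y = X @ w_true + 0.01 * rng.normal(size=10)

# Tikhonov (ridge) solution:  argmin_w ||Xw - y||^2 + alpha * ||w||^2
alpha = 1.0
w = np.linalg.solve(X.T @ X + alpha * np.eye(50), X.T @ y)

# X.T @ X alone is singular (rank at most 10 < 50), so the unregularized
# normal equations have no unique solution; adding alpha * I restores one.
print(np.linalg.matrix_rank(X.T @ X))
```

The regularization term makes the system matrix positive definite, trading a small bias for a stable, unique solution.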
Prof. Tae M. Chung
Sungkyunkwan University, Korea
Prof. Chung has been a faculty member of the College of Software at Sungkyunkwan University (SKKU) in Korea since 1995. There, he is now the Dean of the College of Software and directs the Information Management Technology Laboratory. His research interests focus on Information Security, Security & Information Management, and Services in Next Generation Network environments. His research career started as a staff scientist in the network technology department at Bolt Beranek & Newman Labs., USA.
Are you happy with the 4th industrial revolution? It has brought so much convenience and so many economic fruits to society. If you say "yes", we may ask you another question: "Should you worry about side effects from the revolutionary changes?" Of course, one of the most critical side effects is "security & safety", as all agree, because security is the key factor determining the sustainable growth of a smart society. Then one more question is added: "How will you protect yourself and your society from hacking incidents?"
In this presentation, the drastic changes made by 4th industrial revolution technologies such as IoT (Internet of Things), big data, AI (artificial intelligence), cloud, and mobile communications are reviewed together with the threats and damages caused by hacking incidents. In particular, recent hacking techniques will be emphasized in more detail. Then, modern security technologies and measures to protect the smart society from hackers will be discussed. A few modern and innovative ideas for security-related research and development will also be touched upon in the spirit of sharing thoughts.
Prof. Ing-Chao Lin
National Cheng Kung University, Taiwan
Bias temperature instability (BTI), which causes a shift in a transistor's threshold voltage and decreases circuit switching speed, has become a major reliability concern for integrated circuits (ICs). In this talk, I will introduce the cause of the BTI effect and techniques to mitigate the device degradation caused by BTI, from the circuit level to the system level. I will provide design guidelines for dealing with device degradation. Future trends in IC reliability improvement techniques will be introduced as well.
Prof. Michael Felderer
University of Innsbruck, Austria
Michael Felderer is a professor at the Department of Computer Science at the University of Innsbruck, Austria and a senior researcher at the Department of Software Engineering at the Blekinge Institute of Technology, Sweden. In 2014 he was a guest researcher at Lund University, Sweden, and in 2015 a guest lecturer at the University of Stuttgart, Germany. His fields of expertise and interest in software and security engineering include software quality, testing, software and security processes, risk management, software analytics and measurement, requirements engineering, model-based software engineering and empirical research methodology in software and security engineering. Michael Felderer holds a habilitation degree from the University of Innsbruck, has co-authored more than 130 publications and has received 9 best paper awards. His research has a strong empirical focus, also using methods of data science, and is directed towards the development and evaluation of efficient and effective methods to improve the quality and value of industrial software systems and processes in close collaboration with companies. Michael Felderer himself has more than 10 years of industrial experience as a senior executive consultant, project manager and software engineer. He is an internationally recognized member of the software and security engineering research community and supports it as an editorial board member, organizer of conferences (e.g. General Chair of PROFES 2017) and regular PC member of premier conferences. For more information visit his website at mfelderer.at.
The concept of risk as a measure for the potential of gaining or losing something of value has successfully been applied in software quality engineering for years, e.g., for risk-based test case prioritization, and in security engineering, e.g., for security requirements elicitation. In practice, both in software quality engineering and in security engineering, risks are typically assessed manually, which tends to be subjective, non-deterministic, error-prone and time-consuming. This often leads to the situation that risks are not explicitly assessed at all, and it further prevents the high potential of assessed risks to support decisions from being exploited. However, in modern data-intensive environments, e.g., open online environments, continuous software development or IoT, the online, system or development environments continuously deliver data, which provides the possibility to now automatically assess and utilize software and security risks. In this talk we first discuss the concept of risk in software quality and security engineering. Then, we provide two current examples from software quality engineering and security engineering where data-driven risk assessment is a key success factor, i.e., risk-based continuous software quality engineering in continuous software development, and risk-based security data extraction and processing in the open online web.
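To illustrate the risk-based test case prioritization mentioned above, here is a minimal sketch using the common risk = probability × impact scoring. The test cases and all numbers are invented for illustration; in the data-driven setting the talk describes, these inputs would be estimated automatically from delivered data rather than assessed manually.

```python
# Hypothetical test cases with assessed failure probability and impact.
test_cases = [
    {"name": "login",   "failure_probability": 0.4, "impact": 9},
    {"name": "search",  "failure_probability": 0.2, "impact": 4},
    {"name": "payment", "failure_probability": 0.3, "impact": 10},
    {"name": "footer",  "failure_probability": 0.6, "impact": 1},
]

# Risk score: probability of failure times impact of that failure.
for tc in test_cases:
    tc["risk"] = tc["failure_probability"] * tc["impact"]

# Run the riskiest test cases first.
prioritized = sorted(test_cases, key=lambda tc: tc["risk"], reverse=True)
print([tc["name"] for tc in prioritized])
```

Note how the ordering differs from a naive impact-only ranking: the frequently failing but low-impact "footer" test still ends up last, while the moderately failing, high-impact "payment" test moves near the front.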
Prof. Kazuo Sakiyama
The University of Electro-Communications, Japan
Dr. Kazuo Sakiyama is a professor at The University of Electro-Communications, Tokyo, Japan. He has more than 20 years of experience in digital circuit design and test, especially focusing on security applications.
Hardware devices, as roots of trust in the IoT era, are under threat from new physical attacks that appear one after another. For instance, fault injection attacks are known to be among the most serious physical attacks against cryptographic ICs (integrated circuits). In this talk, a basic example of such physical attacks is first introduced, and then a quantitative analysis of the information leakage caused by physical attacks is discussed. Finally, the development of resilient IoT systems in collaboration with sensors is explored.