Natural language processing algorithms applied to three million materials science abstracts uncover relationships between words, material compositions and properties, and predict potential new thermoelectric materials.
The overwhelming majority of scientific knowledge is published as text, which is difficult to analyse by either traditional statistical analysis or modern machine learning methods. By contrast, the main source of machine-interpretable data for the materials research community has come from structured property databases, which encompass only a small fraction of the knowledge present in the research literature. Beyond property values, publications contain valuable knowledge regarding the connections and relationships between data items as interpreted by the authors. To improve the identification and use of this knowledge, several studies have focused on the retrieval of information from scientific literature using supervised natural language processing, which requires large hand-labelled datasets for training. Here we show that materials science knowledge present in the published literature can be efficiently encoded as information-dense word embeddings (vector representations of words) without human labelling or supervision. Without any explicit insertion of chemical knowledge, these embeddings capture complex materials science concepts such as the underlying structure of the periodic table and structure–property relationships in materials. Furthermore, we demonstrate that an unsupervised method can recommend materials for functional applications several years before their discovery. This suggests that latent knowledge regarding future discoveries is to a large extent embedded in past publications. Our findings highlight the possibility of extracting knowledge and relationships from the massive body of scientific literature in a collective manner, and point towards a generalized approach to the mining of scientific literature.
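For a concrete sense of the technique, the sketch below trains a skip-gram word2vec model on a toy stand-in corpus and asks which words lie closest to "thermoelectric" in the embedding space; the corpus, hyperparameters and gensim dependency are illustrative assumptions, not the paper's actual setup (which used roughly three million abstracts).

```python
# Minimal sketch (assumes gensim is installed): a skip-gram model trained on
# tokenized abstracts, then queried for nearest neighbours in embedding space.
from gensim.models import Word2Vec

# Toy stand-in for the ~3 million tokenized abstracts used in the paper.
corpus = [
    ["Bi2Te3", "is", "a", "promising", "thermoelectric", "material"],
    ["PbTe", "shows", "a", "high", "thermoelectric", "figure", "of", "merit"],
    ["GaN", "is", "a", "wide", "band", "gap", "semiconductor"],
    ["ZnO", "is", "a", "wide", "band", "gap", "semiconductor"],
] * 50  # repeat so the tiny vocabulary gets enough training examples

model = Word2Vec(sentences=corpus, vector_size=32, window=5,
                 min_count=1, sg=1, epochs=20, seed=1)

# Words whose embeddings lie closest to "thermoelectric"; with the real corpus
# this kind of query underlies the ranking of candidate materials.
print(model.wv.most_similar(positive=["thermoelectric"], topn=5))
```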
We present HotStuff, a leader-based Byzantine fault-tolerant replication protocol for the partially synchronous model.
Once network communication becomes synchronous, HotStuff enables a correct leader to drive the protocol to consensus at the pace of actual (rather than maximum) network delay, a property called responsiveness, and with communication complexity that is linear in the number of replicas. To our knowledge, HotStuff is the first partially synchronous BFT replication protocol exhibiting these combined properties. HotStuff is built around a novel framework that forms a bridge between classical BFT foundations and blockchains. It allows the expression of other known protocols (DLS, PBFT, Tendermint, Casper), and ours, in a common framework.
Our deployment of HotStuff over a network with over 100 replicas achieves throughput and latency comparable to those of BFT-SMaRt, while enjoying a linear communication footprint during leader failover (vs. quadratic with BFT-SMaRt).
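The linear communication pattern rests on replicas sending their votes to the leader only, with the leader aggregating a quorum of 2f+1 votes (out of n = 3f+1 replicas) into a single certificate that it broadcasts. The sketch below illustrates only that vote-counting step; the types and field names are assumptions for illustration, not HotStuff's actual message formats.

```python
# Quorum-certificate counting sketch: replicas vote to the leader, and the
# leader forms one certificate once 2f+1 of n = 3f+1 replicas agree.
from dataclasses import dataclass

@dataclass(frozen=True)
class Vote:
    replica_id: int
    view: int
    block_hash: str

def form_qc(votes: list[Vote], n: int):
    """Return a quorum certificate if >= 2f+1 replicas voted for the same
    block in the same view, else None."""
    f = (n - 1) // 3
    threshold = 2 * f + 1
    by_target: dict[tuple[int, str], set[int]] = {}
    for v in votes:
        by_target.setdefault((v.view, v.block_hash), set()).add(v.replica_id)
    for (view, block_hash), voters in by_target.items():
        if len(voters) >= threshold:
            return {"view": view, "block": block_hash, "signers": sorted(voters)}
    return None

votes = [Vote(i, view=7, block_hash="b42") for i in range(3)]
print(form_qc(votes, n=4))  # quorum of 3 out of n = 4 (f = 1) -> a QC
```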
Daniel J. Bernstein, Bo-Yin Yang. "Fast constant-time gcd computation and modular inversion."
Best practices and tips & tricks for writing scientific papers in LaTeX, with figures generated in Python or Matlab.
Experienced programmers often need to use online resources to pick up new programming languages. However, we lack a comprehensive understanding of which resources programmers find most valuable and utilize most often. In this paper, we study how experienced programmers learn Rust, a systems programming language with comprehensive documentation, extensive example code, an active online community, and descriptive compiler errors. We develop a task that requires understanding the Rust-specific language concepts of mutability and ownership, in addition to learning Rust syntax.
Our results show that users spend 42% of their online time viewing example code and that programmers appreciate the Rust Enhanced package’s in-line compiler errors, choosing to refresh them every 30.6 seconds after first discovering this feature. We did not find any significant correlations between the resources used and the total task time or the learning outcomes. We discuss these results in light of design implications for language developers seeking to create resources that encourage usage and adoption by experienced programmers.
This paper investigates heat pump systems in smart grids, focusing on the fields of application and control approaches that have emerged in the academic literature. Based on a review of published literature, technical aspects of heat pump flexibility, fields of application and control approaches are structured and discussed. Three main categories of applications using heat pumps in a smart grid context have been identified: first, stable and economic operation of power grids; second, the integration of renewable energy sources; and third, operation under variable electricity prices. In all three fields, heat pumps, when controlled in an appropriate manner, can help ease the transition to a decentralized energy system with a higher share of prosumers and renewable energy sources. Predictive controls are used successfully in the majority of studies, often under idealized assumptions. Suggested topics for future research include the transfer of control approaches from simulation to the field, detailed techno-economic analysis of heat pump systems under smart grid operation, and the design of heat pump systems for increased flexibility.
IoT is considered one of the key enabling technologies of the fourth industrial revolution, known as Industry 4.0. In this paper, we consider the mechatronic component as the lowest level in the system composition hierarchy: it tightly integrates mechanics with the electronics and software required to turn the mechanics into an intelligent (smart) object offering well-defined services to its environment. For this mechatronic component to be integrated into an IoT-based industrial automation environment, a software layer is required on top of it to convert its conventional interface into an IoT-compliant one. This layer, which we call the IoTwrapper, transforms the conventional mechatronic component into an Industrial Automation Thing (IAT). The IAT is the key element of an IoT model developed in this work specifically for the manufacturing domain. The model is compared to existing IoT models and its main differences are discussed. A model-to-model transformer is presented that automatically transforms the legacy mechatronic component into an IAT ready to be integrated into the IoT-based industrial automation environment. The UML4IoT profile is used in the form of a Domain Specific Modeling Language to automate this transformation. A prototype implementation of an Industrial Automation Thing using C and the Contiki operating system demonstrates the effectiveness of the proposed approach.
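The prototype itself is written in C on Contiki, but the wrapping idea is language-agnostic: a thin layer maps the component's conventional method calls onto named resources that can be read and written like any other Thing. The sketch below, with a hypothetical gripper component, only illustrates that adapter pattern and is not the paper's UML4IoT-generated code.

```python
# Illustrative adapter sketch (not the paper's C/Contiki prototype): expose a
# legacy mechatronic component's conventional interface as named resources.
class LegacyGripper:
    """Stand-in for a conventional mechatronic component interface."""
    def read_position(self) -> float:
        return 12.5                      # millimetres, hypothetical reading
    def move_to(self, position_mm: float) -> None:
        print(f"moving gripper to {position_mm} mm")

class IoTWrapper:
    """Maps resource names to the component's conventional operations so the
    component can be addressed like any other 'Thing'."""
    def __init__(self, component: LegacyGripper):
        self._reads = {"gripper/position": component.read_position}
        self._writes = {"gripper/target": component.move_to}
    def get(self, resource: str):
        return self._reads[resource]()
    def put(self, resource: str, value) -> None:
        self._writes[resource](value)

thing = IoTWrapper(LegacyGripper())
print(thing.get("gripper/position"))
thing.put("gripper/target", 20.0)
```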
In this paper, we consider multi-pursuer single-superior-evader pursuit-evasion differential games in which the evader's speed is similar to or higher than the speed of each pursuer. A new fuzzy reinforcement learning algorithm is proposed for these games. The algorithm uses the well-known Apollonius circle mechanism to define the capture region of a learning pursuer based on its own location and the location of the superior evader. The Apollonius circle is combined with a developed formation control approach in the tuning mechanism of the fuzzy logic controller (FLC) of each learning pursuer, so that one or more of the learning pursuers can capture the superior evader. The formation control mechanism guarantees that the pursuers are distributed around the superior evader in order to avoid collisions between pursuers, and it also makes the Apollonius circles of adjacent pursuers intersect, or at least be tangent to each other, so that capture of the superior evader can occur. The algorithm is decentralized, as no communication among the pursuers is required; the only information it needs is the position and speed of the superior evader. The proposed algorithm is applied to learn different multi-pursuer single-superior-evader pursuit-evasion differential games, and the simulation results show its effectiveness.
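For reference, the Apollonius circle for an evader at E with speed v_e and a pursuer at P with speed v_p is the locus of points X with |X - E| / |X - P| = v_e / v_p; when the evader is faster, the circle encloses the pursuer and bounds the region in which the pursuer can intercept. A minimal sketch of that construction follows (variable names are illustrative, and the equal-speed case is excluded because the locus degenerates to a line):

```python
import numpy as np

def apollonius_circle(evader, pursuer, v_evader, v_pursuer):
    """Center and radius of the locus |X - E| / |X - P| = v_e / v_p."""
    E, P = np.asarray(evader, float), np.asarray(pursuer, float)
    k = v_evader / v_pursuer
    if np.isclose(k, 1.0):
        raise ValueError("equal speeds: the locus is a line, not a circle")
    center = (E - k**2 * P) / (1 - k**2)
    radius = k * np.linalg.norm(E - P) / abs(1 - k**2)
    return center, radius

# Superior evader (twice as fast as the pursuer): the circle encloses the
# pursuer at (3, 0); for this geometry the centre is (4, 0) and the radius 2.
c, r = apollonius_circle(evader=(0.0, 0.0), pursuer=(3.0, 0.0),
                         v_evader=2.0, v_pursuer=1.0)
print(c, r)
```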
A Learning Invader for the “Guarding a Territory” Game
A Reinforcement Learning Problem
This paper explores the use of a learning algorithm in the “guarding a territory” game. The game takes place in continuous time: a single learning invader tries to get as close as possible to a territory before being captured by a guard. Previous research approached the problem by letting only the guard learn. We examine the other side of the game, in which only the invader learns; moreover, in our case the guard is superior (faster) than the invader. We also consider models with non-holonomic constraints. A control system is designed and optimized for the invader to play the game and reach the Nash equilibrium. The paper shows how the learning system is able to adapt itself. The system’s performance is evaluated in different simulations and compared to the Nash equilibrium, and experiments with real robots verified our simulations in a real-life environment. Our results show that the learning invader behaved rationally in different circumstances.
In 2005, two scientists, David Mazières and Eddie Kohler, wrote a paper titled Get me off Your Fucking Mailing List and submitted it to WMSCI 2005 (the 9th World Multiconference on Systemics, Cybernetics and Informatics), a conference then notorious for its spamming and lax standards for paper acceptance, in protest of those practices. The paper consisted essentially of the sentence "Get me off your fucking mailing list" repeated many times.
The deployment of solar-based electricity generation, especially in the form of photovoltaics (PVs), has increased markedly in recent years due to a wide range of factors including concerns over greenhouse gas emissions, supportive government policies, and lower equipment costs. Still, a number of challenges remain for reliable, efficient integration of solar energy. Chief among them will be developing new tools and practices that manage the variability and uncertainty of solar power.
Network protocol design and evaluation requires either a full implementation of the protocol under consideration and evaluation in a real network, or a model-based simulation. There is also a middle approach in which both simulation and emulation are used to evaluate a protocol. This article presents the Partov engine, which provides simulation and emulation capabilities simultaneously. Partov benefits from a layered, platform-independent architecture. As a pure simulator, it provides an extensible plugin-based platform that can be configured to perform both real-time and non-real-time discrete-event simulations. It also acts as an emulator, making interaction with real networks possible in real time. Additionally, a declarative XML-based language acts as glue between the simulation and emulation modules and plugins. Partov supports dynamic network modelling and simulation based on continuous-time Markov chains. It is compared with other well-known tools such as NS-3 and real processes such as Hping3, and it is shown that Partov incurs less overhead and is much more scalable than NS-3.
In this paper we address a seemingly simple question: Is there a universal packet scheduling algorithm? More precisely, we analyze (both theoretically and empirically) whether there is a single packet scheduling algorithm that, at a network-wide level, can match the results of any given scheduling algorithm. We find that in general the answer is “no”. However, we show theoretically that the classical Least Slack Time First (LSTF) scheduling algorithm comes closest to being universal and demonstrate empirically that LSTF can closely, though not perfectly, replay a wide range of scheduling algorithms in realistic network settings. We then evaluate whether LSTF can be used in practice to meet various network-wide objectives by looking at three popular performance metrics (mean FCT, tail packet delays, and fairness); we find that LSTF performs comparably to the state of the art for each of them.
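At a single queue, LSTF simply serves the packet with the smallest remaining slack. The toy sketch below shows that priority rule in isolation; how slack is initialized and updated network-wide (the crux of the replay results above) is not modelled here, and the class and field names are illustrative.

```python
import heapq

class LSTFQueue:
    """Serve the packet with the least remaining slack first."""
    def __init__(self):
        self._heap = []   # entries: (remaining_slack, arrival_order, packet_id)
        self._order = 0   # tie-breaker so equal-slack packets stay FIFO
    def enqueue(self, packet_id, slack):
        heapq.heappush(self._heap, (slack, self._order, packet_id))
        self._order += 1
    def dequeue(self):
        slack, _, packet_id = heapq.heappop(self._heap)
        return packet_id, slack

q = LSTFQueue()
q.enqueue("p1", slack=5.0)
q.enqueue("p2", slack=1.2)   # tightest remaining budget, transmitted first
q.enqueue("p3", slack=3.4)
for _ in range(3):
    print(q.dequeue())       # -> p2, then p3, then p1
```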
Demand response (DR) in the residential sector is considered to play a key role in the smart grid framework because of the sector's disproportionate share of peak energy use and the massive integration of distributed local renewable energy generation in conjunction with battery storage devices. This paper first gives a brief overview of residential demand response and its optimization models at the single-home and multi-home level. It then describes state-of-the-art optimization methods addressing different aspects of residential DR algorithms: scheduling of local renewable generation dispatch, battery storage utilization and appliance consumption while considering both cost and comfort; modeling of parameter uncertainty; and physics-based dynamic modeling of the power consumption of various appliances at the single-home and aggregated homes/community level. The key issues, together with the challenges and opportunities for residential demand response implementation and further research directions, are highlighted.
This paper presents an experimental evaluation of different line extraction algorithms on 2D laser scans of indoor environments. Six popular algorithms from mobile robotics and computer vision are selected and tested. Experiments are performed on 100 real scans collected in an office environment with a map size of 80 m × 50 m. Several comparison criteria are proposed and discussed to highlight the advantages and drawbacks of each algorithm, including speed, complexity, correctness and precision. The results of the algorithms are compared against ground truth using standard statistical methods.
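As a flavour of the kind of algorithm evaluated (the abstract does not name the six, so the choice here is an assumption), split-and-merge is a classic line extractor for laser scans: recursively split an ordered point sequence at the point farthest from the chord between its endpoints until every piece fits within a distance threshold. A minimal sketch, with the merge step omitted:

```python
import numpy as np

def point_line_distance(points, a, b):
    """Perpendicular distance from each point to the line through a and b."""
    ab = b - a
    num = np.abs(ab[0] * (points[:, 1] - a[1]) - ab[1] * (points[:, 0] - a[0]))
    return num / np.linalg.norm(ab)

def split(points, threshold):
    """Recursively split an ordered scan into line segments (merge omitted)."""
    a, b = points[0], points[-1]
    d = point_line_distance(points, a, b)
    i = int(np.argmax(d))
    if d[i] <= threshold or len(points) <= 2:
        return [(tuple(a), tuple(b))]
    return split(points[:i + 1], threshold) + split(points[i:], threshold)

# Hypothetical ordered scan: a flat stretch followed by a rising wall.
scan = np.array([[0, 0], [1, 0.02], [2, 0.01], [2.5, 0.5], [3, 1.0], [3.5, 1.5]])
print(split(scan, threshold=0.05))   # two segments, one per structure
```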
by Truong X. Nghiem, Rahul Mangharam
Peak power consumption is a universal problem across energy control systems in electrical grids, buildings, and industrial automation, where the uncoordinated operation of multiple controllers results in temporally correlated electricity demand surges (or peaks). While there exist several approaches to balance power consumption by load shifting and load shedding, they operate on coarse-grained time scales and do not help to de-correlate energy sinks. The Energy System Scheduling Problem is particularly hard due to its binary control variables; its complexity grows exponentially with the scale of the system, making it intractable for systems with more than a few variables.
We developed a scalable approach for fine-grained scheduling of energy control systems that combines techniques from control theory and computer science in a novel way. The original system with binary control variables is approximated by an averaged system whose inputs are the utilization values of the binary inputs within a given period. The error between the two systems can be bounded, which allows us to derive a safety constraint for the averaged system such that the original system's safety is guaranteed. To further reduce the complexity of the scheduling problem, we abstract the averaged system by a simple single-state, single-input dynamical system whose control input is the upper bound of the total demand of the system. This model abstraction is achieved by extending the concept of simulation relations between transition systems to allow for input constraints between the systems. We developed conditions to test for simulation relations as well as algorithms to compute such a model abstraction. As a consequence, we only need to solve a small linear program to compute an optimal bound on the total demand. The total demand is then broken down, by solving a linear program much smaller than the original one, into individual utilization values for the subsystems, whose actual schedule is then obtained by a low-level scheduling algorithm. Numerical simulations in Matlab show the effectiveness and scalability of our approach.
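The final break-down step can be pictured as a small linear program over per-subsystem utilizations. The toy sketch below, using scipy, allocates duty cycles under a fixed total-demand bound; the ratings, weights, bound and objective are made up for illustration, and the safety constraint derived from the averaged system is omitted.

```python
# Toy "break the demand bound down into per-subsystem utilizations" LP.
import numpy as np
from scipy.optimize import linprog

power = np.array([4.0, 6.0, 3.0])      # kW rating of each binary subsystem
weight = np.array([1.0, 2.0, 1.5])     # relative value of running each one
demand_bound = 8.0                     # upper bound on total demand (kW)

# maximize weight @ u  <=>  minimize -weight @ u
res = linprog(c=-weight,
              A_ub=[power],            # power @ u <= demand_bound
              b_ub=[demand_bound],
              bounds=[(0.0, 1.0)] * 3) # utilizations are duty cycles in [0, 1]

print(res.x)   # utilization (duty cycle) assigned to each subsystem
```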
The need for fast-response demand side participation (DSP) has never been greater due to increased wind power penetration. White goods suppliers are currently developing a ‘smart’ chip for a range of domestic appliances (e.g. refrigeration units, tumble dryers and storage heaters) to support the home as a DSP unit in future power systems. This paper presents an aggregated, population-based model of a single-compressor fridge-freezer. Two scenarios (varying energy efficiency class and size) for valley filling and peak shaving are examined to quantify and value DSP savings in 2020. The analysis shows that potential peak reductions of 40 MW to 55 MW are achievable in the Single Electricity Market of Ireland (the test system), together with valley demand increases of up to 30 MW. The study also shows the importance of the control strategy start time and of staggering the devices to obtain the desired filling or shaving effect.
Plug-loads are often neglected in commercial demand response (DR) despite being a major contributor to building energy consumption. Improvements in technology such as smart power strips are prompting the incorporation of plug-loads as a DR resource alongside building HVAC and lighting. Office-scale battery storage (OSBS) systems are also candidates as a DR resource due to their ability to run on battery power. In this work, we present a model predictive control (MPC) framework for optimal load-shedding of plug-loads and OSBS. We begin with a discussion of the context of this work and present two models of OSBS systems. A model predictive controller for OSBS and plug-load load-shed scheduling is then presented. We discuss casting the MPC problem as a dynamic program and give an algorithm to solve it. Simulation results show the efficacy and utility of dynamic programming and quantify the performance of OSBS systems.
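As a rough picture of the dynamic-programming view, the sketch below runs backward induction over a discretized battery state of charge, choosing at each step whether a load draws from the grid or from the OSBS battery so as to minimize grid energy during a DR event. All quantities and the cost structure are illustrative assumptions, not the paper's actual formulation.

```python
T = 4                            # DR event length in time steps
soc_levels = range(0, 5)         # discrete battery state-of-charge levels
load = 1                         # energy the plug-loads need each step

def dp_min_grid_energy():
    # Backward induction: cost_to_go[soc] = minimal grid energy from here on.
    cost_to_go = {soc: 0.0 for soc in soc_levels}
    policy = []
    for _ in range(T):
        new_cost, step_policy = {}, {}
        for soc in soc_levels:
            options = {"grid": load + cost_to_go[soc]}      # draw from the grid
            if soc >= load:                                 # or discharge the battery
                options["battery"] = cost_to_go[soc - load]
            best = min(options, key=options.get)
            new_cost[soc], step_policy[soc] = options[best], best
        cost_to_go = new_cost
        policy.append(step_policy)
    policy.reverse()
    return cost_to_go, policy

cost, policy = dp_min_grid_energy()
print(cost[3])                   # grid energy needed when starting the event at SoC = 3
print([p[3] for p in policy])    # action the policy picks at SoC = 3 at each step
```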
The performance, reliability, cost, size and energy usage of computing systems can be improved by one or more orders of magnitude by the systematic use of modern control and optimization methods. Computing systems rely on the use of feedback algorithms to schedule tasks, data and resources, but the models that are used to design these algorithms are validated using open-loop metrics. By using closed-loop metrics instead, such as the gap metric developed in the control community, it should be possible to develop improved scheduling algorithms and computing systems that have not been over-engineered. Furthermore, scheduling problems are most naturally formulated as constraint satisfaction or mathematical optimization problems, but these are seldom implemented using state-of-the-art numerical methods, nor do they explicitly take into account the fact that the scheduling problem itself takes time to solve. This paper makes the case that recent results in real-time model predictive control, where optimization problems are solved in order to control a process that evolves in time, are likely to form the basis of scheduling algorithms of the future. We therefore outline some of the research problems and opportunities that could arise by explicitly considering feedback and time when designing optimal scheduling algorithms for computing systems.