Artificial intelligence has made tremendous advances since its inception about seventy years ago. Self-driving cars, programs that beat experts at complex games, and smart robots capable of assisting people who need care are just a few of the successful examples of machine intelligence. This kind of progress might entice us to envision a near-future society populated by autonomous robots capable of performing the same tasks humans do. This prospect seems limited only by the power and complexity of current computational devices, which are improving fast. However, there are several significant obstacles on this path. General intelligence involves situational reasoning, taking perspectives, choosing goals, and an ability to deal with ambiguous information. We observe that all of these characteristics are connected to the ability to identify and exploit new affordances: opportunities (or impediments) on an agent's path to achieving its goals. A general example of an affordance is the use of an object in the hands of an agent. We show that it is impossible to predefine a complete list of such uses; therefore, they cannot be treated algorithmically. This means that "AI agents" and organisms differ in their ability to leverage new affordances: only organisms can do this. It follows that true AGI is not achievable within the current algorithmic frame of AI research. This also has important consequences for the theory of evolution: we argue that organismic agency is strictly required for truly open-ended evolution through radical emergence. We discuss the diverse ramifications of this argument, not only for AI research and evolution, but also for the philosophy of science.
Addressing the need for novel insect observation and control tools, the Photonic Fence detects and tracks mosquitoes and other flying insects and can apply lethal doses of laser light to them. Previously, we determined lethal exposure levels for a variety of lasers and pulse conditions on anesthetized Anopheles stephensi mosquitoes. In this work, similar studies were performed while the subjects were flying freely within transparent cages two meters from the optical system; a proof-of-principle demonstration of a 30 m system was also performed. Dose–response curves of mortality, measured as a function of beam diameter, pulse width, and power at visible and near-infrared wavelengths, showed that visible wavelengths required significantly lower laser exposure than near-infrared wavelengths to disable subjects, though near-infrared sources remain attractive given their cost and retina safety. The flight behavior of the subjects and the performance of the tracking system had no impact on mortality outcomes for pulse durations up to 25 ms, which appears to be the ideal duration to minimize the required laser power. The results of this study affirm the practicality of using optical approaches to protect people and crops from pestilent flying insects.
The modern project of creating human-like artificial intelligence (AI) started after World War II, when it was discovered that electronic computers are not just number-crunching machines, but can also manipulate symbols. It is possible to pursue this goal without assuming that machine intelligence is identical to human intelligence. This is known as weak AI. However, many AI researchers have pursued the aim of developing artificial intelligence that is in principle identical to human intelligence, called strong AI. Weak AI is less ambitious than strong AI, and therefore less controversial. However, there are important controversies related to weak AI as well. This paper focuses on the distinction between artificial general intelligence (AGI) and artificial narrow intelligence (ANI). Although AGI may be classified as weak AI, it is close to strong AI because one of the chief characteristics of human intelligence is its generality. Although AGI is less ambitious than strong AI, there were critics almost from the very beginning. One of the leading critics was the philosopher Hubert Dreyfus, who argued that computers, which have no body, no childhood, and no cultural practice, could not acquire intelligence at all. One of Dreyfus' main arguments was that human knowledge is partly tacit, and therefore cannot be articulated and incorporated in a computer program. However, today one might argue that new approaches to artificial intelligence research have made his arguments obsolete. Deep learning and Big Data are among the latest approaches, and advocates argue that they will be able to realize AGI. A closer look reveals that although the development of artificial intelligence for specific purposes (ANI) has been impressive, we have not come much closer to developing artificial general intelligence (AGI). The article further argues that this is in principle impossible, and it revives Hubert Dreyfus' argument that computers are not in the world.
Going without sleep for too long kills animals, but scientists haven't known why. Newly published work suggests that the answer lies in an unexpected part of the body.
A transcompiler, also known as a source-to-source translator, is a system that converts source code from one high-level programming language (such as C++ or Python) to another. Transcompilers are primarily used for interoperability, and to port codebases written in an obsolete or deprecated language (e.g. COBOL, Python 2) to a modern one. They typically rely on handcrafted rewrite rules, applied to the source code abstract syntax tree. Unfortunately, the resulting translations often lack readability, fail to respect the target language conventions, and require manual modifications in order to work properly. The overall translation process is time-consuming and requires expertise in both the source and target languages, making code-translation projects expensive.
Although neural models significantly outperform their rule-based counterparts in the context of natural language translation, their applications to transcompilation have been limited due to the scarcity of parallel data in this domain. In this paper, we propose to leverage recent approaches in unsupervised machine translation to train a fully unsupervised neural transcompiler. We train our model on source code from open source GitHub projects, and show that it can translate functions between C++, Java, and Python with high accuracy.
Our method relies exclusively on monolingual source code, requires no expertise in the source or target languages, and can easily be generalized to other programming languages. We also build and release a test set composed of 852 parallel functions, along with unit tests to check the correctness of translations. We show that our model outperforms rule-based commercial baselines by a significant margin.
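To make the evaluation setup concrete, here is a hypothetical example of the kind of parallel function pair and unit test the released test set contains; the function, its C++ source, and the tests below are our own illustrative stand-ins, not items from the actual set.

```python
# C++ source (shown as a comment) and its Python translation; a translation
# is judged correct if it passes the unit tests for the reference function.
#
#   int sum_of_digits(int n) {
#       int s = 0;
#       n = abs(n);
#       while (n > 0) { s += n % 10; n /= 10; }
#       return s;
#   }

def sum_of_digits(n: int) -> int:
    """Python translation of the C++ function above."""
    s = 0
    n = abs(n)
    while n > 0:
        s += n % 10
        n //= 10
    return s

# Unit tests of the kind shipped with the test set: the translated function
# must reproduce the reference outputs.
assert sum_of_digits(1234) == 10
assert sum_of_digits(-907) == 16
assert sum_of_digits(0) == 0
```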
We show that for thousands of years, humans have concentrated in a surprisingly narrow subset of Earth’s available climates, characterized by mean annual temperatures around ∼13 °C. This distribution likely reflects a human temperature niche related to fundamental constraints. We demonstrate that depending on scenarios of population growth and warming, over the coming 50 y, 1 to 3 billion people are projected to be left outside the climate conditions that have served humanity well over the past 6,000 y. Absent climate mitigation or migration, a substantial part of humanity will be exposed to mean annual temperatures warmer than nearly anywhere today.
All species have an environmental niche, and despite technological advances, humans are unlikely to be an exception. Here, we demonstrate that for millennia, human populations have resided in the same narrow part of the climatic envelope available on the globe, characterized by a major mode around ∼11 °C to 15 °C mean annual temperature (MAT). Supporting the fundamental nature of this temperature niche, current production of crops and livestock is largely limited to the same conditions, and the same optimum has been found for agricultural and nonagricultural economic output of countries through analyses of year-to-year variation. We show that in a business-as-usual climate change scenario, the geographical position of this temperature niche is projected to shift more over the coming 50 y than it has moved since 6000 BP. Populations will not simply track the shifting climate, as adaptation in situ may address some of the challenges, and many other factors affect decisions to migrate. Nevertheless, in the absence of migration, one third of the global population is projected to experience a MAT >29 °C currently found in only 0.8% of the Earth’s land surface, mostly concentrated in the Sahara. As the potentially most affected regions are among the poorest in the world, where adaptive capacity is low, enhancing human development in those areas should be a priority alongside climate mitigation.
In this short essay, written for a symposium in the San Diego Law Review, Professor Daniel Solove examines the nothing to hide argument. When asked about government surveillance and data mining, many people respond by declaring: "I've got nothing to hide." According to the nothing to hide argument, there is no threat to privacy unless the government uncovers unlawful activity, in which case a person has no legitimate justification to claim that it remain private. The nothing to hide argument and its variants are quite prevalent, and thus are worth addressing. In this essay, Solove critiques the nothing to hide argument and exposes its faulty underpinnings.
We describe the vision of being able to reason about the design space of data structures.
We break this down into two questions: 1) Can we know all data structures that it is possible to design? 2) Can we compute the performance of arbitrary designs on a given hardware and workload without having to implement the design or even access the target hardware?
If those questions can be answered, then an array of exciting opportunities becomes feasible, such as interactive what-if design to improve the productivity of data systems researchers and engineers, and informed decision making in industrial settings with regard to critical hardware/workload/data structure design issues. Then even fully automated discovery of new data structure designs becomes possible. Furthermore, the structure of the design space itself provides numerous insights and opportunities, such as the existence of design continuums that can lead to data systems with deep adaptivity, and a new understanding of the possible performance trade-offs. Given the universal presence of data structures at the very core of any data-driven field across all sciences and industries, reasoning about their design can have significant benefits, making it more feasible (easier, faster, and cheaper) to adopt tailored state-of-the-art storage solutions. This effect will become increasingly critical as data keeps growing, hardware keeps changing, and more applications and fields realize the transformative power and potential of data analytics.
This paper presents this vision and surveys first steps that demonstrate its feasibility.
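As a loose illustration of such what-if reasoning, the sketch below enumerates a toy design space and scores each point with a simple analytical cost model; the dimensions, constants, and cost formula are illustrative assumptions, not the paper's design space or cost-synthesis method.

```python
import itertools
import math

# Enumerate a few design dimensions and score each combination with a
# simple analytical cost model, then pick the cheapest design.

FANOUTS = [4, 16, 64, 256]          # internal node fanout of a tree index
BLOOM = [False, True]               # bloom filter on leaves?
N = 10_000_000                      # number of keys

def point_lookup_cost(fanout, bloom):
    levels = math.ceil(math.log(N, fanout))   # traversal depth
    io_per_level = 1.0                        # assumed page reads per level
    filter_savings = 0.5 if bloom else 0.0    # assumed skip probability
    return levels * io_per_level * (1.0 - filter_savings)

best = min(itertools.product(FANOUTS, BLOOM),
           key=lambda design: point_lookup_cost(*design))
print("cheapest design for point lookups:", best)
```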
VoLoc is a system that uses the microphone array on Alexa, together with room echoes of the human voice, to infer the user's location inside the home.
Computer simulations are invaluable tools for scientific discovery. However, accurate simulations are often slow to execute, which limits their applicability to extensive parameter exploration, large-scale data analysis, and uncertainty quantification. A promising route to accelerating simulations is building fast emulators with machine learning, but this requires large training datasets, which can be prohibitively expensive to obtain with slow simulations. Here we present a method based on neural architecture search to build accurate emulators even with a limited number of training data. The method successfully accelerates simulations by up to 2 billion times in 10 scientific cases including astrophysics, climate science, biogeochemistry, high energy density physics, fusion energy, and seismology, using the same super-architecture, algorithm, and hyperparameters. Our approach also inherently provides emulator uncertainty estimation, adding further confidence in their use. We anticipate this work will accelerate research involving expensive simulations, allow more extensive parameter exploration, and enable new, previously infeasible computational discovery.
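A minimal sketch of the emulator idea under two stand-in assumptions: a fixed network architecture takes the place of the neural architecture search, and an analytic function takes the place of the expensive simulator.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def slow_simulation(x):
    """Stand-in for an expensive simulation code mapping parameters to output."""
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1])

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, size=(500, 2))   # limited training data
y_train = slow_simulation(X_train)

# Fit a small neural network emulator to the (parameters -> output) pairs.
emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                        random_state=0).fit(X_train, y_train)

X_test = rng.uniform(0, 1, size=(5, 2))
print(emulator.predict(X_test))       # fast approximate predictions
print(slow_simulation(X_test))        # ground truth for comparison
```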
Detection and attribution typically aims to find long-term climate signals in internal, often short-term variability. Here, common methods are extended to high-frequency temperature and humidity data, detecting instantaneous, global-scale climate change since 1999 for any year and 2012 for any day.
This post overviews the paper Confident Learning: Estimating Uncertainty in Dataset Labels authored by Curtis G. Northcutt, Lu Jiang, and Isaac L. Chuang.
Topical keyphrase extraction is used to summarize large collections of text documents. However, traditional methods cannot properly reflect the intrinsic semantics and relationships of keyphrases because they rely on a simple term-frequency-based process. Consequently, these methods are not effective in obtaining significant contextual knowledge. To resolve this, we propose a topical keyphrase extraction method based on a hierarchical semantic network and multiple centrality network measures that together reflect the hierarchical semantics of keyphrases. We conduct experiments on real data to examine the practicality of the proposed method and to compare its performance with that of existing topical keyphrase extraction methods. The results confirm that the proposed method outperforms state-of-the-art topical keyphrase extraction methods in terms of the representativeness of the selected keyphrases for each topic. The proposed method can effectively reflect intrinsic keyphrase semantics and interrelationships.
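A rough sketch of centrality-based keyphrase ranking, not the paper's hierarchical method: build a phrase co-occurrence network and rank phrases by a combination of centrality measures. The toy corpus and the way scores are combined are assumptions.

```python
import itertools
import networkx as nx

# Build a co-occurrence network of candidate phrases and rank them by a
# combination of centrality measures.

docs = [
    ["topic model", "keyphrase extraction", "semantic network"],
    ["semantic network", "centrality", "graph"],
    ["keyphrase extraction", "centrality", "topic model"],
]

G = nx.Graph()
for doc in docs:
    for a, b in itertools.combinations(sorted(set(doc)), 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)   # edge weight = co-occurrence count

# Combine two centrality measures into one ranking score.
deg = nx.degree_centrality(G)
btw = nx.betweenness_centrality(G)
score = {node: deg[node] + btw[node] for node in G}
print(sorted(score, key=score.get, reverse=True))
```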
N-grams have been a common tool for information retrieval and machine learning applications for decades. In nearly all previous works, only a few values of $n$ are tested, with $n > 6$ being exceedingly rare. Larger values of $n$ are not tested due to computational burden or the fear of overfitting.
In this work, we present a method to find the top-$k$ most frequent $n$-grams that is 60$\times$ faster for small $n$ and can tackle large $n\geq1024$. Despite the unprecedented size of $n$ considered, we show that these features still have predictive ability for malware classification tasks. More importantly, large $n$-grams provide benefits in producing features that are interpretable by malware analysts and can be used to create general-purpose signatures compatible with industry-standard tools like Yara. Furthermore, the counts of common $n$-grams in a file may be added to publicly available human-engineered features, rivaling the efficacy of professionally developed features when used to train gradient-boosted decision tree models on the EMBER dataset.
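For contrast with the paper's fast method, the naive computation of the feature looks like this; it shows what is being counted, not the hashing-based algorithm that makes $n \geq 1024$ tractable. The file paths are placeholders.

```python
from collections import Counter

def top_k_ngrams(files, n, k):
    """Naive top-k most frequent byte n-grams over a corpus of files."""
    counts = Counter()
    for path in files:
        with open(path, "rb") as f:
            data = f.read()
        # Count each distinct n-gram once per file (document frequency),
        # as is common for malware-corpus features.
        counts.update({data[i:i + n] for i in range(len(data) - n + 1)})
    return counts.most_common(k)

# Example: 8-byte n-grams over a corpus of binaries (paths are placeholders).
# print(top_k_ngrams(["sample1.bin", "sample2.bin"], n=8, k=10))
```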
With the increasing number of scientific publications, analyzing the trends and the state of the art in a given scientific field has become a very time-consuming and tedious task. In response to urgent information needs that the existing systematic review model does not serve well, several other review types have emerged, namely the rapid review and the scoping review.
The paper proposes an NLP-powered tool that automates most of the review process by automatically analyzing articles indexed in the IEEE Xplore, PubMed, and Springer digital libraries. We demonstrate the applicability of the toolkit by analyzing articles related to Enhanced Living Environments and Ambient Assisted Living, in accordance with the PRISMA surveying methodology. The relevant articles were processed by the NLP toolkit to identify articles that contain up to 20 properties clustered into 4 logical groups.
The analysis showed increasing attention from the scientific communities towards Enhanced and Assisted Living Environments over the last 10 years and revealed several trends in the specific research topics that fall within this scope. The case study demonstrates that the NLP toolkit can ease and speed up the review process and surface valuable insights from the surveyed articles even without manually reading most of them. Moreover, it pinpoints the most relevant articles (those containing more properties) and therefore significantly reduces the manual work, while also generating informative tables, charts, and graphs.
Anonymization has been the main means of addressing privacy concerns in sharing medical and socio-demographic data. Here, the authors estimate the likelihood that a specific person can be re-identified in heavily incomplete datasets, casting doubt on the adequacy of current anonymization practices.
Deep learning techniques have become the method of choice for researchers working on algorithmic aspects of recommender systems. With the strongly increased interest in machine learning in general, it has, as a result, become difficult to keep track of what represents the state-of-the-art at the moment, e.g., for top-n recommendation tasks. At the same time, several recent publications point out problems in today's research practice in applied machine learning, e.g., in terms of the reproducibility of the results or the choice of the baselines when proposing new models.
In this work, we report the results of a systematic analysis of algorithmic proposals for top-n recommendation tasks. Specifically, we considered 18 algorithms that were presented at top-level research conferences in recent years. Only 7 of them could be reproduced with reasonable effort. However, it turned out that 6 of these 7 methods can often be outperformed by comparably simple heuristic methods, e.g., based on nearest-neighbor or graph-based techniques. The remaining one clearly outperformed the baselines but did not consistently outperform a well-tuned non-neural linear ranking method.
Overall, our work sheds light on a number of potential problems in today's machine learning scholarship and calls for improved scientific practices in this area. Source code of our experiments and full results are available at: https://github.com/MaurizioFD/RecSys2019_DeepLearning_Evaluation.
Natural language processing algorithms applied to three million materials science abstracts uncover relationships between words, material compositions and properties, and predict potential new thermoelectric materials.
The overwhelming majority of scientific knowledge is published as text, which is difficult to analyse by either traditional statistical analysis or modern machine learning methods. By contrast, the main source of machine-interpretable data for the materials research community has come from structured property databases, which encompass only a small fraction of the knowledge present in the research literature. Beyond property values, publications contain valuable knowledge regarding the connections and relationships between data items as interpreted by the authors. To improve the identification and use of this knowledge, several studies have focused on the retrieval of information from scientific literature using supervised natural language processing, which requires large hand-labelled datasets for training. Here we show that materials science knowledge present in the published literature can be efficiently encoded as information-dense word embeddings (vector representations of words) without human labelling or supervision. Without any explicit insertion of chemical knowledge, these embeddings capture complex materials science concepts such as the underlying structure of the periodic table and structure–property relationships in materials. Furthermore, we demonstrate that an unsupervised method can recommend materials for functional applications several years before their discovery. This suggests that latent knowledge regarding future discoveries is to a large extent embedded in past publications. Our findings highlight the possibility of extracting knowledge and relationships from the massive body of scientific literature in a collective manner, and point towards a generalized approach to the mining of scientific literature.
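A minimal sketch of the unsupervised embedding step using gensim's skip-gram Word2Vec; the corpus placeholder, hyperparameters, and query term are assumptions rather than the paper's exact pipeline.

```python
from gensim.models import Word2Vec

# Train skip-gram word vectors on tokenized abstracts, then query them.
# `tokenized_abstracts` is a placeholder for the millions of preprocessed
# materials science abstracts the paper uses.
tokenized_abstracts = [
    ["thermoelectric", "materials", "such", "as", "Bi2Te3", "show", "low",
     "thermal", "conductivity"],
    # ... millions more tokenized abstracts ...
]

model = Word2Vec(sentences=tokenized_abstracts, vector_size=200,
                 window=8, min_count=1, sg=1, workers=4)

# Similarity query of the kind the paper reports: materials ranked near an
# application keyword in the embedding space become candidate predictions.
print(model.wv.most_similar(positive=["thermoelectric"], topn=5))
```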
We present HotStuff, a leader-based Byzantine fault-tolerant replication protocol for the partially synchronous model.
Once network communication becomes synchronous, HotStuff enables a correct leader to drive the protocol to consensus at the pace of actual (rather than maximum) network delay, a property called responsiveness, and with communication complexity that is linear in the number of replicas. To our knowledge, HotStuff is the first partially synchronous BFT replication protocol exhibiting these combined properties. HotStuff is built around a novel framework that forms a bridge between classical BFT foundations and blockchains. It allows the expression of other known protocols (DLS, PBFT, Tendermint, Casper), as well as ours, in a common framework.
Our deployment of HotStuff over a network with over 100 replicas achieves throughput and latency comparable to that of BFT-SMaRt, while enjoying linear communication footprint during leader failover (vs. quadratic with BFT-SMaRt).
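The quorum logic at the heart of such protocols can be sketched compactly; vote objects and signature checking are simplified placeholders here, and the chaining of certificates across HotStuff's phases is omitted.

```python
# With n = 3f + 1 replicas, a leader needs n - f matching votes on a
# proposal to form a quorum certificate (QC).

N = 4                     # replicas
F = (N - 1) // 3          # tolerated Byzantine faults
QUORUM = N - F            # votes needed for a quorum certificate

def form_qc(votes, proposal_hash):
    """Return a QC if enough distinct replicas voted for this proposal."""
    voters = {v["replica"] for v in votes
              if v["proposal"] == proposal_hash}   # signature check omitted
    if len(voters) >= QUORUM:
        return {"proposal": proposal_hash, "voters": sorted(voters)}
    return None

votes = [{"replica": i, "proposal": "h1"} for i in range(3)]
print(form_qc(votes, "h1"))   # 3 of 4 votes -> QC formed (quorum = 3)
```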
Daniel J. Bernstein, Bo-Yin Yang. "Fast constant-time gcd computation and modular inversion."
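A variable-time sketch of the divstep iteration at the core of the paper; the paper's contribution is running a precomputed, fixed number of such steps in constant time, which this sketch deliberately does not attempt.

```python
# Bernstein-Yang division step; requires f odd.
def divstep(delta, f, g):
    if delta > 0 and g % 2 == 1:
        return 1 - delta, g, (g - f) // 2
    return 1 + delta, f, (g + (g % 2) * f) // 2

def gcd_odd(f, g):
    """gcd(f, g) for odd f, by iterating divstep until g reaches 0."""
    delta = 1
    while g != 0:                      # the paper uses a fixed step count
        delta, f, g = divstep(delta, f, g)
    return abs(f)

assert gcd_odd(35, 21) == 7
assert gcd_odd(5, 8) == 1
```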
Best practice and tips & tricks to write scientific papers in LaTeX, with figures generated in Python or Matlab.
Experienced programmers often need to use online resources to pick up new programming languages. However, we lack a comprehensive understanding of which resources programmers find most valuable and utilize most often. In this paper, we study how experienced programmers learn Rust, a systems programming language with comprehensive documentation, extensive example code, an active online community, and descriptive compiler errors. We develop a task that requires understanding the Rust-specific language concepts of mutability and ownership, in addition to learning Rust syntax.
Our results show that users spend 42% of online time viewing example code and that programmers appreciate the Rust Enhanced package’s in-line compiler errors, choosing to refresh every 30.6 seconds after first discovering this feature. We did not find any significant correlations between the resources used and the total task time or the learning outcomes. We discuss these results in light of design implications for language developers seeking to create resources to encourage usage and adoption by experienced programmers.
This paper investigates heat pump systems in smart grids, focusing on the fields of application and control approaches that have emerged in the academic literature. Based on a review of the published literature, technical aspects of heat pump flexibility, fields of application, and control approaches are structured and discussed. Three main categories of applications using heat pumps in a smart grid context have been identified: first, stable and economic operation of power grids; second, the integration of renewable energy sources; and third, operation under variable electricity prices. In all three fields, heat pumps, when controlled in an appropriate manner, can help ease the transition to a decentralized energy system with a higher share of prosumers and renewable energy sources. Predictive controls are successfully used in the majority of studies, often assuming idealized conditions. Suggested topics for future research include transferring control approaches from simulation to the field, detailed techno-economic analysis of heat pump systems under smart grid operation, and the design of heat pump systems to increase flexibility.
IoT is considered one of the key enabling technologies for the fourth industrial revolution, known as Industry 4.0. In this paper, we consider the mechatronic component as the lowest level in the system composition hierarchy; it tightly integrates mechanics with the electronics and software required to turn the mechanics into an intelligent (smart) object offering well-defined services to its environment. For this mechatronic component to be integrated into an IoT-based industrial automation environment, a software layer is required on top of it to convert its conventional interface to an IoT-compliant one. This layer, which we call the IoTwrapper, transforms the conventional mechatronic component into an Industrial Automation Thing (IAT). The IAT is the key element of an IoT model specifically developed in the context of this work for the manufacturing domain. The model is compared to existing IoT models and its main differences are discussed. A model-to-model transformer is presented to automatically transform the legacy mechatronic component into an IAT ready to be integrated in the IoT-based industrial automation environment. The UML4IoT profile is used in the form of a Domain Specific Modeling Language to automate this transformation. A prototype implementation of an Industrial Automation Thing using C and the Contiki operating system demonstrates the effectiveness of the proposed approach.
In this paper, we consider multi-pursuer single-superior-evader pursuit-evasion differential games in which the evader's speed is similar to or higher than the speed of each pursuer. A new fuzzy reinforcement learning algorithm is proposed. The algorithm uses the well-known Apollonius circle mechanism to define the capture region of a learning pursuer based on its location and the location of the superior evader. It combines the Apollonius circle with a formation control approach in the tuning mechanism of the fuzzy logic controller (FLC) of each learning pursuer so that one or more of the pursuers can capture the superior evader. The formation control mechanism guarantees that the pursuers are distributed around the superior evader so as to avoid collisions between pursuers, and it makes the Apollonius circles of each two adjacent pursuers intersect or at least be tangent to each other so that capture can occur. The algorithm is decentralized: no communication among the pursuers is required, and the only information it needs is the position and the speed of the superior evader. The proposed algorithm is used to learn different multi-pursuer single-superior-evader pursuit-evasion differential games, and the simulation results show its effectiveness.
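The Apollonius circle used above has a closed form; a minimal sketch, with our own illustrative positions and speeds, for a pursuer slower than the evader:

```python
import math

# The pursuer's capture region is the set of points it can reach no later
# than the evader. For pursuer speed vp < evader speed ve, its boundary
# {x : |x - P| / vp = |x - E| / ve} is a circle around the pursuer.

def apollonius_circle(P, E, vp, ve):
    """Center and radius of {x : |x-P| = k |x-E|} with k = vp / ve < 1."""
    k2 = (vp / ve) ** 2
    cx = (P[0] - k2 * E[0]) / (1 - k2)
    cy = (P[1] - k2 * E[1]) / (1 - k2)
    d = math.hypot(P[0] - E[0], P[1] - E[1])
    r = (vp / ve) * d / (1 - k2)
    return (cx, cy), r

# Pursuer at the origin with speed 1, superior evader at (4, 0) with speed 2.
center, radius = apollonius_circle((0.0, 0.0), (4.0, 0.0), 1.0, 2.0)
print(center, radius)   # ((-4/3, 0), 8/3): capture region around the pursuer
```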
A Learning Invader for the “Guarding a Territory” Game
A Reinforcement Learning Problem
This paper explores the use of a learning algorithm in the "guarding a territory" game. The game occurs in continuous time: a single learning invader tries to get as close as possible to a territory before being captured by a guard. Previous research has approached the problem by letting only the guard learn; we examine the opposite setting, in which only the invader learns. Furthermore, in our case the guard is superior (faster) to the invader. We also consider models with non-holonomic constraints. A control system is designed and optimized for the invader to play the game and reach the Nash equilibrium. The paper shows how the learning system is able to adapt itself, and its performance is evaluated through different simulations and compared to the Nash equilibrium. Experiments with real robots verified our simulations in a real-life environment. Our results show that our learning invader behaved rationally in different circumstances.
In 2005, two scientists, David Mazières and Eddie Kohler, wrote a paper titled Get me off Your Fucking Mailing List and submitted it to WMSCI 2005 (the 9th World Multiconference on Systemics, Cybernetics and Informatics), a conference then notorious for its spamming and lax standards for paper acceptance, in protest of same. The paper consisted essentially only of the sentence "Get me off your fucking mailing list" repeated many times.
The deployment of solar-based electricity generation, especially in the form of photovoltaics (PVs), has increased markedly in recent years due to a wide range of factors including concerns over greenhouse gas emissions, supportive government policies, and lower equipment costs. Still, a number of challenges remain for reliable, efficient integration of solar energy. Chief among them will be developing new tools and practices that manage the variability and uncertainty of solar power.
Network protocol design and evaluation requires either a full implementation of the considered protocol and evaluation in a real network, or a simulation based on a model. There is also a middle approach in which both simulation and emulation are used to evaluate a protocol. In this article the Partov engine, which provides both simulation and emulation capabilities simultaneously, is presented. Partov benefits from a layered and platform-independent architecture. As a pure simulator, it provides an extensible plugin-based platform that can be configured to perform both real-time and non-real-time discrete-event simulations. It also acts as an emulator, making interaction with real networks possible in real time. Additionally, a declarative XML-based language is used, acting as glue between simulation and emulation modules and plugins. It supports dynamic network modelling and simulation based on continuous-time Markov chains. Partov is compared with other well-known tools such as NS-3 and real programs such as Hping3. It is shown that Partov incurs less overhead and is much more scalable than NS-3.
In this paper we address a seemingly simple question: Is there a universal packet scheduling algorithm? More precisely, we analyze (both theoretically and empirically) whether there is a single packet scheduling algorithm that, at a network-wide level, can match the results of any given scheduling algorithm. We find that in general the answer is "no". However, we show theoretically that the classical Least Slack Time First (LSTF) scheduling algorithm comes closest to being universal and demonstrate empirically that LSTF can closely, though not perfectly, replay a wide range of scheduling algorithms in realistic network settings. We then evaluate whether LSTF can be used in practice to meet various network-wide objectives by looking at three popular performance metrics (mean FCT, tail packet delays, and fairness); we find that LSTF performs comparably to the state of the art for each of them.
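A minimal sketch of an LSTF queue, assuming each packet arrives carrying a slack value; in the paper's replay experiments, slack is initialized from the delays observed under the schedule being replayed.

```python
import heapq

# Least Slack Time First: at each hop, transmit the queued packet with the
# least remaining slack (the time budget it can still afford to wait).

class LSTFQueue:
    def __init__(self):
        self._heap = []
        self._seq = 0                      # tie-breaker for equal slack

    def enqueue(self, packet, slack):
        heapq.heappush(self._heap, (slack, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        slack, _, packet = heapq.heappop(self._heap)
        return packet, slack

q = LSTFQueue()
q.enqueue("bulk transfer pkt", slack=50.0)
q.enqueue("interactive pkt", slack=2.0)
print(q.dequeue())   # ('interactive pkt', 2.0) goes first
```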
Demand response (DR) in the residential sector is considered to play a key role in the smart grid framework because of the sector's disproportionate share of peak energy use and the massive integration of distributed local renewable energy generation in conjunction with battery storage devices. In this paper, we first give a quick overview of residential demand response and its optimization model at the single-home and multi-home level. We then describe state-of-the-art optimization methods addressing different aspects of residential DR algorithms, such as optimizing schedules for local renewable generation dispatch, battery storage utilization, and appliance consumption while considering both cost and comfort; modeling parameter uncertainty; and physics-based dynamic modeling of appliance power consumption at the single-home and aggregated community level. The key issues, along with their challenges and opportunities for residential demand response implementation, and further research directions are highlighted.
This paper presents an experimental evaluation of different line extraction algorithms on 2D laser scans for indoor environments. Six algorithms popular in mobile robotics and computer vision are selected and tested. Experiments are performed on 100 real data scans collected in an office environment with a map size of 80 m × 50 m. Several comparison criteria are proposed and discussed to highlight the advantages and drawbacks of each algorithm, including speed, complexity, correctness, and precision. The results of the algorithms are compared with the ground truth using standard statistical methods.
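As an example of the algorithm family evaluated in such studies, here is a bare-bones split step of split-and-merge, one of the classic line extractors; the merge pass and the paper's exact thresholds are omitted, and the threshold below is an arbitrary assumption.

```python
import math

def point_line_dist(p, a, b):
    """Distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    return num / math.hypot(bx - ax, by - ay)

def split(points, threshold=0.05):
    """Recursively split a scan segment at the point farthest from the
    chord joining its endpoints, until every segment is nearly collinear."""
    if len(points) < 3:
        return [points]
    d = [point_line_dist(p, points[0], points[-1]) for p in points]
    i = max(range(len(d)), key=d.__getitem__)
    if d[i] < threshold:
        return [points]                      # segment is a single line
    return split(points[:i + 1], threshold) + split(points[i:], threshold)

# L-shaped synthetic "scan": a horizontal run followed by a vertical run.
scan = [(x * 0.1, 0.0) for x in range(10)] + [(1.0, y * 0.1) for y in range(10)]
print(len(split(scan)))   # 2: the two walls are recovered
```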
by Truong X. Nghiem, Rahul Mangharam
Peak power consumption is a universal problem across energy control systems in electrical grids, buildings, and industrial automation, where the uncoordinated operation of multiple controllers results in temporally correlated electricity demand surges (or peaks). While there exist several approaches to balance power consumption by load shifting and load shedding, they operate on coarse-grained time scales and do not help in de-correlating energy sinks. The energy system scheduling problem is particularly hard due to its binary control variables; its complexity grows exponentially with the scale of the system, making it impossible to handle systems with more than a few variables.
We developed a scalable approach for fine-grained scheduling of energy control systems that combines techniques from control theory and computer science in a novel way. The original system with binary control variables is approximated by an averaged system whose inputs are the utilization values of the binary inputs within a given period. The error between the two systems can be bounded, which allows us to derive a safety constraint for the averaged system so that the original system's safety is guaranteed. To further reduce the complexity of the scheduling problem, we abstract the averaged system by a simple single-state single-input dynamical system whose control input is the upper bound of the total demand of the system. This model abstraction is achieved by extending the concept of simulation relations between transition systems to allow for input constraints between the systems. We developed conditions to test for simulation relations as well as algorithms to compute such a model abstraction. As a consequence, we only need to solve a small linear program to compute an optimal bound of the total demand. The total demand is then broken down, by solving a linear program much smaller than the original program, into individual utilization values of the subsystems, whose actual schedule is then obtained by a low-level scheduling algorithm. Numerical simulations in Matlab show the effectiveness and scalability of our approach.
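A toy version of the averaged-system linear program, with illustrative power ratings and utilization requirements; the paper's LP additionally encodes the derived safety constraint and the input constraints of the model abstraction.

```python
import numpy as np
from scipy.optimize import linprog

# Instead of scheduling binary on/off inputs, optimize continuous
# utilization values u_i (fraction of a period each unit is on) together
# with a bound D on total demand.

power = np.array([3.0, 2.0, 4.0])    # kW rating of each unit (assumed)
u_min = np.array([0.4, 0.5, 0.3])    # required utilization per period (assumed)

# Variables: [u1, u2, u3, D]. Minimize D subject to
#   sum_i power_i * u_i <= D   and   u_min_i <= u_i <= 1.
c = np.array([0.0, 0.0, 0.0, 1.0])
A_ub = np.array([[*power, -1.0]])
b_ub = np.array([0.0])
bounds = [(u_min[i], 1.0) for i in range(3)] + [(0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x)   # utilizations at their minima; D equals their total demand
```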
The need for fast response demand side participation (DSP) has never been greater due to increased wind power penetration. White domestic goods suppliers are currently developing a ‘smart’ chip for a range of domestic appliances (e.g. refrigeration units, tumble dryers and storage heaters) to support the home as a DSP unit in future power systems. This paper presents an aggregated population-based model of a single compressor fridge-freezer. Two scenarios (i.e. energy efficiency class and size) for valley filling and peak shaving are examined to quantify and value DSP savings in 2020. The analysis shows potential peak reductions of 40 MW to 55 MW are achievable in the Single wholesale Electricity Market of Ireland (i.e. the test system), and valley demand increases of up to 30 MW. The study also shows the importance of the control strategy start time and the staggering of the devices to obtain the desired filling or shaving effect.
Plug-loads are often neglected in commercial demand response (DR) despite being a major contributor to building energy consumption. Improvements in technology like smart power strips are prompting the incorporation of plug-loads as a DR resource alongside building HVAC and lighting. Office scale battery storage (OSBS) systems are also candidates as a DR resource due to their ability to run on battery power. In this work, we present a model predictive control (MPC) framework for optimal load-shedding of plug-loads and OSBS. We begin with a discussion of the context of this work and present two models of OSBS systems. A model predictive controller for OSBS and plug-load load-shed scheduling is presented. We discuss casting the MPC as a dynamic program, and an algorithm to solve the dynamic program. Simulation results show the efficacy and utility of dynamic programming and quantify the performance of OSBS systems.
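A toy finite-horizon dynamic program in the spirit of the scheduling subproblem; the prices, loads, battery model, and SOC discretization are all illustrative assumptions.

```python
# Toy DP: choose, at each step, how much stored battery energy to use for
# serving the plug-load so that grid cost is minimized during a pricey
# demand-response window. One step = one hour, so kW and kWh coincide.

load = [2.0, 3.0, 2.5, 1.5]                     # plug-load energy per step, kWh
price = [1.0, 5.0, 5.0, 1.0]                    # grid price per kWh (event mid-horizon)
levels = [round(0.5 * i, 1) for i in range(7)]  # battery SOC grid: 0.0 .. 3.0 kWh
T = len(load)

# value[t][soc] = minimum cost from step t onward starting at this SOC.
value = {T: {s: 0.0 for s in levels}}
policy = {}
for t in reversed(range(T)):
    value[t], policy[t] = {}, {}
    for s in levels:
        best = None
        for discharge in levels:                # kWh drawn from the battery
            if discharge > min(s, load[t]):
                continue                        # can't exceed SOC or the load
            s_next = round(s - discharge, 1)    # stays on the SOC grid
            cost = price[t] * (load[t] - discharge) + value[t + 1][s_next]
            if best is None or cost < best[0]:
                best = (cost, discharge)
        value[t][s], policy[t][s] = best

# Read out the optimal schedule starting from a full battery.
s = 3.0
print("min cost:", value[0][s])
for t in range(T):
    d = policy[t][s]
    print(f"step {t}: discharge {d} kWh from battery")
    s = round(s - d, 1)
```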
The performance, reliability, cost, size and energy usage of computing systems can be improved by one or more orders of magnitude by the systematic use of modern control and optimization methods. Computing systems rely on the use of feedback algorithms to schedule tasks, data and resources, but the models that are used to design these algorithms are validated using open-loop metrics. By using closed-loop metrics instead, such as the gap metric developed in the control community, it should be possible to develop improved scheduling algorithms and computing systems that have not been over-engineered. Furthermore, scheduling problems are most naturally formulated as constraint satisfaction or mathematical optimization problems, but these are seldom implemented using state of the art numerical methods, nor do they explicitly take into account the fact that the scheduling problem itself takes time to solve. This paper makes the case that recent results in real-time model predictive control, where optimization problems are solved in order to control a process that evolves in time, are likely to form the basis of scheduling algorithms of the future. We therefore outline some of the research problems and opportunities that could arise by explicitly considering feedback and time when designing optimal scheduling algorithms for computing systems.
Demand response on the residential market is becoming a solution for adapting customer consumption to the available supply and thereby lowering electricity peak prices. Tariff incentives and direct load control of residential air conditioners and electric heaters are flexible solutions to reduce peak demand. To include residential demand response resources in operational planning, quantifying the demand reduction is becoming a major issue for all electrical stakeholders. Current methods are based on day or weather matching, regressions, and control group approaches. In general, methods using available data from a control group give more accurate results. With the introduction of smart meters, electric utilities generate a large amount of quality data, available almost in real time. In this paper, we suggest using these available residential load curves to select a control group based on individual load curves. One advantage of our method is that the selected control group can adapt at any time to the number of individuals belonging to the demand reduction program, as this number evolves with customers entering and leaving the program. Constrained regression methods and an algorithm are developed and evaluated on real data, providing a reliable solution for operational use.
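One simple way to realize control-group selection by constrained regression is nonnegative least squares over candidate load curves; the sketch below uses synthetic data and is not the authors' exact estimator.

```python
import numpy as np
from scipy.optimize import nnls

# Find nonnegative weights on candidate (non-participant) load curves so
# their weighted sum matches the participants' aggregate curve over a
# baseline period; the same weights then estimate the counterfactual
# during the demand-response event.

rng = np.random.default_rng(1)
baseline_hours, n_candidates = 168, 50
candidates = rng.uniform(0.2, 2.0, size=(baseline_hours, n_candidates))

# Synthetic "participants" curve: a sparse mix of 5 candidates plus noise.
true_w = np.zeros(n_candidates)
true_w[:5] = 0.2
participants = candidates @ true_w + rng.normal(0, 0.01, baseline_hours)

weights, residual = nnls(candidates, participants)
print("nonzero weights:", np.flatnonzero(weights > 1e-3))
print("fit residual:", residual)
```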
This paper presents the modeling and control for a novel Compressed Air Energy Storage (CAES) system for wind turbines. The system captures excess power prior to electricity generation so that electrical components can be downsized for demand instead of supply. Energy is stored in a high-pressure dual-chamber liquid-compressed air storage vessel. It takes advantage of the power density of hydraulics and the energy density of pneumatics in the "open accumulator" architecture. A liquid piston air compressor/expander is utilized to achieve near-isothermal compression/expansion for efficient operation. A cycle-average approach is used to model the dynamics of each component in the combined wind turbine and storage system. Standard torque control is used to capture the maximum power from wind through a hydraulic pump attached to the turbine rotor in the nacelle. To achieve both accumulator pressure regulation and generator power tracking, a nonlinear controller is designed based on an energy-based Lyapunov function. The nonlinear controller is then modified to distribute the control effort between the hydraulic and pneumatic elements based on their bandwidth capabilities. As a result, the liquid piston air compressor/expander loosely maintains the accumulator pressure ratio, while the down-tower hydraulic pump/motor precisely tracks the desired generator power. This control scheme also allows the accumulator to function as a damper for the storage system by absorbing power disturbances from the hydraulic path generated by wind gusts. A set of simulation case studies demonstrates the operation of the combined system under the nonlinear controller and illustrates how this system can be used for load leveling, downsizing the electrical system, and maximizing revenues.
This paper is concerned with maximum-efficiency or maximum-power tracking for a pneumatically driven electric generator in a stand-alone small-scale compressed air energy storage (CAES) system. In this system, an air motor drives a permanent magnet DC generator, whose output power is controlled by a buck converter supplying a resistive load. The output power of the buck converter is controlled such that the air motor operates at a speed corresponding to either maximum power or maximum efficiency. The maximum-point-tracking controller uses a linearised model of the air motor together with integral control action. The analysis and design of the controller are based on a small injected-absorbed-current signal model of the buck converter. The controller was implemented experimentally using a dSPACE system. Test results are presented to validate the design and demonstrate its capabilities.
In this paper a new concept for the control and performance assessment of compressed air energy storage (CAES) systems in a hybrid energy system is introduced. The proposed criterion, based on the concept of an energy harvest index (EHI), measures the capability of a storage system to capture renewable energy. The overall efficiency of the CAES system and optimum control and design from the technical and economic points of view are presented. A possible application of this idea is an isolated community with a significant wind energy resource. A case study reveals the usefulness of the proposed criterion in the design, control, and implementation of a small CAES system in a hybrid power system (HPS) for an isolated community. The energy harvest index and its effectiveness in increasing the wind penetration rate in total energy production are discussed.
Distributed energy storage has been recognized as a valuable and often indispensable complement to small-scale power generation based on renewable energy sources. Small-scale energy storage positioned at the demand side would open the possibility for enhanced predictability of power output and easier integration of small-scale intermittent generators into functioning electricity markets, as well as offering inherent peak shaving abilities for mitigating contingencies and blackouts, for reducing transmission losses in local networks, profit optimization and generally allowing tighter utility control on renewable energy generation. Distributed energy storage at affordable costs and of low environmental footprint is a necessary prerequisite for the wider deployment of renewable energy and its deeper penetration into local networks.
Thermodynamic energy storage in the form of compressed air is an alternative to electrochemical energy storage in batteries and has been evaluated in various studies and tested commercially on a large scale. Small-scale distributed compressed air energy storage (DCAES) systems in combination with renewable energy generators installed at residential homes or small businesses are a viable alternative to large-scale energy storage, moreover promising lower specific investment than batteries. Flexible control methods can be applied to DCAES units, resulting in a complex system running either independently for home power supply, or as a unified and centrally controlled utility-scale energy storage entity.
This study aims at conceptualizing plausible distributed compressed-air energy storage units, examining the feasibility of their practical implementation, analyzing their behavior, and devising possible control strategies for optimal utilization of grid-integrated renewable energy sources at small scales. Results show that an overall energy storage efficiency of around 70% can be achieved with comparatively simple solutions, offering fewer technical challenges and lower specific costs than comparable electrical battery systems. Furthermore, smart load management for improving dispatchability can bring additional benefits through profit optimization and cost reduction.
Future energy systems will depend much more on renewable energy resources than current ones do. Renewable energy resources, in turn, fluctuate and are not permanently available to the same extent as fossil ones. In consequence, new approaches are required to balance electricity demand and production. One approach is to schedule the compressed-air production of industrial installations according to the current load and supply of the electric grid. To be able to do this, compressed air has to be stored for peak load phases. Computer simulations are an efficient tool to judge the technical feasibility of such an approach and to compare it with other load management systems. This paper describes the thermodynamic fundamentals of compressed-air energy storage and their integration in a computer model. The results obtained from simulations were compared with results from measurements, showing good consistency. The model was thus used to simulate different principles for storing compressed air. Systems with a low pressure level and a high storage volume appear to be the most energy-efficient ones. In general, the technology has the potential to be utilized in electric load management. However, further simulations are required to determine the most economical solution.
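The first-order thermodynamics behind such models is compact; a back-of-the-envelope sketch of ideal isothermal storage energy, with illustrative vessel sizes:

```python
import math

# Ideal isothermal compression/expansion work: air held at pressure p in a
# vessel of volume V can deliver at most W = p * V * ln(p / p0) when
# expanded isothermally back to ambient pressure p0. Numbers below are
# illustrative.

def isothermal_storage_energy(p0, p, V):
    """Ideal isothermal energy (J) recoverable from volume V (m^3) at
    pressure p (Pa), relative to ambient pressure p0 (Pa)."""
    return p * V * math.log(p / p0)

p0 = 1.0e5                                            # ambient, Pa
E_low = isothermal_storage_energy(p0, 10e5, 10.0)     # 10 bar, 10 m^3
E_high = isothermal_storage_energy(p0, 50e5, 2.0)     # 50 bar, 2 m^3
print(E_low / 3.6e6, "kWh at low pressure / high volume")
print(E_high / 3.6e6, "kWh at high pressure / low volume")
```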
@ARTICLE{7160638,
author={Shi, X. and Li, Y. and Cao, Y. and Tan, Y.},
journal={CSEE Journal of Power and Energy Systems},
title={Cyber-physical electrical energy systems: challenges and issues},
year={2015},
month={June},
volume={1},
number={2},
pages={36-42},
abstract={Cyber-physical electrical energy systems (CPEES) combine computation, communication and control technologies with the physical power system, and realize the efficient fusion of power, information and control. This paper summarizes and analyzes related critical scientific problems and technologies that need to be addressed as CPEES develop. Firstly, since co-simulation is an effective method to investigate infrastructure interdependencies, the establishment of CPEES co-simulation platforms and their evaluation is overviewed. A further critical problem of CPEES is the interaction between energy and information flow, especially the influence of failures in information and communication technology (ICT) on the power system; this interaction is analyzed and the current analysis methods are summarized. For the solution of power system control and protection in an information network environment, this paper outlines different control principles and illustrates the concept of distributed coordination control. Moreover, mass data processing and cluster analysis, communication network architecture, information transmission technology and the security of CPEES are summarized and analyzed. By solving the above problems, the development of CPEES will be significantly promoted.},
keywords={Cyber-physical electrical energy systems;information communication technology;power system;smart grid},
doi={10.17775/CSEEJPES.2015.00017},}
@INPROCEEDINGS{7170976,
author={Risbeck, Michael J. and Maravelias, Christos T. and Rawlings, James B. and Turney, Robert D.},
booktitle={American Control Conference (ACC), 2015},
title={Cost optimization of combined building heating/cooling equipment via mixed-integer linear programming},
year={2015},
month={July},
pages={1689-1694},
abstract={In this paper, we propose a mixed-integer linear program to economically optimize equipment usage in a central heating/cooling plant subject to time-of-use and demand charges for utilities. The optimization makes both discrete on/off and continuous load decisions for equipment while determining utilization of thermal energy storage systems. This formulation allows simultaneous optimization of heating and cooling subsystems, which interact directly when heat-recovery chillers are present. Nonlinear equipment models are approximated as piecewise-linear to balance modeling accuracy with the computational constraints imposed by online implementation and to ensure global optimality for the computed solutions. The chief benefits of this formulation are its ability to tightly control on/off switching of equipment, its consideration of cost contributions from auxiliary equipment such as pumps, and its applicability to large systems with multiple heating and cooling units in which a combinatorial problem must be solved to pick the optimal mix of equipment. These features result in improved performance over heuristic scheduling rules or other formulations that do not consider discrete decision variables. We show optimization results for a system with four conventional chillers, two heat-recovery chillers, and one hot water boiler. With a timestep of 1 h and a horizon of 48 h, the optimization problem can be solved to optimality within 5 minutes, indicating suitability for online implementation.},
keywords={Biological system modeling;Cooling;Generators;Load modeling;Optimization;Production;Switches},
doi={10.1109/ACC.2015.7170976},}
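A small PuLP instance of the kind of MILP the abstract describes, with binary on/off and continuous load variables for two chillers; the costs, capacities, and demands are illustrative, and storage, demand charges, and multi-segment piecewise-linear models are omitted.

```python
import pulp

T = list(range(4))                             # 4 one-hour timesteps
demand = {0: 30, 1: 80, 2: 120, 3: 60}         # cooling demand, tons
price = {0: 0.05, 1: 0.08, 2: 0.15, 3: 0.07}   # $/kWh (time-of-use)
chillers = {"A": {"cap": 100, "kw_per_ton": 0.6},
            "B": {"cap": 60, "kw_per_ton": 0.9}}

prob = pulp.LpProblem("chiller_scheduling", pulp.LpMinimize)
on = pulp.LpVariable.dicts("on", (chillers, T), cat="Binary")
q = pulp.LpVariable.dicts("load", (chillers, T), lowBound=0)

# Energy cost objective with linear (single-segment) equipment models.
prob += pulp.lpSum(price[t] * chillers[c]["kw_per_ton"] * q[c][t]
                   for c in chillers for t in T)

for t in T:
    prob += pulp.lpSum(q[c][t] for c in chillers) == demand[t]  # meet demand
    for c in chillers:
        prob += q[c][t] <= chillers[c]["cap"] * on[c][t]        # off => no load

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for t in T:
    print(t, {c: (pulp.value(on[c][t]), pulp.value(q[c][t])) for c in chillers})
```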
Design and Implementation of Real-Time Task’s Scheduling on ARM processor
Authors: Boppani Krishna Kanth and G. Bhaskar Phani Ram
Pages: 2666-2670
Abstract
This paper presents an RTOS-based architecture designed for the purpose of mine detection. The RTOS forms the layer between the hardware and the applications, and scheduling is used to avoid delays between one application and another. Mobile communication is used to receive the condition of the border area and to send indications to the monitoring section. Semantic time scheduling executes all applications concurrently without any time delay.
Keywords: Robotics, RTOS, GSM, ARM.
Coordinated Electric Vehicle Charging Control with Aggregator Power Trading and Indirect Load Control
James J.Q. Yu, Junhao Lin, Albert Y.S. Lam, Victor O.K. Li
(Submitted on 4 Aug 2015)
Due to the increasing concern over greenhouse gas emissions and fossil fuel security, Electric Vehicles (EVs) have attracted much attention in recent years. However, the increasing popularity of EVs may cause stability issues for the power grid if their charging behaviors are uncoordinated. In order to address this problem, we propose a novel coordinated strategy for large-scale EV charging. We formulate the energy trade among aggregators with locational marginal pricing to maximize aggregator profits and to indirectly control the loads to reduce power network congestion. We first develop a centralized iterative charging strategy, and then present a distributed optimization-based heuristic to overcome the high computational complexity and user privacy issues. To evaluate our proposed approach, a modified IEEE 118-bus test system is employed with 10 aggregators serving 30,000 EVs. The simulation results indicate that our proposed approach can effectively increase the total profit of aggregators and enhance power grid stability.
Subjects: Systems and Control (cs.SY)