Daily Shaarli

All links of one day in a single page.

07/10/20

glow - Render markdown on the CLI, with pizzazz!

Electric Light Orchestra - Don't Bring Me Down video
[2007.03629] Strong Generalization and Efficiency in Neural Programs

We study the problem of learning efficient algorithms that strongly
generalize in the framework of neural program induction. By carefully designing
the input / output interfaces of the neural model and through imitation, we are
able to learn models that produce correct results for arbitrary input sizes,
achieving strong generalization. Moreover, by using reinforcement learning, we
optimize for program efficiency metrics, and discover new algorithms that
surpass the teacher used in imitation. With this, our approach can learn to
outperform custom-written solutions for a variety of problems, as we tested it
on sorting, searching in ordered lists and the NP-complete 0/1 knapsack
problem, which sets a notable milestone in the field of Neural Program
Induction. As highlights, our learned model can perform sorting perfectly on
any input data size we tested on, with $O(n \log n)$ complexity, whilst
outperforming hand-coded algorithms, including quick sort, in number of
operations even for list sizes far beyond those seen during training.

GitHub - victorvde/jpeg2png: silky smooth JPEG decoding


Introducing Teleport 4.3 - Modern Replacement for OpenSSH

We’re excited to announce the release of Teleport 4.3 - new UI, API-driven, expanded audit capabilities, and still open source.

Why general artificial intelligence will not be realized | Humanities and Social Sciences Communications

The modern project of creating human-like artificial intelligence (AI) started after World War II, when it was discovered that electronic computers are not just number-crunching machines, but can also manipulate symbols. It is possible to pursue this goal without assuming that machine intelligence is identical to human intelligence. This is known as weak AI. However, many AI researchers have pursued the aim of developing artificial intelligence that is in principle identical to human intelligence, called strong AI. Weak AI is less ambitious than strong AI, and therefore less controversial. However, there are important controversies related to weak AI as well. This paper focuses on the distinction between artificial general intelligence (AGI) and artificial narrow intelligence (ANI). Although AGI may be classified as weak AI, it is close to strong AI because one chief characteristic of human intelligence is its generality. Although AGI is less ambitious than strong AI, it has had critics almost from the very beginning. One of the leading critics was the philosopher Hubert Dreyfus, who argued that computers, which have no body, no childhood and no cultural practice, could not acquire intelligence at all. One of Dreyfus’ main arguments was that human knowledge is partly tacit, and therefore cannot be articulated and incorporated in a computer program. However, today one might argue that new approaches to artificial intelligence research have made his arguments obsolete. Deep learning and Big Data are among the latest approaches, and advocates argue that they will be able to realize AGI. A closer look reveals that although the development of artificial intelligence for specific purposes (ANI) has been impressive, we have not come much closer to developing artificial general intelligence (AGI). The article further argues that this is in principle impossible, and it revives Hubert Dreyfus’ argument that computers are not in the world.