Fake Text uses AI to analyze text and then generate incredibly detailed and realistic written responses to it, giving the impression that an exchange between humans is taking place. The AI analyzes text patterns to put together disturbingly lucid text, typified by this Reddit thread.
Launched by leading global AI research lab OpenAI, Fake Text is already recognized as so potentially dangerous that even its inventors have publicly warned about it.
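The article itself contains no code, but the model behind "Fake Text" is OpenAI's GPT-2, whose smaller checkpoints were later released publicly. A minimal sketch of generating text with it, assuming the Hugging Face transformers library and the public gpt2 checkpoint (not anything from the article):

    # A minimal sketch, not from the article: pip install transformers torch
    from transformers import pipeline

    # Load the publicly released small GPT-2 checkpoint.
    generator = pipeline("text-generation", model="gpt2")

    prompt = "The scientists were shocked to discover"
    result = generator(prompt, max_length=40, num_return_sequences=1)
    print(result[0]["generated_text"])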
Check out a cool project that leverages Stack Overflow data and Google's Cloud AI to predict which tags would work best on Stack Overflow questions.
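The linked project uses Google's Cloud AI on Stack Overflow data; as a rough stand-in for the same idea (multi-label tag prediction from question text), here is a hedged sketch using scikit-learn instead, with toy data invented purely for illustration:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.preprocessing import MultiLabelBinarizer

    # Toy question titles and tags; the real project trains on Stack Overflow dumps.
    titles = [
        "How do I merge two dicts in Python?",
        "Segmentation fault when freeing a pointer in C",
        "Centering a div with flexbox",
    ]
    tags = [["python"], ["c", "pointers"], ["css", "flexbox"]]

    mlb = MultiLabelBinarizer()
    Y = mlb.fit_transform(tags)            # binary indicator matrix of tags
    vec = TfidfVectorizer()
    X = vec.fit_transform(titles)          # TF-IDF features from titles

    # One binary classifier per tag.
    clf = OneVsRestClassifier(LogisticRegression()).fit(X, Y)

    probs = clf.predict_proba(vec.transform(["Sorting a list of tuples in Python"]))[0]
    # Rank candidate tags by predicted probability.
    for tag, p in sorted(zip(mlb.classes_, probs), key=lambda t: -t[1])[:3]:
        print(tag, round(p, 2))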
This article focuses on using a deep LSTM neural network architecture for multidimensional time series forecasting with Keras and TensorFlow, applied specifically to stock market datasets to derive momentum indicators of stock price.
The following article sections will briefly touch on LSTM neuron cells, give a toy example of predicting a sine wave, and then walk through the application to a stochastic time series. The article assumes a basic working knowledge of simple deep neural networks.
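The sine-wave toy example is easy to reproduce. A minimal sketch in Keras of the same idea, a single LSTM layer predicting the next point from a sliding window (the article's actual architecture and hyperparameters may differ):

    import numpy as np
    from tensorflow import keras

    # Slice a sine wave into (window -> next value) training pairs.
    series = np.sin(np.linspace(0, 100, 5000))
    window = 50
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    X = X[..., np.newaxis]  # shape: (samples, timesteps, features)

    model = keras.Sequential([
        keras.layers.LSTM(32, input_shape=(window, 1)),
        keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=2, batch_size=64, verbose=0)

    # One-step-ahead forecast from the last observed window.
    print(model.predict(X[-1:], verbose=0))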
I recently wrote a Markov chain package that includes a random text generator. The generated text is not very good.
The rest of this post covers the evolution of the main algorithm.
“Don’t think of the overwhelming majority of the impossible.”
“Grew up your bliss and the world.”
“what we would end create, creates the ground and you are the one to warm it”
“look and give up in miracles”
All of the quotes above were generated by a computer, using a program that consists of fewer than 20 lines of Python code.
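The post's own code isn't reproduced here, but a word-level Markov chain generator really does fit comfortably in that budget. A minimal sketch of the idea, with a tiny inline corpus standing in for real training text:

    import random
    from collections import defaultdict

    def build_chain(text):
        # Map each word to the list of words observed to follow it.
        words = text.split()
        chain = defaultdict(list)
        for cur, nxt in zip(words, words[1:]):
            chain[cur].append(nxt)
        return chain

    def generate(chain, length=12):
        # Walk the chain, sampling each successor uniformly at random.
        word = random.choice(list(chain))
        out = [word]
        for _ in range(length - 1):
            if word not in chain:
                break
            word = random.choice(chain[word])
            out.append(word)
        return " ".join(out)

    corpus = "the world is the ground and the world is yours to warm"
    print(generate(build_chain(corpus)))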
I originally wrote this paper in 1981 for a course in writing research papers at Rose-Hulman Institute of Technology. It was written on a DEC PDP-11/70 computer using the RUNOFF text formatting program, and having it online from the beginning made it easy to save an electronic copy for future use. The instructor, Dr. Peter Parshall (of "Peter Parshall picked apart my perfect paper" fame), awarded the grade of A- to my work.
Part 1 of 2: "The Road to Superintelligence". Artificial Intelligence — the topic everyone in the world should be talking about.
Deep Learning has had a huge impact on computer science, making it possible to explore new frontiers of research and to develop amazingly useful products that millions of people use every day. Our internal deep learning infrastructure DistBelief, developed in 2011, has allowed Googlers to build ever larger neural networks and scale training to thousands of cores in our datacenters. We’ve used it to demonstrate that concepts like “cat” can be learned from unlabeled YouTube images, to improve speech recognition in the Google app by 25%, and to build image search in Google Photos. DistBelief also trained the Inception model that won ImageNet’s Large Scale Visual Recognition Challenge in 2014, and drove our experiments in automated image captioning as well as DeepDream.