Artificial Intelligence and TV Drama



I once read a Wired article reporting that science fiction writers are increasingly being commissioned to write scenarios that help set the direction of corporate management and policy making, and that this is becoming a viable business. To anticipate what technology will be needed in the future, I think it is important to draw inspiration not only from the accumulation of existing technologies but also from the world of fiction, which depicts concrete scenes of use.

This is true not only of novels: in recent TV dramas (especially in Europe and the United States), technology specialists are often brought in during scriptwriting so that the story is not mere fantasy but something that could become reality before long. In the world of artificial intelligence and IT, examples include "Person of Interest", which aired from 2011 to 2016, and "Mr. Robot", which aired from 2015 to 2019.

Person of Interest (POI) is a drama about two artificial intelligences: the Machine, created by a reclusive billionaire computer genius, and Samaritan, which opposes it. The Machine was originally built to prevent terrorist attacks, but it also detects plans for ordinary crimes, and the story follows the billionaire after he leaves the government, working to stop those ordinary crimes. In the real world, a system called PISTA has been built that uses the aforementioned Semantic Web technology to connect and integrate databases of e-mails, travel records, and other data in order to identify terrorism-related activity. Reality and fiction are beginning to overlap.

The cutting-edge technologies depicted in POI include:

(1) Analysis (machine learning) of the audio, images, and video flowing through the infrastructure, and their reconstruction as network data (knowledge data) that transitions along a time axis.

(2) Prediction of future events by simulation, using the results of (1).

(3) Voice-based interaction with people.

(4) Ultra-large-scale, high-speed processing by distributed computing.

(5) Ultra-compression of data.

(6) Ultra-powerful computer viruses.

The pattern extraction from non-text information in (1) is exactly what DNNs do today, and reconstructing the results into network data is also being studied in the Semantic Web world. The real issue is how to perform asynchronous information processing once a time axis is added to the network: network data with different time axes and granularities cannot be processed synchronously. I will discuss (3), (4), (5), and (6) separately when I have the chance, but the most interesting issue at present is the simulation in (2).
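As a minimal sketch of the asynchrony problem (all names here are illustrative, not from any real system): events extracted from different media arrive with different timestamps and granularities, so they cannot be processed in lockstep. One simple approach is to merge the per-source streams into a single time-ordered sequence before updating the network.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Event:
    """One edge extracted from some media stream, with its timestamp.

    Only the timestamp participates in ordering, so events from
    different sources can be interleaved on a single time axis.
    """
    timestamp: float                      # seconds since some epoch
    source: str = field(compare=False)    # e.g. "audio", "video", "text"
    edge: tuple = field(compare=False)    # (subject, relation, object)

def merge_streams(*streams):
    """Merge several time-ordered event streams into one ordered stream."""
    return list(heapq.merge(*streams))

# Two streams with different granularities, each sorted by time:
audio = [Event(0.0, "audio", ("A", "talks_to", "B")),
         Event(2.5, "audio", ("B", "talks_to", "C"))]
video = [Event(1.0, "video", ("A", "enters", "lobby"))]

merged = merge_streams(audio, video)
# merged is now a single timeline: audio, video, audio
```

This only handles ordering, of course; the harder question the drama glosses over is how to reconcile edges that refer to the same entities at different granularities.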

One challenge of current machine learning is that highly accurate results cannot be obtained without a large amount of training data. When considering the application of artificial intelligence to the real world, this is the biggest obstacle, and simulation can be considered the most promising candidate for solving it.
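The idea can be sketched in a few lines (a toy example of my own, not from the article): when real labeled data is scarce, a simulator with known parameters can generate unlimited labeled samples, and a model is then fitted to the simulated data.

```python
import random

def simulate_sample(rng, failure_rate=0.1):
    """Simulate one labeled observation from a hypothetical machine:
    a temperature reading plus a 'failed' label."""
    failed = rng.random() < failure_rate
    temp = rng.gauss(90.0, 5.0) if failed else rng.gauss(70.0, 5.0)
    return temp, failed

def generate_training_set(n, seed=0):
    """Generate n labeled samples from the simulator, reproducibly."""
    rng = random.Random(seed)
    return [simulate_sample(rng) for _ in range(n)]

data = generate_training_set(10_000)

# A trivial "trained model": a threshold midway between the class means.
failed_temps = [t for t, f in data if f]
ok_temps = [t for t, f in data if not f]
threshold = (sum(failed_temps) / len(failed_temps)
             + sum(ok_temps) / len(ok_temps)) / 2
```

Real uses of this idea (physics simulators for robotics, traffic simulators for autonomous driving) are far richer, but the shape is the same: the simulator stands in for the missing labeled data.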

There is a wide range of approaches to simulation as machine learning, such as GANs (a DNN-style approach) and Bayesian inference (a probabilistic approach). I would like to discuss these technologies in the future.

Next time, I will return to the topic of technology and discuss CSS, which is used to adjust the appearance of Elasticsearch.
