I already use AI as a video editor: to track effects, auto-generate captions and subtitles (and, increasingly, translate them), make cuts seamless, automatically splice flattened content, and handle a variety of other workflow tasks. Features like Replace Sky followed Content-Aware Fill and Scaling, and we'll soon have things like Replace Face (deepfake built in). Outside my video-editing work, I've used neural-network generative-art algorithms for a variety of glitch, visual, and even just toy/FAFO uses.

I could see myself generating a character and deepfaking it as a way of puppeteering its performances for the screen, creating relatively photorealistic movies for the price of a bit of my time and some processing power. A lot of AI worries are that it will skip right past that stage and I'll basically just enter a command, "Make a movie about x in the style of y with the following beats," OR that others won't watch my stories because they'll be doing that themselves and won't need me.

It remains to be seen, but despite all the debate, I have a long history of playing with new digital tools alongside others, and what you see every time is that people quickly get bored of the basic outputs and start figuring out ways to put their own work into them.