Friday 16 February 2024

world simulators (11.353)

Although trialled previously by other platforms with varying success (via Waxy), the new text-to-video generation model from OpenAI, Sora, does seem like the prising open of another Pandora's box. Producing rather crisp and wholly convincing clips of up to a minute in length from prompts and instructions, a gallery of samples has been released; for safety and further testing, the vignettes were made by users within the company, with the participation of a select few artists and cinematographers, to assess the model's strengths and weaknesses. Currently there are no plans to release it to the public, and given the pace of change, it will probably seem impressive for only a very short amount of time, though checking out the videos I cannot believe what I'm seeing. Building from random static that is progressively denoised over successive steps, the neural network, named after the Japanese word for sky to express its limitless potential, can also extend existing footage forward and backward in time and replace missing frames. The project, however, has shown difficulty with continuity, the physics of causality and telling right from left.
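The idea of building imagery out of static over successive refinement steps can be sketched in miniature. The toy below is emphatically not OpenAI's method: it corrupts a "frame" (just a list of numbers) into noise and then walks it back step by step, with an oracle standing in for the trained neural network that a real diffusion model would use to predict each correction.

```python
import random

def forward_noise(x, steps, sigma=0.1):
    # progressively corrupt a clean "frame" (a list of floats) into static
    frames = [x]
    for _ in range(steps):
        x = [v + random.gauss(0, sigma) for v in x]
        frames.append(x)
    return frames

def reverse_denoise(noisy, clean, steps):
    # toy reverse process: each step moves the static a fraction of the way
    # back toward the clean frame; a real diffusion model estimates this
    # correction with a learned network rather than peeking at the answer
    x = noisy
    trajectory = [x]
    for t in range(steps, 0, -1):
        x = [xi + (ci - xi) / t for xi, ci in zip(x, clean)]
        trajectory.append(x)
    return trajectory
```

Running the forward pass and then the reverse pass recovers the original frame by the final step, which is the intuition behind generating video "from static", and why the same machinery can be pointed at filling in missing frames.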

synchronoptica

one year ago: the tomb of Tutankhamun (1923) plus assorted links to revisit

two years ago: BBS (1978), the moon of Uranus, traditional Japanese chess plus The Simpsons Sing the Blues (1991)

three years ago: more links to enjoy, another North Korean holiday plus Ladybug Ladybug (1963)

four years ago: jamming with barcodes

five years ago: imperial America, more links worth the revisit plus night-mode