I decided to centralize my posts on my own website, following POSSE (Publish on your Own Site, Syndicate Elsewhere) principles. The original post is on Substack here: https://52weeks.substack.com/p/week-10-deep-space-nsdr
This week I created an 18-minute Non-Sleep Deep Rest (NSDR) meditation video set on a space probe.
Tools I used:
Midjourney - to generate the cockpit
ChatGPT - to iterate on the meditation script
RunwayML - to edit the video and use ‘magic green screen’
Wav2Lip - tried this for open-source lip syncing on a non-human face
D-ID - for tried-and-true lip syncing
Pixabay - for sound effects
Generative.fm - for original generative ambient music
Reaper + Audacity + Free VSTs - to record and add effects to the audio
ElevenLabs - to create synthetic voices (a generated voice and a clone of my voice) and then text-to-speech
VoxVisual - to create the audio waveform representing the ship’s AI
Context
My partner often listens to Yoga Nidra or NSDR meditations, and I occasionally do one instead of a nap (naps make me terribly groggy) if I’m feeling sleepy first thing in the morning or after lunch. I’ve always wanted to make my own.
I don’t think I’ve encountered many guided meditations with a ‘theme’, but why not? I’ve recently read a few sci-fi novels involving cryogenic sleep that lets humans travel vast stretches of time and distance - so I wanted to explore what it might be like to wake from such a sleep and reconnect with your body for the first time in the middle of space.
Process
First I found a few example scripts, then messed around with ChatGPT and my own ideas to get a version that wove in space references and waking from sleep, narrated by a ship’s AI. As usual, I was fighting ChatGPT’s tendency toward the generic. I could have put more time into crafting a witty and thoughtful script, but I wanted to move on to the rest of the project.
I recorded a demo of the track using only my phone microphone. I had planned to re-record it with a proper mic for better quality (as the voice quality seemed pretty important for a guided meditation), but I ran out of time before I flew out to the jungle of Costa Rica this week. The birds, insects and teeming jungle life of my new home for the week meant the rough demo voice was the cleanest I was going to get.

This forced me to figure out a different way to generate new voiceovers - a perfect excuse to try out ElevenLabs Voice Lab, which claims it can ‘clone’ your voice and then generate new audio from text. I uploaded ~5 minutes of my meditation voiceover and cloned it. Below is the result:
I can catch something of myself, but the American twang is way too strong. There is also no way to control the pacing or cadence of the generated voiceover; I wanted a very slow read with long pauses, so this wouldn’t do. Instead, I generated a second, shorter script and had a preset generated voice read it - more on that later.
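For anyone who wants to script this rather than use the web UI, the generation step is a single HTTP request to the ElevenLabs text-to-speech endpoint. Here’s a minimal sketch in Python - the voice ID, API key, and settings are placeholders, and the API may have changed since I wrote this:

```python
# Minimal sketch of an ElevenLabs text-to-speech request (v1 API at the
# time of writing). VOICE_ID, the API key, and all settings are placeholders.
import requests

VOICE_ID = "YOUR_CLONED_VOICE_ID"  # placeholder: ID of the cloned voice

response = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": "YOUR_API_KEY"},
    json={
        "text": "You are waking from a long cryogenic sleep...",
        "voice_settings": {
            "stability": 0.75,         # higher = steadier delivery
            "similarity_boost": 0.75,  # how hard to chase the cloned timbre
        },
    },
)
response.raise_for_status()

with open("voiceover.mp3", "wb") as f:
    f.write(response.content)  # the endpoint returns MP3 audio
```

Note that even here nothing controls pauses or pacing directly - the stability setting is the closest knob.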
For the main meditation I decided to stick with my demo vocals, so I applied some vocal effects in Reaper and Audacity using free VSTs to reduce noise, add chorus, and add reverb. Good enough!
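I built that chain by ear, but the same idea can be scripted. Here’s a rough equivalent using Spotify’s free pedalboard library (not what I actually used - the file names and parameter values are just illustrative):

```python
# A scripted take on the vocal chain I built by ear in Reaper/Audacity:
# gate out room noise, add chorus, then reverb. File names and parameter
# values are illustrative.
from pedalboard import Pedalboard, NoiseGate, Chorus, Reverb
from pedalboard.io import AudioFile

board = Pedalboard([
    NoiseGate(threshold_db=-40.0),           # duck the phone-mic noise floor
    Chorus(rate_hz=0.8, mix=0.3),            # slight doubling for an 'AI' sheen
    Reverb(room_size=0.6, wet_level=0.25),   # spacious, cockpit-sized tail
])

with AudioFile("demo_voiceover.wav") as f:
    audio = f.read(f.frames)
    samplerate = f.samplerate

processed = board(audio, samplerate)

with AudioFile("voiceover_fx.wav", "w", samplerate, processed.shape[0]) as out:
    out.write(processed)
```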
With some voiceover vocals in hand, it was time to bring the scene to life. I turned to Midjourney to explore different variations of the inside of a space probe. Eventually I found the balance between minimal and evocative that I was after:

For video editing this time I wanted to try RunwayML - a web-based video editor that boasts some cool features I wouldn’t know how to achieve (and that I imagine take a lot more skill and experience) in Final Cut Pro / Motion / After Effects. The feature I particularly wanted to use here was the green screen masking, which lets you select certain parts of a frame and extrapolates that selection across the remainder of the clip.


This took a bit of trial and error.
I overlaid this onto a slowed-down planet video so it looks like the probe is drifting into orbit. Next, I wanted to create a visual representation of the ship’s AI - a great excuse to use VoxVisual, the waveform visualizer I created in Week 8! I played my meditation track while screen-recording VoxVisual so the waveform would be synced up with the audio.
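Screen-recording works fine, but for the curious: at its core, a waveform visualizer like this just reads audio samples and plots amplitude over time. A stripped-down, static sketch in Python (assuming a mono 16-bit WAV; file names are made up):

```python
# Toy, static version of a waveform visualizer: read samples from a mono
# 16-bit WAV and plot amplitude over time. File names are illustrative.
import wave
import numpy as np
import matplotlib.pyplot as plt

with wave.open("meditation_track.wav", "rb") as wav:
    rate = wav.getframerate()
    frames = wav.readframes(wav.getnframes())

samples = np.frombuffer(frames, dtype=np.int16).astype(np.float32)
samples /= np.abs(samples).max()  # normalize to [-1, 1]

t = np.arange(len(samples)) / rate
plt.figure(figsize=(10, 2))
plt.plot(t, samples, linewidth=0.3, color="cyan")
plt.axis("off")
plt.savefig("waveform.png", dpi=150, bbox_inches="tight", facecolor="black")
```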
I felt pretty good about this part of the meditation. I had only recorded half of my demo meditation script, and I decided to have the second part (a guided visualization exercise) delivered by incoming alien transmissions. My plan was to generate some funky alien characters and have them ‘phone in’ to deliver their piece. Back to Midjourney:




Peaceful, right?
I picked a few characters and went over to D-ID for the lip syncing - only to discover that D-ID, at this point, only works with human faces! I played around with the stock humanoid avatars, using an image transfer to make them alien-ish while retaining human features (mainly the mouth and eyes):






Not quite the vibe…
I was able to successfully lip sync the two on the right, but I didn’t really dig it. Instead, I turned to Wav2Lip - an open-source lip syncing library that gave me much more control over parameters and the ‘lip box’. I ran it as a Google Colab notebook (sketched below) and tried to get my alien to speak. The results were… underwhelming:
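For reference, the Colab boiled down to cloning the Wav2Lip repo and calling its inference script. The paths below are placeholders, the pretrained checkpoint has to be downloaded separately per the repo’s README, and the --pads argument (plus --box, as a last resort) is the ‘lip box’ control I mentioned:

```python
# Roughly what my Colab cells did: invoke Wav2Lip's inference script on a
# face image plus a voiceover. Paths are placeholders; the checkpoint comes
# from the links in the repo's README.
import subprocess

subprocess.run([
    "python", "inference.py",
    "--checkpoint_path", "checkpoints/wav2lip_gan.pth",  # pretrained weights
    "--face", "alien_character.png",      # still image (or video) of the face
    "--audio", "alien_transmission.wav",  # audio to lip sync against
    "--pads", "0", "20", "0", "0",        # padding: top, bottom, left, right
    "--outfile", "results/alien_synced.mp4",
], cwd="Wav2Lip", check=True)
```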
Time to move on! My solution was to fudge it by adding some grainy hologram effects and finishing the video.
Final touches included adding sound effects (breathing, heartbeat, space ambience, transmission noises) and a brooding original soundtrack that I generated via Generative.fm. Its creator has a great post about Brian Eno’s coining of the term and concept of generative music.
Learnings:
Browser-based video editing is flexible in some ways, but it falls down when travelling in places with weak internet. I wasn’t really able to continue the project while in Costa Rica (which in this case was probably a good thing, so I could unplug!). Similarly, exporting a project from RunwayML takes far longer than from FCP locally. For now, it’s a complement to, not a replacement for, proper editing software.
At least with the tools I’ve tried, non-human lip syncing isn’t easy yet
Recording a good meditation takes practice to get the tone, timing, and content right
I’m still pretty slow with editing!
Next steps:
Record guided meditations from other people with more experience, and make a themed meditation series
Try a narrated audiobook with some generative video elements