Being a lover of procedural workflows, I leveraged Geometry Nodes to populate the scene with cars, trees and chairs, randomising rotation, position and even the vehicle colours for natural variation and realism. This setup meant the density and arrangement of all those objects could be adjusted and updated with just a couple of parameters. Even the label text was driven by Geometry Nodes. Check out the final piece below.
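
For anyone curious what a setup like this looks like in code, here is a minimal sketch of the same scatter idea built through Blender's Python API. It assumes Blender 4.x, a ground mesh named "Ground" and a collection of car models named "Cars" (both hypothetical names); the project's actual node tree was built by hand in the Geometry Nodes editor, with the colour randomisation handled in the materials.

```python
import bpy

# Hypothetical names: a ground mesh called "Ground" and a collection "Cars".
ground = bpy.data.objects["Ground"]
cars = bpy.data.collections["Cars"]

# A minimal scatter tree: distribute points on the ground, instance cars on them,
# and give every instance a random heading.
tree = bpy.data.node_groups.new("CarScatter", 'GeometryNodeTree')
tree.interface.new_socket("Geometry", in_out='INPUT', socket_type='NodeSocketGeometry')
tree.interface.new_socket("Geometry", in_out='OUTPUT', socket_type='NodeSocketGeometry')

nodes, links = tree.nodes, tree.links
group_in = nodes.new('NodeGroupInput')
group_out = nodes.new('NodeGroupOutput')

points = nodes.new('GeometryNodeDistributePointsOnFaces')
points.inputs['Density'].default_value = 0.05          # instances per square metre

info = nodes.new('GeometryNodeCollectionInfo')
info.inputs['Collection'].default_value = cars
info.inputs['Separate Children'].default_value = True  # pick individual car models

instance = nodes.new('GeometryNodeInstanceOnPoints')
instance.inputs['Pick Instance'].default_value = True

heading = nodes.new('FunctionNodeRandomValue')
heading.data_type = 'FLOAT_VECTOR'
heading.inputs['Max'].default_value = (0.0, 0.0, 6.28319)  # random Z rotation, 0-360 degrees

links.new(group_in.outputs['Geometry'], points.inputs['Mesh'])
links.new(points.outputs['Points'], instance.inputs['Points'])
links.new(info.outputs['Instances'], instance.inputs['Instance'])
links.new(heading.outputs['Value'], instance.inputs['Rotation'])
links.new(instance.outputs['Instances'], group_out.inputs['Geometry'])

# Attach the tree to the ground as a Geometry Nodes modifier.
mod = ground.modifiers.new("Car Scatter", 'NODES')
mod.node_group = tree
```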

This project was a collaboration between Noor Ud Din, from Pakistan, and myself. He modelled the initial site map, which was then handed over to me to complete. Despite being a beginner, Noor Ud Din laid a lot of groundwork, enabling me to focus on details such as the car parks, tents and trees. Using the BlenderGIS addon, I was able to extend the scene out to the horizon, downloading data for elevation, distant buildings and areas of forest.

Once the scene was ready, I was concerned about keeping the flythrough animation in step with the voiceover, as timing was crucial. Time was tight and the deadline was final! How was I going to animate everything in sync with a voiceover that runs five minutes? My usual approach would have meant going back and forth between Blender and Premiere, setting time markers, essentially guessing the timing and hoping the animation would line up with the VO. With rendering still a major step I had not even approached, there simply wasn't time for that. Then I remembered Blender has a built-in NLE, so what would happen if I dropped the audio straight in? I assumed the built-in video editor would behave as an entirely separate workspace from the 3D one, but when I hit play on the timeline, to my surprise the audio played instantly alongside the 3D animation: one timeline across all workspaces in Blender. I could trim the voiceover, play it back in real time and scrub through it without a hitch, which let me animate directly to the words. It was an incredible find, and it meant the entire camera animation was completed within a day or two.

The camera animation itself was straightforward. A camera and a camera target were animated individually, with some soft camera shake layered on top to give the movements the natural imperfections of a drone operator.
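
To reproduce the timeline trick described above, the whole thing boils down to loading the voiceover as a sound strip and letting Blender's single timeline do the rest. Here is a minimal sketch using the Python API; in practice I simply dragged the file into the Video Sequencer, and the file path and frame numbers below are placeholders:

```python
import bpy

scene = bpy.context.scene

# Make sure the scene has a sequence editor, then load the voiceover
# as a sound strip starting at frame 1 (placeholder path).
scene.sequence_editor_create()
scene.sequence_editor.sequences.new_sound(
    name="Voiceover",
    filepath="//audio/voiceover.wav",
    channel=1,
    frame_start=1,
)

# Lock playback to the audio clock and enable audible scrubbing,
# so keyframes can be placed directly against the words.
scene.sync_mode = 'AUDIO_SYNC'
scene.use_audio_scrub = True
```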

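The camera-and-target rig can be sketched the same way: a camera constrained to track an empty, both keyframed independently, with a Noise F-modifier on the camera's location curves for the soft hand-held shake. The names, keyframes and noise values below are illustrative, not the project's actual numbers:

```python
import bpy

scene = bpy.context.scene

# A camera plus an empty acting as the camera target (illustrative names).
cam = bpy.data.objects.new("FlythroughCam", bpy.data.cameras.new("FlythroughCam"))
target = bpy.data.objects.new("CamTarget", None)
scene.collection.objects.link(cam)
scene.collection.objects.link(target)

# Aim the camera at the target so the two can be animated independently.
track = cam.constraints.new('TRACK_TO')
track.target = target
track.track_axis = 'TRACK_NEGATIVE_Z'
track.up_axis = 'UP_Y'

# Two placeholder keyframes standing in for the real flythrough path.
cam.location = (0.0, -50.0, 30.0)
cam.keyframe_insert("location", frame=1)
cam.location = (40.0, -20.0, 25.0)
cam.keyframe_insert("location", frame=250)

# A gentle Noise modifier on each location F-curve gives the slow,
# imperfect drift of a drone operator rather than hard jitter.
for fcurve in cam.animation_data.action.fcurves:
    if fcurve.data_path == "location":
        noise = fcurve.modifiers.new('NOISE')
        noise.strength = 0.3   # subtle wobble, in metres
        noise.scale = 40.0     # stretch the noise out over time
```
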
The scene was lit using two HDRIs: one for day and another for night. For the night transition I simply blended the day image into the night image and rotated the HDRI 360 degrees to produce the effect of a hyperlapse. The final piece rendered at ~3 seconds per frame on an RTX 4080 using EEVEE at 1080p25, and we then used Topaz Video AI to upscale it to 4K50. Apart from one or two artefacts, the footage upscaled excellently. All in all, I spent two weeks on modelling, with animation, rendering and editing consuming the final week. I actually finished on time, which allowed me to sprinkle bonus details into the animation like the men-women split, the arrow facing the Kaa'ba, the night transition, and sinking it all at the end. It was an honour to serve my community like this and I hope to make more!
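
In node terms the transition is nothing more than the two environment textures mixed together, sharing one Mapping node whose Z rotation is animated a full turn. Here is a rough sketch of keyframing that from Python, assuming a world node tree that blends the HDRIs through a Mix Shader and uses nodes named "Mapping" and "Mix Shader" with a placeholder frame range (in the project this was simply keyframed by hand):

```python
import bpy
from math import radians

world = bpy.context.scene.world
nodes = world.node_tree.nodes

# Assumed node names: a Mapping node feeding both Environment Textures,
# and a Mix Shader whose factor blends the day HDRI into the night HDRI.
mapping = nodes["Mapping"]
mix = nodes["Mix Shader"]

start, end = 1, 500  # placeholder frame range for the hyperlapse

# Day at the start, night at the end.
mix.inputs['Fac'].default_value = 0.0
mix.inputs['Fac'].keyframe_insert("default_value", frame=start)
mix.inputs['Fac'].default_value = 1.0
mix.inputs['Fac'].keyframe_insert("default_value", frame=end)

# One full 360-degree spin of the environment over the same range.
mapping.inputs['Rotation'].default_value[2] = 0.0
mapping.inputs['Rotation'].keyframe_insert("default_value", frame=start)
mapping.inputs['Rotation'].default_value[2] = radians(360.0)
mapping.inputs['Rotation'].keyframe_insert("default_value", frame=end)
```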

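To put that render speed in context: assuming the final edit runs close to the length of the five-minute voiceover, that is 5 x 60 x 25 = 7,500 frames at 1080p25, and at roughly 3 seconds per frame the whole sequence renders in around 6 hours and 15 minutes on the RTX 4080, which is what made rendering achievable inside that final week.
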
Client

MTA International

Industries

Broadcast, Non-Profit

Services

3D, Video Editing, Sound Design

Credits

Exterior: Noor Ud Din - 3D Modelling; Ataur-Raziq Gonzalez - All aspects
Interior: Ataur-Raziq Gonzalez - All aspects
