Now that my project proposal has evolved into something that I’m more excited about, both conceptually and contextually, my research and progress have been moving along rather nicely.
Research and Inspiration
Over the past week I have found inspiration from several different sources, including YouTube, GitHub, and Black Panther (don’t worry, there won’t be any spoilers).
Animation
As for the animation component of my project, I’ve been looking into lots of tutorials on how to animate objects and characters in Blender. I’ve included some of the ones I found most helpful below:
Unbeknownst to me, I was about to be bombarded with inspirational content: the music-driven visualization that plays during the end credits of Black Panther. It gave me a light-bulb idea: sonify the profile of the Bears Ears topographical data and use that sound to animate something in Blender. That led me to search for tutorials on how to bake audio to an f-curve in Blender, which I have included below:
From this point I’ve been thinking hard about what to animate, and I have a few ideas I’m mulling over. My favorite involves using the sound converted from the topographical data to manipulate a model of Trump, the reverse of what I’m doing with Trump’s speech to affect the 3D model of the landscape. This idea needs to evolve a little more, but I’m excited about where it will go from here.
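To sketch the sonification idea itself: an elevation profile can be mapped to audio (here, elevation drives pitch) and written to a WAV file, which Blender’s “Bake Sound to F-Curves” operator can then turn into animation keyframes. This is a minimal stdlib-only illustration, and the elevation numbers are made-up stand-ins for the real Bears Ears profile:

```python
import math
import struct
import wave

def profile_to_wav(elevations, path, sample_rate=44100, seconds_per_point=0.05):
    """Map an elevation profile to audio: each elevation point becomes a
    short tone whose pitch rises and falls with the terrain."""
    lo, hi = min(elevations), max(elevations)
    frames = bytearray()
    phase = 0.0
    for elev in elevations:
        # Normalize elevation to 0..1, then map to a 200-800 Hz pitch.
        t = (elev - lo) / (hi - lo) if hi > lo else 0.5
        freq = 200 + 600 * t
        for _ in range(int(sample_rate * seconds_per_point)):
            phase += 2 * math.pi * freq / sample_rate
            frames += struct.pack("<h", int(0.6 * 32767 * math.sin(phase)))
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)       # mono
        wav.setsampwidth(2)       # 16-bit samples
        wav.setframerate(sample_rate)
        wav.writeframes(bytes(frames))

# Hypothetical elevation profile (meters), standing in for the real data.
profile = [1800, 1850, 1920, 2100, 2050, 1980, 1890, 1820]
profile_to_wav(profile, "bears_ears_profile.wav")
```

Mapping elevation to amplitude instead of pitch would also work; pitch just makes the terrain easier to hear.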
Process and Progress
The main progress I made this week revolved around converting Trump’s speech to a 3D model. I spent a few hours searching for ways to make this happen and, unfortunately, the first few options didn’t work properly. Several articles proposed using software written by Blair Neal in openFrameworks. The models it generated were beautiful, and I thought I had found what I needed. Unfortunately, the project is no longer supported and I couldn’t get it to work. I’ve still included a video showing it in action, because what it produces is amazing.
I then turned my attention to finding an alternative method and discovered a Processing sketch written by John Locke that seemed like it could give me what I needed. Unfortunately, it was written for Processing 1.5 and much of the code was deprecated. However, I’m much more comfortable with Processing than with openFrameworks, and I was able to port his program to Processing 3. The port took quite a while, but I finally got it working and the output is pretty good.
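The core idea behind turning speech into a model, as I understand it, is to fold the 1-D audio waveform into rows of a grid so the sound becomes a rippled surface. This is not Locke’s code, just a rough Python illustration of that concept, writing a Wavefront OBJ file that Blender can import; the sample data is a synthetic stand-in for the real speech audio:

```python
import math

def waveform_to_obj(samples, cols, path, amplitude=1.0):
    """Fold a 1-D list of audio samples into a grid mesh: each row is one
    consecutive slice of the waveform, and the sample value becomes the
    vertex height (z). Writes Wavefront OBJ."""
    rows = len(samples) // cols
    with open(path, "w") as f:
        for r in range(rows):
            for c in range(cols):
                z = amplitude * samples[r * cols + c]
                f.write(f"v {c} {r} {z}\n")
        # Quad faces stitching neighboring rows (OBJ indices are 1-based).
        for r in range(rows - 1):
            for c in range(cols - 1):
                i = r * cols + c + 1
                f.write(f"f {i} {i + 1} {i + cols + 1} {i + cols}\n")

# Synthetic "speech" waveform standing in for the real recording.
samples = [math.sin(0.3 * n) * math.sin(0.01 * n) for n in range(400)]
waveform_to_obj(samples, cols=20, path="speech_surface.obj")
```

In practice the real sketch also handles amplitude smoothing and scaling for printability, but the fold-and-stitch step is the heart of it.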
The next steps in the process include the following:
- Import the displacement map of Bears Ears into Blender and figure out boolean operations with the model generated from the speech.
- Find objects/models to animate with the sonified topographical data in Blender, following the tutorials.
- Do a test print of the “scarred” landscape.
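Before committing to Blender booleans, the “scarring” step can be prototyped as plain height-field arithmetic: subtract the speech model’s heights from the terrain’s displacement map wherever they overlap. This is a toy sketch with made-up numbers, not the actual Bears Ears data:

```python
def scar_terrain(terrain, scar, depth=1.0):
    """Subtract a 'scar' height field (e.g. derived from the speech model)
    from a terrain height field, clamping at zero so the cut never digs
    below the base plane. Both inputs are equal-sized 2-D lists."""
    return [
        [max(0.0, t - depth * s) for t, s in zip(trow, srow)]
        for trow, srow in zip(terrain, scar)
    ]

# Tiny made-up height fields standing in for the real maps (kilometers).
terrain = [[2.0, 2.1, 2.3],
           [2.2, 2.5, 2.4],
           [2.1, 2.2, 2.0]]
scar = [[0.0, 0.8, 0.0],
        [0.5, 1.0, 0.5],
        [0.0, 0.8, 0.0]]
print(scar_terrain(terrain, scar))
```

The `depth` parameter is a hypothetical knob for how aggressively the speech cuts into the landscape; in Blender the equivalent control would be the boolean object’s scale.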
Reflection
I’m extremely happy with the progress I’ve made this week, and I feel like I’m setting myself up nicely to go into next week ready to print the models I’ve created and start tackling the animation component of this project.
I was a little disappointed that I couldn’t get openFrameworks working, since the program I found looks amazing. But I have no knowledge of C++, so that’s OK. I know this is an opportunity to learn a new framework, but I think it might be a little outside my grasp at this point. Still, it’s more motivation to learn, which is a great thing!
This project has also opened my eyes to a sliver of what’s possible with Blender, which motivates me to dedicate time over the summer to expanding my knowledge of the program.