Tuesday, June 16, 2020

DigitalFruit - Janco Verduin

DigitalFruit is an interview series from Adafruit showcasing some of our favorite digital fine artists from around the world. As we begin this new decade with its rapidly changing landscape, we must envision our path through a different lens.  Over the next few weeks we’ll feature many innovative perspectives and techniques that will inspire our maker community to construct a bold creative frontier.  The only way is forward.

1. Where are you based?

Eindhoven, the Netherlands.

2. Tell us about your background?

I studied Music Technology (majoring in composition) at the HKU in Hilversum with the intent of ending up in the video game industry, making music for games. But once finished, I realized that better coding skills could be helpful (I found programming quite interesting), and after some job hopping, I ended up as a software developer at Not A Number, the company that developed the 3D software Blender. Unfortunately, this was in the early 2000s, and after a boom, the IT bubble popped and I was out of a job (Blender was saved by making it open source).

Since I didn’t want to be developing financial software, I switched careers and went to the Royal Conservatory in The Hague, where I studied classical composition with Louis Andriessen, Martijn Padding, Richard Ayres and Gilius van Bergeijk. In 2009, I finished my masters and ever since I have been a professional composer of contemporary classical music (for lack of a better term).

About five years ago, I wanted to add animations to my recordings of music so that my posts on YouTube would be more visually attractive, and ever since then the idea of “doing something” with visuals has been developing. About three years ago, I started doing these almost daily posts on Instagram for fun, but also as an exercise to develop my visual skills.

 

3.  What inspires your work?  

Ever since my early years at the conservatory, I have been interested in psychology and in how we perceive external stimuli. I have read a lot, from the early Gestalt theories to findings coming out of fMRI research. One personal experience always stuck with me: I noticed how my musical hearing had grown over the years. I still remember the first time I heard the Slayer songs Postmortem and Raining Blood. My initial perception was noise – rain – noise – rain. But after a couple of years of being a metal enthusiast, my hearing had grown more focused: I could hear details in the densest walls of sound with relative ease. This led me to believe that although we all objectively hear the same sounds, we listen to them very differently, and how “trained” your ear is determines what you actually perceive. Each listener has their own personal musical experience and development of hearing skills, and if I want my music to communicate, it should address this. I must accept that my experience can differ a lot from that of another listener.

The consequence of this is that I try to exclude as many extra-musical aspects as possible. The music should be captivating without any prior knowledge, just an open mind and attention. I could use a chord to reference Beethoven, but that only has meaning for the part of the audience that really knows Beethoven. So I treat music simply as sounds positioned in time, create an abstract narrative structured in more and less perceivable ways, and let it function as an acoustic mirror that reflects each listener’s subjective experiences. I am interested in audible processes, but more in an evolutionary way than in a minimal-music way, where the process is often clear and very human. Compared to visual arts, you could say: fewer defined geometric shapes and more gritty organic thingies.

Because contemporary classical music can sometimes be a bit difficult for the uninitiated to know how to listen to (it often doesn’t deal with traditional melodies and harmonies), I was thinking of ways to guide the audience’s ear by visual means. See what you hear and hear what you see. So this has been a longer-term goal, and one of my upcoming projects is a piece for percussion and a visual system (I am probably going to use TouchDesigner for this), where the performer will trigger the visuals but also respond to them, creating a compositional feedback loop. So in that regard it is generative music: each outcome will be different, and there’s no Platonic perfect shape to strive for.

What I didn’t expect when starting these animations was the applicability of ideas from 3D software to music. For instance, the way Adobe After Effects works with keyframes influenced how I compose. So that was an added bonus. But to expand on this, many subjects, especially those that deal with how things change in time, could be applied musically. A couple of years ago, I had this obsession with geology, thinking about how the surface of the Earth was shaped over millions of years; in some ways, this also found its way into my music.
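The keyframe idea he mentions, setting a parameter's value at a few points in time and letting the software fill in everything between, can be sketched in a few lines. This is a minimal illustration of the general technique, not code from his actual workflow, and the `volume` example is hypothetical:

```python
def interpolate(keyframes, t):
    """Linearly interpolate a parameter value at time t from sparse keyframes.

    keyframes: a time-sorted list of (time, value) pairs, like markers on an
    After Effects timeline; values between markers are filled in automatically.
    """
    if t <= keyframes[0][0]:
        return keyframes[0][1]          # before the first keyframe: hold its value
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]         # after the last keyframe: hold its value
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            frac = (t - t0) / (t1 - t0)
            return v0 + frac * (v1 - v0)

# e.g. a volume swell defined by only three keyframes
volume = [(0.0, 0.0), (2.0, 1.0), (4.0, 0.3)]
print(interpolate(volume, 1.0))  # halfway up the swell -> 0.5
```

The musical appeal is that a whole continuous gesture is described by a handful of anchor points, which is close to how a graphic, non-notated sketch of a composition works.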

But if I had to describe inspiration at its core, it is probably: what happens if I do this? Human curiosity, finding new ways to use known tools, stepping beyond intended functionality. The thing I really like about making smartphone art is getting the most out of a limited set of possibilities. I only use a couple of apps, but with their forces combined, I still find new paths to explore and things to combine.

4.  What are you currently working on?

Currently I am finishing some pieces for classical guitar. I always played electric guitar myself (and most of the time, metal), so this was a personal challenge. While composing one piece, I always start thinking about the next piece, and that will probably be a piece for percussion and visuals. And after that, a large piece for orchestra, which I am really looking forward to.

5.  Describe your process and what tools you like to use.

For my music, I use Logic Pro on a Mac Pro. I first make a mock-up using samples from the Vienna Symphonic Library, which sounds amazing. At this stage, I don’t use conventional music notation; everything is done graphically, which gives me a more objective, bird’s-eye view of the composition. Once the piece is more or less finished, I import it into Dorico, work out the notation and tidy things up.

For my smartphone art, I use a Samsung Note 9 and mainly Glitchlab and Mirrorlab (by François Morvillier) for all sorts of manipulation, usually on a quick photo, often of my cat. I noticed that it doesn’t matter much what you use as input: to some filters, it’s all just shapes and colors that get squished into different shapes and colors. Recently I started using ArtRage to have more influence on the input, anticipating the effects I will use later on. Once I have some sort of interesting texture and/or shape, I use Shift, a neat little app that unfortunately isn’t to be found in the Play Store anymore. I really like the lighting and coloring that it adds. And finally, Google’s Snapseed to tidy things up and do some color correction.

For a couple of animations, I used After Effects, and I have started looking into Houdini. But to really use that I need a new computer, and I’m still not sure if I want to get a dedicated Windows or Linux machine for it. And that new Mac Pro is a wee bit expensive.

6.  What does your workplace or studio look like? Do you work in silence or listen to music while you work?

I work at home, have a dedicated room with a desk and my computer and some speakers.  My guitars and amp. And a pillow, for my cat.

While composing, I listen to my own music, but when not composing I usually prefer silence.

My smartphone art I can do anywhere, on the couch, in the train, in bed.

7.  How has technology shaped your creative vision?

Quite a bit, I think. My parents worked at Philips here in Eindhoven, and from early on we had computers, so technology has always been a part of my life. Being a bit nerdy, I always enjoyed things like programming, mathematics and reading about physics and that sort of thing. But creatively, I have always seen technology as a tool: a tool to do things better, differently or in a completely new way.

Coming from a simpler world (I was born in 1972), I do see the trap that technology can become, now that it is so much more available. An artist is only as good as their handling of their tools. With new gear coming onto the market faster and faster, there’s often not enough time to really master your tool. I think it is better to limit your tools and learn to master them, rather than spending all of your money on the newest hype and jumping from preset to preset.

A tricky thing with generative art is that it is very easy to get lots of results, but not all results are equal. You change a parameter and the result is different, but is it better? By getting to know your tool inside out, you know what dials to turn to get certain results, transforming your idea into a work of art.

But in the end, the best thing is the function that a technology offers. This can be big or small, but exploring its possibilities will lead you to things you wouldn’t have thought of before. That’s part of the fun in creating my smartphone art: let’s see what I end up with when I do this, this and that. Et voilà, my cat has turned into a weird 3D blob that looks like a deep sea creature or some alien coral reef.

8.  Any tips for someone interested in getting started in the digital art form?

Make lots of things; don’t try to make your first piece a masterpiece. Watch lots of tutorials. Different people like different things, so never mind the people who don’t like your work. If you like it yourself, there are bound to be other people who like it as well, so find them. Challenge yourself and try to do things a bit differently each time. Persevere. Have fun doing it.

9.  Where do you see generative and digital art heading in the future?

Neither is really my field of expertise, but I think both will eventually become standard in everyday applications. With generative art, you can get complex results from fairly simple rules, and since that is how the universe seems to work, applying it to whatever we want to do seems like a logical step.
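The claim that simple rules yield complex results is easy to demonstrate. A classic illustration (an editor's example, not one the interviewee cites) is an elementary cellular automaton such as Rule 30: a single one-line update rule over three neighbouring cells produces intricate, unpredictable-looking patterns from a lone starting cell:

```python
def step(cells, rule=30):
    """Apply an elementary cellular automaton rule to one row of 0/1 cells."""
    n = len(cells)
    out = []
    for i in range(n):
        # each cell's next state depends only on itself and its two neighbours
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        pattern = (left << 2) | (centre << 1) | right  # 3-bit neighbourhood, 0..7
        out.append((rule >> pattern) & 1)              # look up the rule's answer
    return out

# start from a single live cell and watch structure emerge
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

The entire "design" fits in the number 30 (its eight bits are the lookup table), yet the output never settles into an obvious repeating pattern, which is exactly the simple-rules-to-complexity point.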

Thinking about music, I expect a huge shift when singing can be synthesized in a way that it can’t be differentiated from human vocals anymore. Once this is done, platforms like Spotify can generate their own songs, tailored to the needs of target audiences. It might be the death of real artists as high volumes of music can be created with no royalties having to be paid.

Taking it more towards a Black Mirror direction, maybe those songs can be generated for a truly personal experience, where the music is made and played for you and you only, based on all the data that has been gathered about your life, your mood, your health. Songs that play at the optimal tempo with matching rhythms to guide your morning run, controlling heartbeat and flow. A song that knows exactly what strings to pull to cheer you up or mend your broken heart. If this sounds too artificial to you, empty of any human emotion, composing often works just like that. That special emotional moment in a love ballad where the singer looks up from his piano and looks into the camera, that’s probably an F minor chord if the song is in C major.

Janco Verduin

Links:

www.jancoverduin.com

https://www.instagram.com/janco_verduin

https://www.youtube.com/c/JancoVerduin

SoundCloud

Portrait photo by: Ruben Brands

 

DigitalFruit is curated by Adafruit lead photographer Andrew Tingle

https://www.instagram.com/andrew_tingle

https://www.andrewtingle.com
