I love the possibilities of glitch in art and music. I think it’s the technomancer’s version of free jazz – the courting of randomness as almost a musical device in itself. Dmitry Morozov, by way of Hackaday.com, has built a rather fantastic device that allows fine-grained control of a CD player well beyond what we have easily available.
While the build itself is rather gorgeous, and the video on the project page tantalizes with the machine’s possibilities, I really wish this device could fall into the hands of some musicians so we can see what it could truly do.
So, some musician out there, please reach out to Dmitry to borrow his CD glitch machine and put it to some use!
There’s a lot of hype about machine learning entering the art world, and I’ve seen a lot of projects as well. The Adversarial Feelings project by Lorem seems to be one of the best examples of really integrating ML technologies into human artistic endeavors. I especially like what is described as an interactive process that moves back and forth between human and machine in building the work, as I’m not really on board with the one-way methodology, so common in these projects, where a data set is learned and permutations are belched out.
The integration of three disciplines into one project is also quite exciting. Putting video, sound design and ML specialists together pushes all three further than a single-specialty project could, and I think it hints at the breadth of possibility as ML gets integrated even more deeply into the arts – far better than data scientists working in a vacuum or artists wading into the shallows of the technology.
I definitely suggest checking out the interview and learning more about the project and how the interactions came together between the specialties.
Code and Poetry, a conversation – an article written as a dialogue – lays the foundation for the thought that the two concerns are perhaps the same thing. I’d really like it to take the thought process a bit further and explore the ‘art’ of code, or the programmatic rule sets that poetry already operates within, to intertwine the two further. But it’s nice nonetheless to set the mind to accept the concept that code is perhaps a form of poetry in itself, beyond what it achieves on execution.
Will code eventually be written for its artistic qualities rather than its function? Will there be a programming language that is functional but has the constructs of a villanelle or another ‘conventional’ poetic form? Or some combination of both? Interesting to think about…
I could almost see any one of these ‘artifact’ aspects the author touches on becoming the basis for an artistic exploration or a piece of social commentary all by itself. Even the more concrete example of an artifact growing into something of a societal wave from the innocuous birth described at the beginning of the article tempts, to my thinking, the construction of more pointed fabrications.
The intriguing connection to some of the other subjects on Of Peculiar Utility is how many of these examples seem to have come into being in a random fashion – which is quite similar to the basis of a lot of the artworks in the orbit of this site’s manifesto.
The article is certainly worth a read, especially if you’re a fan of William Gibson’s later work. Enjoy!
Way back in the day – about the aughts – I remember when digital art really became a thing. The tools were in abundance, like Processing.org and Flash. Both had the relatively new and approachable ability to actually program an art piece. It was a game-changing ability that brought digital art out of being a mere extension of hand techniques into something truly new.
First, random data was used to manifest projects. We searched through endless machine-made permutations to find something worthy of hanging the word ‘art’ onto. Eventually we realized that there was no soul in random noise, no matter how pretty it looked. Artists then used datasets from real things to build digital works.
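That first, random-permutation phase can be sketched in a few lines. This is purely illustrative (the filename and dimensions are my own choices, not from any project mentioned here): it writes a grayscale noise image as a plain-text PGM file using only the Python standard library, with a fixed seed so a ‘gem’ found in the noise is reproducible.

```python
import random

# The "random permutations" phase of early digital art in miniature:
# every pixel is drawn from randomness, with no underlying data.
W, H = 64, 64
random.seed(0)  # fix the seed so a chosen result can be regenerated
pixels = [random.randrange(256) for _ in range(W * H)]

# Plain-text PGM (P2): header, then one grayscale value per line.
with open("noise.pgm", "w") as f:
    f.write(f"P2\n{W} {H}\n255\n")
    f.write("\n".join(str(v) for v in pixels))
```

Swap the `random.randrange` call for values pulled from a real dataset and you have, in caricature, the shift the paragraph above describes.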
Things got a bit more meaningful, and a whole new job category developed. Much like how headphone drum and bass was swallowed up by its dancier explorations, the digital artist was swallowed up by its more useful child, data visualization. Suddenly science couldn’t live without art anymore.
Why the devolution? Well, Peter Beshai has written a really great article for Medium that takes the reader through the process of developing some truly amazing visualizations of Twitter conversations. I’d say these visualizations certainly move into the art category.
Perhaps the best part is that Peter has done us a solid by name-dropping, and even link-dropping, the technologies and theory work that went into the project. Even better, he’s given the article a step-by-step visual record of the path taken to the end result. Just the sort of article for sharing here at Of Peculiar Utility.
Link-heavy pages are like hitting gold for me, and this one has a lot of links. While there are some nifty ones in the beginning, the real gold is in the bottom half of the article, which takes you through the hows and whys of a particular process for creating video-based glitch art: datamoshing. And there are links to tutorials!
Along the same lines, ToolFarm has a page with even more links, not only to tutorials but to an array of tools for the craft.
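Datamoshing is normally done with the video tools those tutorials cover, but the core trick is simple: delete the intra (key) frames from a compressed stream so the remaining predicted frames smear their motion over stale imagery. Purely as an illustration of that idea, here is a naive byte-level sketch for a raw MPEG-4 Part 2 stream; the function name and the simplistic parsing are my own, and a real file would need proper demuxing first.

```python
def drop_intra_frames(data: bytes) -> bytes:
    """Naively strip I-frame VOPs from a raw MPEG-4 Part 2 stream."""
    START = b"\x00\x00\x01\xb6"  # VOP (frame) start code
    chunks = data.split(START)
    out = [chunks[0]]  # anything before the first frame is kept as-is
    for chunk in chunks[1:]:
        # The top two bits of the byte after the start code encode the
        # frame type: 00 = I (intra), 01 = P (predicted).
        if chunk and (chunk[0] >> 6) == 0:
            continue  # drop intra frames so P-frames mosh over old pixels
        out.append(START + chunk)
    return b"".join(out)
```

Feeding the result back into a lenient player is what produces the characteristic smearing; strict decoders may simply refuse the stream.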
So, I’m thinking the ultimate awesome thing to do is learn how to auto-magically dump some 4K samples to print…
Apparently there’s a robot art competition – it’s a thing, and it’s been going on for a while, at least according to VentureBeat. VB points out a lot of interesting avenues to explore in terms of what people are doing in the field, and on the surface it’s mostly built upon machine learning.
My favorite aspect of the article revolves around someone who’s produced what sounds like a rather elaborate real-world robot to execute his creations. That, along with the rather long list of software people are using in this competition, is something I intend to explore further – hopefully with another blog post as we go down the rabbit hole together.
Personally, while the bulk of the artwork shown in the article appears more than a little derivative to me, I’m sure this is a phase of ML art similar to the late ’90s, when everyone first discovered digital art production.
That was a terrible time when everything new was basically somebody either painting just like they always had, only with digital means (woo! Wacom tablets!!!), or slicing and stacking myriad filters on stuff (have you seen the latest Kai’s Power Tools??) – sort of an Andy Warhol nightmare. I’m sure he’d have laughed.
Then things got really, really good – and all with the same software, well maybe not Kai’s Power Tools.
UI design sees, at certain points in time, an amazing amount of innovation. When any new software technology starts out, that’s when the most inventive designs come forward. Eventually, though, the more interesting ideas get beaten out of the market and we’re left with what amounts to VB widgets designed to confuse the human mind and hand. But sometimes something new shows up further along in a technology’s maturity.
Such is the case with Oddball. It brings a really interesting dynamic to music creation – a way to engage with software-based music making that could at least open up how we all think about creating music. It’s also a nice way to break out of standard music software paradigms, which is good, because if we don’t watch out we could end up making what’s easiest with the UI instead of what should be made.
For instance, the bouncing aspect offers an opportunity to create second- and third-degree actions after the initial one. Sort of like having a sequenced kick, but now that second kick is nearly infinitely adjustable on the fly. I’m intrigued by how programmable the ball is in terms of what happens at each bounce and how assignable those events would be. So yeah, I’m going to get one.
The product looks to have closed its Kickstarter with flying colors and now has an Indiegogo set up, with a pledge to ship in March of 2019. Create Digital Music has a better write-up of the tech than I do, so I suggest heading there first for more.
“…they are incredibly efficient at guiding viewers toward socially acceptable group behavior and away from actions that aren’t. Memes can keep people in check, allowing them to correct behaviors framed as unsavory or distasteful, because the core feature of viral content is its ability to tap into common, relatable emotions or experiences.”
I’m putting this up here because I think we all, at some point, wish for our work to have some sort of social impact. It’s interesting that it doesn’t have to take the shape of an installation piece – it could be just a bit of well-designed Photoshop kludge collage work.
Recently, through a bit of an obfuscated path, I happened across a group called Obvious, who are working on using machine learning to create artwork. While I (and I’m sure most here) have heard of ML being used to categorize and quantify art, it’s interesting to see whether ML can actually create on its own – or whether it can only elaborately remix prior work.
Looking to find out more about the group, I eventually stumbled upon this Medium article, which discusses the use of ML and whether it constitutes ‘art’ at all.
Curiously, I recall the same sorts of arguments being constructed around generative efforts ten or so years ago. Both orbit around the degree of the human artist’s ‘hand’ in creating the work, and the level of involvement necessary before the work becomes art. A tricky question, to say the least. While purely generative pursuits had to fight the notion that one was merely picking through iterations of randomness to find a usable gem, I’m thinking ML will probably have to fight the notion that it’s an elaborate remix platform – where people search through variants for the same gem.
For me, I’d like to see how the machine learning system creates the work and at what level it is combining prior work or creating new techniques. There is a link to a GitHub repository, so I suppose I have my opportunity to look under the hood.