To me, the really intriguing aspects of Alex Dodge‘s work come from the process of building the pieces. Design Milk has done a nice overview of the methods Dodge uses, so I’ll let you swing over there to read more about it. The ‘tool chain’ Dodge uses is particularly exciting, as it opens new avenues for conventional visual art.
There seems to be a canyon between digital and fine art that is particularly unbridgeable. While many have tried, the two worlds act more like oil and water, resisting bonding on a molecular level. How Dodge uses digital modeling as the sketch stage of the work, without letting it overpower the final piece, represents a possible entry point in combining the two.
Further, using CNC processes to develop the templates that compile and render the image in the real world is a great use of CNC capabilities without the works exclaiming, “Hey, I have a CNC router!”
All this adds up to a good series of examples where a multi-faceted integration of today’s technologies can create work with a humanized feel, and perhaps point a direction forward for fine art in an increasingly digital world.
A personal question that I ruminate on quite often (and probably the reason for this site) is: what is the future of art? I’m specifically talking about the fine art side – painting and drawing.
Currently, I don’t have a comfortable answer. There’s probably going to be some sort of evolution involved. I’ve seen such an evolution happen in the design world; I’m not sure it’s fully complete yet, but the results so far aren’t as exciting as I’d like them to be.
With that ominous and probably depressing intro, I’d like to present something that gives me quite a bit of hope. Tom Whitwell has used an e-paper display to show a movie over a period of a month. While that’s cool all by itself, I think the technique could be a really interesting platform for fine art projects.
Art could now exist in a changing state – a sort of evolution. Or maybe a life-cycle. It becomes even more intriguing when you think of the possibilities in just those two descriptions. I think a commenter on the page (which also explains how to do it) called it ‘living art’ – that’s also an interesting way to think of its potential, as if it continues to play forever.
The important part to consider is the speed at which the image changes – or the lack of speed. It’s certainly not a movie; it’s not 12–30 fps by any means. The slow changes in the image mean each frame is considered far more than in any sort of video. I’d figure the artist could also use frame rate itself as a tool.
That means an artist can spend the time on each frame as one would with a conventional 2D art piece. That’s the aspect that gives me a bit of hope and a potential answer to my previously mentioned ruminations.
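To make that ‘lack of speed’ concrete, here’s a back-of-the-envelope sketch of the arithmetic. The frame interval and film specs below are hypothetical examples of mine, not Whitwell’s actual settings:

```python
# Hypothetical 'slow movie' arithmetic -- all numbers are illustrative.

def frame_to_show(elapsed_seconds, seconds_per_frame):
    """Index of the source frame to display after `elapsed_seconds`."""
    return int(elapsed_seconds // seconds_per_frame)

def playback_days(movie_minutes, source_fps, seconds_per_frame):
    """Total real-world days to play the whole film at the slow rate."""
    total_frames = movie_minutes * 60 * source_fps
    return total_frames * seconds_per_frame / 86400

# A 120-minute film shot at 24 fps, advancing one frame every 150 seconds:
print(playback_days(120, 24, 150))   # 300.0 days
print(frame_to_show(3600, 150))      # after one hour: frame 24
```

At that pace, every frame hangs on the wall for two and a half minutes – effectively a series of paintings that happens to have an order.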
Another paper has come out about how researchers have taught an AI system to paint in the manner of a number of artists. From a machine learning perspective, the project is pretty darn impressive, and I personally find it quite interesting to read through the training process. It’s particularly novel in that it appears not to start from finished works, as most efforts have so far, but rather by watching how people paint and emulating the human process.
I want nothing to take away from what these researchers have achieved, but I guess I’m waiting for the era when AI emulation gives way to the creation of new expressions. I’m hoping that soon enough the more clever artists out there can take this technology and use it to create something spectacular, well beyond mimicking other artists.
A great example of what I’m hoping for is Janelle Shane’s experiment combining a machine learning system (Runway AI) trained on the Great British Bake Off with apparently random squirrel images, coming out with something decidedly different and fresh – in a pretty odd way. The process feels to me not unlike glitching out video, and the effects have the same presence.
To me, this is when AI art will really come into its own, and I’m excited for the first project where someone takes that and bends it to make their own statements.
Two things I struggle with are how music will continue to be its own experience for the listener as technology keeps developing new modes of interactivity, and a similar question about art. When I really work to consider these things, I arrive at the thought that both fine art, as conventionally known, and music, as conventionally consumed, feel hopelessly ‘flat’ in today’s world. This seems especially so in the face of what’s being done with video game technology, AR and so on.
Installations like Steve Parker’s Ghost Box, I think, do a great job of breaking up that ‘flatness’ of presentation in both mediums. The music becomes something tactile and engaging, a human-scale topography that can be explored. A lot of how this works is discussed in a recent exhibition he had at the CUE Art Foundation – which I’m also going to follow and maybe visit in the near future.
Across the installations Steve has put together, he looks to have created experiences that can seem different at every exposure. To me, that’s something special and exciting – especially when thinking about the relevance of the two artforms going forward.
A while ago, somewhere in William Gibson’s Blue Ant series, he laid out the premise of artists doing installations in a sort of augmented reality. These were location dependent works where one had to know the exact coordinates to view the art. It looks like that capability is knocking on our doors with XRAD Remote Positioning.
Of course, it’s currently set up only on iPhones, and it appears to be limited to the city of London. It does, though, open very interesting opportunities for the evolution of art: a whole new palette of location and temporal tools that, aside from graffiti and actually landing installation grants, lets an artist work with spaces and time in new ways. I’m sure interactivity could be integrated as well.
There are, of course, a few other options, like ScrapeKit, which seems to be a bit more inclusive in terms of platforms. From the demos, it looks like this tech is just about ready for experimentation. Has anyone out there started using these platforms for more creative pursuits? If you know of projects, please add to the comments!
I’m thinking, since this sort of art would require an app to access at a certain point in the process, the AR format could also solve one of the more irksome aspects of digital art – how to get paid and how to preserve the value of the work. Perhaps access to the artwork could be managed with…wait for it…a blockchain system*. Ownership could then be controlled, valued and thus sought after, creating the demand to drive a market.
*Yeah, I went there. Had to win the tech buzzword bingo on an art blog!
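For what it’s worth, the core idea is simple enough to sketch in a few lines. This is a toy hash chain of my own invention, purely illustrative – a real system would need signatures, consensus and so on, and none of these names come from any actual platform:

```python
# Toy sketch: hash-chained ownership records for a digital artwork.
# Illustrative only -- not a real blockchain.
import hashlib
import json

def add_transfer(chain, artwork_id, new_owner):
    """Append an ownership record linked to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"artwork": artwork_id, "owner": new_owner, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)
    return chain

def verify(chain):
    """Check each record's hash and its link to the previous record."""
    prev_hash = "0" * 64
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["hash"] != expected or rec["prev"] != prev_hash:
            return False
        prev_hash = rec["hash"]
    return True

chain = add_transfer([], "ar-piece-001", "alice")
chain = add_transfer(chain, "ar-piece-001", "bob")
print(verify(chain))  # True: any later tampering would break the chain
```

The point is just that a tamper-evident ownership history is the easy part; the market around it is the hard part.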
I’m a sucker for process. I’m also a sucker for works that have certain emergent qualities to them. Wu Chi-Tsung has put together a series of rather amazing landscapes using analog collage techniques combined with traditional Chinese brush work (to be honest, I’m also a sucker for the latter, to the point where I seek out books on it whenever I get to China).
To lift the description of the process exactly from the artist: ”a form of ink painting collage…a conventional method, combines with wrinkled-texture cyanotype. Rice papers with photosensitive coating were wrinkled and exposed under sunlight to record the lighting and shading on the paper. A selection from dozens of pieces of cyanotype photographic paper was reorganized and edited before mounting on a canvas. The work is displayed in a style resembling Chinese Shan Shui and photomontage.”
While bearing only the slightest of connections, it reminds me of back in college when I was messing around with doing collage on photocopiers. Naturally, Wu’s process is much more vibrant and intensive. I kind of wish I could find more information about the process, and ideally process pictures documenting the works’ construction.
What I did find was another artist who produces works using the cyanotype process. The best part is that the article goes into detail on how to actually carry out the process. Of course, there are other descriptions of how it works and how you can do it, too. Apparently, cyanotype is the process by which blueprints were made – beware, I recall those blueprints being a pungent, ammonia-laden affair.
I love the possibilities of glitch in art and music. I think it’s the technomancer’s version of free jazz – the courting of randomness as almost a musical device in itself. Dmitry Morozov, by way of Hackaday.com, has built a rather fantastic device that allows fine-grained control of a CD player, well beyond what we have easily available.
While the build itself is rather gorgeous and the video on the project page is tantalizing in terms of the machine’s possibilities, I really wish this device could fall into the hands of some musicians so we can see what it could really do.
Please, some musician out there, reach out to Dmitry to borrow his CD glitch machine and put it to some use!
There’s a lot of hype about machine learning entering the art world, and I’ve seen a lot of projects. The Adversarial Feelings project by Lorem seems to be one of the best examples of really integrating ML technologies into human artistic endeavors. I especially like what is described as an interactive process that moves back and forth between human and machine in building the work. I’m not really on board with the one-way methodology, where a data set is learnt and permutations are belched out, that we see in so many projects.
The integration of three disciplines into one project is also quite exciting. Putting video, sound design and ML specialists together pushes all three further than a project of just one specialty would, and I think it could inform the breadth of possibility as ML gets integrated even more into the arts – far better than data scientists working in a vacuum, or artists wading into the shallows of the technology.
I definitely suggest checking out the interview and learning more about the project and how the interactions came together between the specialties.
An article written as a conversation, Code and Poetry, a conversation, lays the foundation for the thinking that both concerns are perhaps the same thing. I’d really like it to take the thought process a bit further and explore the ‘art’ of code, or the programmatic rule sets that poetry operates within, to intertwine the two further. It’s nice, nonetheless, to set the mind to accept the concept that code is perhaps a form of poetry in itself, beyond what it achieves on execution.
Will code eventually be written for artistic qualities rather than functional ones? Will there be a programming language that is functional but has the constructs of a villanelle or another ‘conventional’ poetic form? Or some sort of combination of both? Interesting to think about…
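Just for fun, the villanelle’s rule set is itself programmable. Here’s a playful sketch of my own (not from the article) that assembles the form’s 19-line skeleton – refrain A1 on lines 1, 6, 12 and 18, refrain A2 on lines 3, 9, 15 and 19 – from two refrains and eleven body lines:

```python
# A villanelle as a rule set: 19 lines, two repeating refrains.
# The filler lines here are placeholders, not poetry.

def villanelle(refrain_a1, refrain_a2, body_lines):
    """Assemble a villanelle skeleton from two refrains and 11 body lines."""
    assert len(body_lines) == 11
    body = iter(body_lines)
    poem = []
    for line_no in range(1, 20):
        if line_no in (1, 6, 12, 18):
            poem.append(refrain_a1)
        elif line_no in (3, 9, 15, 19):
            poem.append(refrain_a2)
        else:
            poem.append(next(body))
    return poem

poem = villanelle("the loop returns to where it began",
                  "the function ends as it must",
                  [f"line {i}" for i in range(11)])
print(len(poem))                                    # 19
print(poem[0] == poem[5] == poem[11] == poem[17])   # True
```

The constraint satisfaction is trivial here, of course; the interesting question is what it would mean for the *executable* structure of a program, not just its output, to obey such a form.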
I could almost see any one of these ‘artifact’ aspects the author touches on becoming the basis for an artistic exploration or social commentary experience all by itself. Even the more concrete example at the beginning of the article, of an artifact developing from an innocuous birth into something of a societal wave, tempts the construction of more pointed fabrications, to my thinking.
The intriguing connection to some of the other subjects on Of Peculiar Utility is how a lot of these examples seem to have come into being in a seemingly random fashion – which is really quite similar to the basis of a lot of the artworks in the orbit of this site’s manifesto.
The article is certainly worth a read, especially if you’re a fan of William Gibson’s later work. Enjoy!