In the design and engineering worlds, renderings have been pushed to be as close to real as possible. A considerable amount of money and development has gone into honing the technology toward this pursuit. The results speak for themselves.
It’s fascinating that you can take this software, whose aim is photo-realism, and turn it on its head to create some extraordinarily creative, otherworldly work. I suppose this follows a similar vein to auto-tune, which has gone from a product purely aimed at polishing less-than-perfect singing into a device that offers greater, albeit otherworldly, creative output.
The [ab]normal project certainly has its foundation in the architectural realm, which is cool and interesting in its own right. I’d like to see how much further rendering software can be pushed into more expressive, less realism-bound territory, and whether that opens up more interesting pursuits of art.
To me, the really intriguing aspects of Alex Dodge‘s work come from the process of building the pieces. Design Milk has done a nice overview of the methods Dodge uses, so I’ll let you swing over there to read more about it. The ‘tool chain’ Dodge uses is particularly exciting, as it opens new avenues for conventional visual art.
There’s a canyon between digital and fine art that seems particularly unbridgeable. While many have tried, the two worlds behave more like oil and water, resisting bonding on a molecular level. Thinking more about it, the way Dodge uses digital modeling as the sketch stage of the work, without letting it overpower the final piece, represents a possible entry point for combining the two.
Further, the inclusion of CNC processes to develop the templates, for what amounts to compiling and rendering the image in the real world, is a great use of CNC abilities without the works exclaiming, “Hey, I have a CNC router!”
All this points to a good set of examples of how a multi-faceted integration of today’s technologies can be used to create work that portrays a humanized interaction, and perhaps helps show a direction forward for fine art in an increasingly digital world.
Another paper has come out about how researchers have taught an AI system to paint in the manner of a number of artists. From a machine learning perspective, the project is pretty darn impressive, and I personally find it quite interesting to read through the training process. It’s particularly unique in that it doesn’t appear to start from finished works, as most efforts so far have; rather, it learns by watching how people paint and emulating the human process.
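To give a rough flavor of the idea (this is my own illustration, not the researchers’ actual architecture), process-based learning means training on the sequence of strokes rather than on the finished canvas. A minimal sketch, assuming strokes are recorded as (x, y, pressure) triples:

```python
# Hypothetical sketch: predict the next brushstroke point from the
# preceding sequence, i.e. learn the *process* rather than the product.
import torch
import torch.nn as nn

class StrokePredictor(nn.Module):
    def __init__(self, features=3, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, features)

    def forward(self, strokes):            # strokes: (batch, time, 3)
        out, _ = self.lstm(strokes)
        return self.head(out)              # predicted next (x, y, pressure)

model = StrokePredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Fake training data standing in for recorded human gestures.
strokes = torch.rand(8, 100, 3)            # 8 gestures, 100 points each
for _ in range(10):
    pred = model(strokes[:, :-1])          # predict point t+1 from points <= t
    loss = loss_fn(pred, strokes[:, 1:])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

A model like this, once trained on real gesture recordings, could be sampled stroke by stroke to “paint” in a human-like order, which is the interesting inversion here.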
I want nothing to take away from what these researchers have achieved, but I guess I’m waiting for the era when AI emulation gives way to the creation of new expressions. I’m hoping that soon enough the cleverer artists out there can take this technology and use it to create something spectacular, well beyond mimicking other artists.
A great example of what I’m hoping for is Janelle Shane’s experiment combining a machine learning system (Runway AI) trained on the Great British Bake Off with apparently random squirrel images, producing something decidedly different and fresh – in a pretty odd way. To me, the process feels not unlike glitching out video, and the effects have a similar presence.
To me, that’s when AI art will really come into its own, and I’m excited for the first project where someone takes that capability and bends it to make their own statement.
A while ago, somewhere in William Gibson’s Blue Ant series, he laid out the premise of artists doing installations in a sort of augmented reality. These were location-dependent works where one had to know the exact coordinates to view the art. It looks like that capability is knocking on our doors with XRAD Remote Positioning.
Of course, it’s currently set up only on iPhones, and it appears to cover only the city of London. It does, though, open very interesting opportunities for the evolution of art. On one level, it creates a whole new way of being creative, with a palette of location and temporal tools at one’s disposal; aside from graffiti and actually landing installation grants, it allows one to work with spaces and time in genuinely new ways. I’m sure interactivity could be integrated as well.
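To give a flavor of how location-gated art likely works under the hood (a guess at the mechanism, not XRAD’s actual API), the viewer’s app just compares GPS coordinates against the artwork’s anchor point and only renders when you’re close enough:

```python
# Minimal sketch of location-gated content; the coordinates and radius
# are invented for illustration.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6371000  # Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

ARTWORK = {"lat": 51.5074, "lon": -0.1278, "radius_m": 25}  # hypothetical London piece

def can_view(viewer_lat, viewer_lon):
    d = haversine_m(viewer_lat, viewer_lon, ARTWORK["lat"], ARTWORK["lon"])
    return d <= ARTWORK["radius_m"]

print(can_view(51.5075, -0.1279))  # True: standing at the secret coordinates
```

Swap the boolean for a time-of-day check and you’ve got the temporal palette too: works that only exist at midnight, or for one week a year.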
There are, of course, a few other options, like ScrapeKit, which seems to be a bit more inclusive in terms of platforms. From the demos, it looks like this tech is just about ready for experimentation. Has anyone out there started using these platforms for more creative pursuits? If you know of projects, please add them to the comments!
I’m thinking that, since this sort of art would require an app to access at a certain point in the process, the AR format could also solve one of the more irksome aspects of digital art – how to get paid and how to preserve the value of the work. Perhaps access to the artwork could be managed with…wait for it…a blockchain system*. Ownership could then be controlled, valued, and thus sought after far more readily. That could create the demand to drive the market.
*Yeah, I went there. Had to win the tech buzzword bingo on an art blog!
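To make the buzzword slightly less hand-wavy, here’s a toy of what “managing access with a blockchain” could mean in practice: an append-only, hash-linked ledger of ownership transfers, where the viewing app grants access to whoever the chain says currently owns the piece. Purely a sketch of the concept; no real chain or platform implied.

```python
# Toy hash-linked ownership ledger, for illustration only.
import hashlib
import json
import time

def make_block(prev_hash, owner, artwork_id):
    block = {
        "prev": prev_hash,
        "owner": owner,
        "artwork": artwork_id,
        "time": time.time(),
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

# Genesis: the artist mints the piece, then sells it on.
chain = [make_block("0" * 64, "artist", "ar-piece-001")]
chain.append(make_block(chain[-1]["hash"], "collector_a", "ar-piece-001"))

def current_owner(chain):
    # Verify each link before trusting the final owner.
    for prev, block in zip(chain, chain[1:]):
        if block["prev"] != prev["hash"]:
            raise ValueError("tampered ledger")
    return chain[-1]["owner"]

print(current_owner(chain))  # collector_a gets viewing rights
```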
I’m a sucker for process. I’m also a sucker for works with certain emergent qualities. Wu Chi-Tsung has put together a series of rather amazing landscapes using analog collage techniques combined with traditional Chinese brush work (to be honest, I’m a sucker for the latter too, to the point where I seek out books on it whenever I get to China).
To lift the description of the process exactly from the artist: “a form of ink painting collage…a conventional method, combines with wrinkled-texture cyanotype. Rice papers with photosensitive coating were wrinkled and exposed under sunlight to record the lighting and shading on the paper. A selection from dozens of pieces of cyanotype photographic paper was reorganized and edited before mounting on a canvas. The work is displayed in a style resembling Chinese Shan Shui and photomontage.”
While the connection is slight, it reminds me of back in college when I was messing around with collage on photocopiers. Naturally, Wu’s process is much more vibrant and intensive. I wish I could find more information about the process, and ideally process pictures documenting the works’ construction.
What I did find was another artist who produces works using the cyanotype process. The best part is that the article goes into detail on how to actually carry out the process. Of course, there are other descriptions of how it works and how you can do it, too. Apparently, cyanotype is the process by which blueprints were made – beware, I recall those blueprints being a pungent, ammonia-laced affair.
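If you want to try it with a digital image, one common modern step (my addition here, not something from either article) is contact-printing from a digital negative. A minimal sketch using Pillow, with a placeholder filename:

```python
# Make a digital negative for cyanotype contact printing.
from PIL import Image, ImageOps

img = Image.open("photo.jpg").convert("L")   # grayscale
negative = ImageOps.invert(img)              # dark areas will block UV
negative = ImageOps.mirror(negative)         # flip so the printed side faces the paper
negative.save("negative.png")                # print on transparency film
```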
Way back in the day – about the aughts – I remember when digital art really became a thing. The tools were in abundance, like Processing.org and Flash. Both had the relatively new and approachable ability to actually program an art piece. It was a game-changing ability that brought digital art out of being a mere extension of hand techniques into something truly new.
First, random data was used to manifest projects. We searched through endless machine-made permutations to find something worthy of hanging the word ‘art’ onto. Eventually we got to the point where we realized there was no soul in random noise, no matter how pretty it looked. Artists then used datasets from real things to build digital works.
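For anyone who missed that era, the permutation-sifting looked roughly like this kind of sketch (Processing in spirit, written here in Python with Pillow):

```python
# Endless machine-made permutations: a random-walk drawing, the kind of
# piece we'd sift through by the hundreds looking for one worth keeping.
import random
from PIL import Image, ImageDraw

W, H = 800, 800
img = Image.new("RGB", (W, H), "black")
draw = ImageDraw.Draw(img)

x, y = W // 2, H // 2
for _ in range(20000):
    nx = max(0, min(W - 1, x + random.randint(-5, 5)))
    ny = max(0, min(H - 1, y + random.randint(-5, 5)))
    shade = random.randint(60, 255)
    draw.line((x, y, nx, ny), fill=(shade, shade, 255))
    x, y = nx, ny

img.save("permutation_0001.png")  # rerun, rename, repeat, judge
```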
Things got a bit more meaningful. What really happened was that a whole new job category developed. Much like how headphone drum and bass was swallowed up by more danceable explorations, the digital artist was swallowed up by its more useful child, data visualization. Suddenly science couldn’t live without art anymore.
Why the devolution? Well, Peter Beshai has written a really great article for Medium that takes the reader through the process of developing some truly amazing visualizations of Twitter conversations. I’d say these visualizations certainly move into the art category.
Perhaps the best part is that Peter has done us a solid by name-dropping, and even link-dropping, the technologies and theory that went into the project. Even better, he’s given the article a step-by-step visual record of the path taken to the end result. Just the sort of article for sharing here at OfPeculiarUtility.
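As a toy taste of the genre (not Beshai’s actual toolchain – he documents that himself), visualizing a conversation usually starts with treating replies as a graph and letting a layout algorithm do the composing. The tweet IDs and edges below are invented:

```python
# Toy reply-graph visualization of a conversation.
import matplotlib.pyplot as plt
import networkx as nx

# Each edge is (reply, original): who replied to whom.
replies = [(1, 0), (2, 0), (3, 1), (4, 1), (5, 2), (6, 5), (7, 5), (8, 0)]

g = nx.DiGraph(replies)
pos = nx.spring_layout(g, seed=42)               # force-directed layout
sizes = [300 * (1 + g.in_degree(n)) for n in g]  # bigger = more replies

nx.draw(g, pos, node_size=sizes, node_color="steelblue",
        edge_color="gray", with_labels=True, font_color="white")
plt.savefig("conversation.png", dpi=150)
```

The artistry lives in everything after this point: color, motion, and which structural features you choose to exaggerate.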
Link-heavy pages are like hitting gold for me, and this one has a lot of links. While there are some nifty ones at the beginning, the real gold is in the bottom half of the article, which takes you through the hows and whys of a particular process for creating video-based glitch art: datamoshing. And there are links to tutorials!
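At its core, datamoshing is just deleting or duplicating the wrong frames: drop the keyframes (I-frames) from a video and the decoder keeps applying motion updates to stale pixels, producing the smear. A crude byte-level sketch of the idea, assuming a raw MPEG-4 (Xvid-style) elementary stream extracted with ffmpeg first; the tutorials linked above cover the real, robust ways:

```python
# Crude datamosh sketch: drop all I-frames after the first from a raw
# MPEG-4 elementary stream. Extract/remux with ffmpeg, e.g.:
#   ffmpeg -i in.avi -c:v copy -an raw.m4v
#   ffmpeg -r 30 -i moshed.m4v -c:v copy moshed.avi
VOP_START = b"\x00\x00\x01\xb6"  # MPEG-4 "video object plane" start code

data = open("raw.m4v", "rb").read()
chunks = data.split(VOP_START)
header, vops = chunks[0], chunks[1:]

kept, seen_first_iframe = [], False
for vop in vops:
    if not vop:
        continue
    is_iframe = (vop[0] >> 6) == 0   # top two bits 00 => intra-coded frame
    if is_iframe and seen_first_iframe:
        continue                     # dropping it makes the glitch
    seen_first_iframe = seen_first_iframe or is_iframe
    kept.append(vop)

open("moshed.m4v", "wb").write(header + VOP_START + VOP_START.join(kept))
```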
Along the same lines, ToolFarm has a page with even more links, not only to tutorials but to an array of tools for the craft.
So, I’m thinking the ultimate awesome thing to do is learn how to auto-magically dump some 4K samples to print…
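That part, at least, is already scriptable. A sketch of the ‘auto-magic’ dump, assuming ffmpeg is installed and with placeholder filenames:

```python
# Grab one frame every ten seconds from a (hypothetical) 4K source as
# lossless PNGs, ready to send to print.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "glitch_4k.mp4",
    "-vf", "fps=1/10",        # one frame every ten seconds
    "frame_%04d.png",
], check=True)
```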
Apparently there’s a robot art competition – it’s a thing, and it’s been going on for a while, at least according to VentureBeat. VB points out a lot of interesting avenues to explore in terms of what people are doing in the field, and on the surface, it’s mostly built upon machine learning.
My favorite aspect of the article revolves around someone who’s produced what sounds like a rather elaborate real-world robot to execute his creations. The rather long list of software people are using in this competition is all stuff I intend to explore further – hopefully with another blog post as we go down the rabbit hole together.
Personally, while the bulk of the artwork shown in the article appears more than a little derivative to me, I’m sure this is a phase of ML art similar to the late ’90s, when everyone first discovered digital art production.
That was a terrible time, when everything new was basically somebody either painting just like they always had, only with digital means (woo! Wacom tablets!!!), or slicing and stacking myriad filters on stuff (have you seen the latest Kai’s Power Tools??) – sort of an Andy Warhol nightmare. I’m sure he’d have laughed.
Then things got really, really good – and all with the same software. Well, maybe not Kai’s Power Tools.