ATEK 635/425 -- algorithmic arts

algorithm: a definite method. this class is an indoctrination into the practice of algorithmic arts, and into practices that derive from cultural threads of science and physical crafts -- viewed from the perspective that these practices are grounded in human cognition and the body, as described by current work in cognitive science. the goal is to provide a basis for using and understanding various technical arts centered around, but not restricted to, expressing ideas in computer code.

12 september, 2016: first class of the year! this is a revised list, modified after the first class.

all knowledge is provisional. there is no single correct way to do or see; there are multiple ways to look and to see. our task as i see it is to view things from multiple angles and to see (really, create) a whole from them. the multiple views of a play (outside, the audience; inside, the actors and script) is one such paradigm.

for 19 september: thomas kuhn's structure of scientific revolutions is worthwhile reading, and i think the overall approach to problem-solving it describes, as an intellectual approach to thinking, needs to be understood and accepted in order to proceed from here. if you were to apprentice in a craft-based shop the old-fashioned way (no longer available in the West outside of a shrinking handful of disciplines) you would get the low-level knowledge along the way. we don't have that luxury; you're grad students (and advanced undergrads), so we must bootstrap. i want to go through most, if not all, of this book. there aren't that many pages, and they are small-format. read as far as you can, but at least through section V, priority of paradigms. he's an old-fashioned writer, but not dense, and it shouldn't require any specific knowledge to understand. he'll mention sciences you (and i) are not familiar with, but read on. our interest, as well as kuhn's i think, isn't in any particular discipline, but in the shape of them and how they change.

for 26 september: thomas kuhn's structure of scientific revolutions pages XX

for 3 october: two readings... first, alan turing's computing machinery and intelligence. the cultural references are quite dated (and british) but i'm sure you can muddle through that. turing is quite extraordinary; you can read about him in some breadth and depth on his biographer's site, alan turing: the enigma, also the title of hodges' biography (an excellent if dense read). be prepared to unpack assumptions made by the author in addressing his audience, which in this case is somewhat ambiguous; turing was a world-class mathematician, not a pop writer, but he generally utterly disobeyed expectations and polite boundaries.

second, read at least the first three chapters of george lakoff and mark johnson's philosophy in the flesh. i mentioned this book earlier, but i neglected to ensure that you got a copy to read, so i can't insist as i'd like that you read chapters 1 through 8. some of you have it; if so, please read it. if not, try to obtain a copy and do so. this is a great book. later on it might be worth us going back over some of Kuhn, in light of the current ideas in metaphor and cognition, but it's early for that right now.

for 24 october: i mentioned in class that you might poke around the intertubes for ideas and examples of the simulation of flight, but on second glance it might be too much distraction. the obsession there is with hardware, which is interesting to gawk at (see below) since the goal is deemed 'obvious', etc, but to me what was interesting was the gulf -- the gap -- between what the 'pilot' trainee is supposed to 'experience' and what the simulation consists of/looks like. whether desktop software program or institute-sized installation, the most subjective part ("simulation of flight") is held in your head while you use/enter this thing that every atom of your existence screams "fake!". and the subjective part is deemed obvious, because we think we share the notion of 'what it is'.

a little more containable, and in some ways more directly pointful, is the panorama. if you get a chance, visit the one in LA that Donovan mentioned in class. extra brownie points for you in class if you do! lol. and if you haven't already, check out the Museum of Jurassic Technology....

i also mentioned briefly the enteric nervous system -- here's a (very old-fashioned looking) page on it (the brain in the gut). considering it from a cognitive point of view is probably still somewhat radical, but a lot less so today than it was a decade ago when i first read about it. in fact it's been in the news how large the effect of the gut's activities is on our entire bodily experience, physical and otherwise. a good idea to lodge in your head...

ok i'll wrap this up: for next week, please re-read those (two) pages, and try to get to the core of the problem presented on the first whole sheet; the three aspects that are in tension. it doesn't matter if you agree, disagree, accept some other solution, etc, but i'm asking you to identify the core of the (philosophical) dilemma it presents. i get that it sounds kind of abstract, but increasingly, i think it's not... more later. here is the book by Gilbert Ryle i mentioned, the handout was from the preface or intro... Gilbert Ryle's The Concept of Mind

OK, for the programmers in the class: here is my 'a definite method'. i suspect some parts will seem elementary to you, and some just off to one side. but it takes a very particular approach to thinking about code. i know you know how storage works (byte, word, etc): but do you understand that those aren't "numbers"? i'm more concerned with setting a rational basis for discussing the encoding of thought. so it's not so much simple as it is basic. or that's my hope... let me know what you think. if you want to talk about code beyond what's appropriate in class (not much; most are not programmers) i'm perfectly happy to arrange time for that.
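to make that bytes-aren't-"numbers" point concrete, here's a tiny python sketch (my own illustration; python, and these variable names, are my choices, not anything from 'a definite method'): the same four stored bytes become text, two different integers, or a float, depending entirely on the interpretation we impose on them.

```python
import struct

# the same four bytes in storage...
raw = bytes([0x41, 0x42, 0x43, 0x44])

# ...mean nothing until we impose an interpretation on them:
as_text = raw.decode("ascii")              # read as ascii characters
as_unsigned = struct.unpack("<I", raw)[0]  # read as a little-endian unsigned int
as_signed = struct.unpack(">i", raw)[0]    # read as a big-endian signed int
as_float = struct.unpack("<f", raw)[0]     # read as a little-endian float

print(as_text, as_unsigned, as_signed, as_float)
```

the bytes never change; only our way of looking at them does. "number" is something we bring to the storage, not something in it.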

for 31 october: read von Uexkull's A stroll through the worlds of animals and men through at least the very top of page 19 (one sentence there). it's an excerpt from Schiller's INSTINCTIVE BEHAVIOR. continuing on the 'where is the "you"'? (even the question is awkward) idea, read closely around the last paragraph of page 7: "...every living creature is an object in his human world"... another one of those fundamental unquestioned worldview thingies. to be blunt, is (a tick, etc) a being in its own umwelt, or are all beings in ours (each of ours...)?

you might find this text a bit tough to follow, for a number of reasons. one, it's translated from German. two, it's not just old (1934?); it (three) was written before certain basic biological "facts" we now take for granted were known. but this happens when you read outside the current "canon" of popular readings. it is sooo worth your effort to get in sync with writings like this, as they contain ideas novel (to you, me, now) that come at you askance, like looking at a sculpture from another angle.

note also that von Uexkull takes the common idea "stimulus --> response", of how our machines behave, and turns it ideologically on its head into a feedback loop: "stimulus --> response --> stimulus removed (satisfied)". you are hungry, you eat, your hunger is satisfied. a small but profound change in looking and seeing. this is ideology at work. you can select ideologies; you just have to identify when you are using one!
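for the programmers: von Uexkull's closed loop can even be sketched as a toy program (mine, not his; purely illustrative). the point of the sketch is that the response is precisely the thing that removes the stimulus, so the loop closes itself -- unlike a one-way stimulus-->response machine.

```python
def feedback_loop(hunger):
    """run until the stimulus (hunger) is extinguished by the response (eating)."""
    steps = 0
    while hunger > 0:   # stimulus present
        hunger -= 1     # response: eat; each bite removes some of the stimulus
        steps += 1
    return steps        # stimulus removed (satisfied): the loop has closed

print(feedback_loop(3))
```

a one-way machine would fire the response and stop; here the response alters the very condition that triggered it.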

last, actually try out the experience tests discussed on page 14. might help to do it with friends.

and finally, for your amusement (and edification, etc): an excerpt from the March 1949 issue of RADIO ELECTRONICS, an article by Ross Ashby on his ELECTRONIC BRAIN! (all-caps my arch emphasis). the magazine cover image is cringe-worthily sexist and amusingly optimistic, at once (ugh), and while the 'features of the brain' article seems laughable today, it is worth looking at anew. read between the lines about the underlying assumptions. i'll point out for you that it is very common to forget that sometimes metaphors are just metaphors; when the brain is described as being "like" a computer, that's only a metaphor. metaphors are quite useful, but they should never be confused with the target of the metaphor. today, the brain is a network. in the past, it's been like a telephone network, a steam engine, clockwork... Ashby was no fool, nor were the other early cyberneticists. we are still stumbling in the dark (erm, metaphorically) with knowledge of how we work, and early attempts often look foolish in hindsight...

for 07 november: simon penny's Experience and abstraction: the arts and the logic of machines. though the whole paper is aimed at media-technology-wielding artists (eg. you) only a few sections are directly pointful to our discussions so far. read: ABSTRACT, 3. ACADEMIC CARTESIANISM AND ARTISANAL ART, and 4. MAN-MACHINE INTERACTION AND TECHNOPHILE RHETORICS OF LIBERATION. written in 2007, some of it (eg. section 5) i think concerns since-outmoded tech (though there is still much there, eg. 'All too often, digital cultural workers seem to think in terms of "how can i (change my behavior in order to) exploit this (available, commodified) technology') but you might find the whole paper useful. and many of the references are great.

then read my 'a definite method', which, amusingly or otherwise, takes for granted some of the things warned against in penny's paper, and emphasizes others. these are tools; we make them fit when they are useful to us, but are free to reject them when they are not. not always easy to decide! you can simply stop reading when it gets too specifically technical. we won't discuss the specifics of coding in class.

i'll just throw this one out there: ingold's Walking the plank, meditations on a process of skill. it's about the exact opposite of disembodied cognition. it really is true -- when cutting material with a saw (steel, in my case, with a hand hacksaw) each stroke is not only different from each other, but from beginning to end is a progressive path, precisely as lakoff and johnson define one. people who play a musical instrument might find parallels here. optional reading, but check it out.

for 14 november: read, again, 4. MAN-MACHINE INTERACTION AND TECHNOPHILE RHETORICS OF LIBERATION from simon penny's Experience and abstraction: the arts and the logic of machines. if you recall from PHILOSOPHY IN THE FLESH, given how our brains are constructed and how we learn, we "can't think just anything" -- our umwelt, though literally the whole world to us, is bounded. "Our computers retain traces of earlier technologies, from telephones and mechanical analogs to directorscopes and tracking to radars." each thing we do or make is inevitably based upon, built upon, things we already know and do. this makes it hard to critique things, but with effort, we can start to see where the bounds lie. looking at things sufficiently far in the past, so that they're at once familiar and unfamiliar, works for me as a key to find a crack in the apparent seamlessness of our technologies. is "VR" as necessarily tied to the idea of embodiment as it seems to be? what might constitute a "VR" that included more of your actual umwelt?

also for 14 november: read/have read a definite method, and be prepared to deal with the graphs/graphing section. i'll explain when we get there; if you are not already comfortably fluent with graphs, this may be one of those things that opens up new ways of thinking. graphs are a way to visualize related things and bring out patterns.
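if graphs are new to you, here is a minimal python sketch (the example and names are mine, not the notation used in 'a definite method'): a graph stored as an adjacency list -- things, and which things relate to which -- plus a traversal that brings out one kind of pattern, namely what's reachable from where.

```python
# a graph as an adjacency list: each node maps to the nodes it points to.
graph = {
    "idea":      ["sketch", "notes"],
    "sketch":    ["prototype"],
    "notes":     ["prototype"],
    "prototype": [],
}

def reachable(graph, start):
    """collect everything you can get to from a starting node --
    a pattern invisible in a flat list but obvious in the graph."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node])
    return seen

print(sorted(reachable(graph, "idea")))
```

the relationships were always there in the data; drawing them as a graph (and walking it) is what makes the pattern visible.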

for 21 november: we'll finally, actually, work through a definite method, read for last week.

for 28 november: ok, all y'all got off easy this semester, readings have been light... i realize it's a lot to read, but for monday we start in on lawrence lessig's CODE 2.0, through the end of PART 1: REGULABILITY, through "book page" 82. the week after we'll talk about how code you write, even in small spaces like an Arduino, is bounded and regulated by "code" in Lessig's sense. bonus points for reading/having read my welcome to 4chan paper, which is in line with this larger discussion, if a bit further downstream from where we are. though you might think some of the examples and language in CODE 2.0 are dated, i can assure you the substance is not. this is a common occurrence in doing research, and the better works, like Lessig's, truly stand up. none of this is in the slightest bit obsolete or irrelevant. it may seem so due to the examples used, and its unfamiliarity may imply it's come and gone... far from the case.

for 5 december: let's talk about the Toyota "unintended acceleration" issue that killed a bunch of people. because this was a U.S. lawsuit, the plaintiffs' technical experts had to explain a very complex issue in terms a smart but non-nerd judge would understand. it's rather involved, but rewarding if you are persistent and read the court transcripts (good coast-to-coast airplane-flight reading). the result is a public record; the files are below (5 files). read the NY Times article (brief), then in the 11 october testimony read a few pages to get a sense of how court transcripts work; it's fairly straightforward. lots of typos; recall this stuff is typed in realtime by a stenographer who can't ask people to slow down or back up. personally i found most of it interesting, but given short notice, read a couple pages to get a feel for it, then jump to page 20 and read on from there. the powerpoint slides are nicely succinct.

we'll also talk more about a definite method and anything you bring in or want to talk about.

for 12 december: here is a pair of readings, neither deep nor difficult, on the surface... AI's Language Problem.pdf appears to be a straightforward summary, a history, of a couple of well-known AI projects, goals and problems (and in that way isn't all that interesting). but i'd like you to consider that the arguments made rest upon at least two questionable assumptions: one, that language is the same thing as intelligence, and two, that playing difficult rule-based games requires intelligence. both assertions have been routinely made, for centuries. neither has any basis in empirical science. before you read (or, given the late date, skim) this article, consider: for assumption 1, are mute people not intelligent? are beings that cannot use human speech not intelligent? would a scholar speaking a language no one else understood be intelligent? for assumption 2, if machines can "solve" chess or go problems, then maybe those aren't so much measures of intelligence as of complexity-scale? and humans are simply good at certain mechanical processes? (we're not good at factoring huge numbers.)

then there's this quite empirical look at cross-species animal communication: When Birds Squawk, Other Species Seem to Listen - NYTimes.pdf. an easy read. i personally find it hard to overlook what certainly appears to be transmission, from observer to listening audience, of nuanced factual observation -- eg. the size and scariness of a nearby predator. ignoring the florid 'nature's music' metaphor (it may be more akin to angry neighbors yelling at each other, who knows?) the interactions, to me, read more like a description of interactions amongst disparate visitors to a marketplace, where not everyone speaks the same language but everyone needs to get certain things across.

what's this got to do with 'algorithmic thinking'? everything and nothing. chess- and go-playing machines, and 'deep learning' systems such as TensorFlow, remain entirely, deterministically algorithmic, even if the complexity is beyond a single person's comprehension. i offer that if machinery can somewhat master the mechanics of speech (speech output, lexicographic breakdown) or complex game play (go, chess) it is because those allegedly unique human abilities aren't; they are things we do, but they are inherently mechanical. the birds in the forest don't worry about such things, as far as we know, yet they fight, fuck, feed, have fun, and teach their young their cultures. when we build AI, VR, AR systems, how can we code the necessary components when we can't even state with any clarity what the goals are? where's the meaning?

all 'readings' files...

errors fixed soon





--end