COMMENTARY GUIDELINES
For many classes, students are required to write short commentaries on the assigned readings. Commentaries are due before class on Canvas.
The goal is to think critically about the assigned papers and understand how they relate to the theme of the course: the intersection of HCI and ML. Therefore, commentaries should not summarize the paper -- everyone reading your commentary will have already read it!
Appropriate topics to address in the commentary include:
- the main takeaways with respect to the theme of the course
- the strengths and weaknesses of the methodology and evaluation
- how the paper relates to the other readings for that class and the other papers discussed earlier in the course
An excellent way to structure your commentary is to discuss two positive and two critical topics for each paper. Each topic should be a short paragraph (about four sentences).
We'd like everyone to rate each paper on a scale of 1 to 5 (with 1 being dull and pedantic, and 5 being enlightening and enjoyable). These numbers will be used to determine which papers to keep on the syllabus for subsequent years.
Additionally, we'd like everyone to brainstorm a discussion question for each paper to help the students leading discussion.
SUBMISSION FORMAT
Please use the following format for commentaries:
(1) Paper Name -- Rating: __/5
(commentary here)
Discussion Question: (discussion question here)
(2) Paper Name -- Rating: __/5
(commentary here)
Discussion Question: (discussion question here)
EXAMPLES
(1) Metamagical Themas -- Rating: 5/5
Two things make this paper very relevant to discussions we have had so far in the course. First, it points us to one possible way that creative people such as designers work (variations of a concept possibly using analogical thinking). Second, it looks into the future and highlights the possible role that computation could play in the creative process.
I thought the paper was very bold and creative itself. Hofstadter puts forth an explanation of creativity that might have sounded trivial or dismissive at the start, but he then builds on it and discusses different parts of his argument with some creative examples. He never uses experimental evidence, but even so he is able to weave together a cohesive, believable explanation.
I thought his vision for how computers could be used in the creative process was remarkable. He sees what Knuth has done in terms of using a computer to twiddle knobs and extrapolates that, in addition to twiddling knobs, computers could in the future actually help find those knobs. That discussion also touches upon some aspects of machine learning and generative models (how do you have a model of all A's that does not lead you outside the space of A's?).
Another thing I liked about the paper is that in several places he points out the difficulty of certain approaches. For example, in discussing analogical reasoning, he highlights the main difficulty of that approach: how do you answer a question such as "why does Chopin sound like Chopin?" We are just beginning to see work that addresses such questions in specific domains.
Discussion Question: The main shortcoming of the paper is that it is one-sided: Hofstadter focuses on explaining one point of view and does not really dwell on other possibilities. For example, are there any other credible mechanisms for creativity? Or is analogical reasoning the only way to arrive at variations on a concept?
(2) How Examples May (Or May Not) Constrain Creativity -- Rating: 4/5
I found it inspiring how this paper revisited an earlier piece of work with a weak argument (more conformity means less creativity) and, through more careful experiments, showed that the argument does not necessarily hold. This is a good example of how the scientific process should work.
The importance of examples, and how they affect the creative process, as demonstrated by Smith et al. and this work, is very relevant to our discussion of data-driven tools for the design process. It suggests that a system that makes it easier to find relevant examples during the ideation phase can be very useful in producing new concepts and ideas. Taking it one step further, we can imagine the tool automatically extracting features from examples (such as the tail) and helping the designer incorporate those into a new design with some modifications.
The idea that examples could potentially boost creativity is also important for justifying our own work. We could have concluded, simply by observing designers at work, that they rely on examples. But without this evidence, it would not be clear whether our tools should support that behavior.
Having said that, this work also has its flaws. First, it appears that the same students participated in all three experiments, and all three experiments had them draw aliens. Could the results have been affected by their having gone through the process earlier? Second, drawing aliens is so far the only creative task for which results have been shown. Could results vary significantly for other tasks? Perhaps a more varied experiment could now be conducted using a crowdsourcing-based approach. Third, from a writing perspective, the paper is not self-contained. It uses terms such as "activation" and "retrieval blocking" without defining them. Even more problematic for understanding the significance of the results, the authors do not explain how the conformity scores are computed, instead referring readers to Smith et al.
Discussion Question: Beyond creativity, does conformity affect the design process in other ways? For example, does it play a role in how design trends propagate?