Info

Projects:


1. Great Performers 2020 —The New York Times Magazine

Link


Year — 2020
Type — Digital
Work — Design, Code

With Kate LaRue and Jacky Myint




2. The Decameron Project —The New York Times Magazine

Link


Year — 2020
Type — Digital
Work — Design, Code

With Kate LaRue and Jacky Myint



3. The Office: An In-Depth Analysis of Workplace User Behavior —The New York Times

Link


Year — 2019
Type — Digital
Work — Design, Motion, Illustration, Code

Collaboration/Creative direction by Tracy Ma





4. The Design Report: Visualizing the 2018 AIGA Design Census

Link

Featured on AIGA Eye On Design


Year — 2018
Type — Digital
Work — Design, Data Visualization, Code




5. The Best of Illustration 2019 —The New York Times

Link


Year — 2019
Type — Digital
Work — Design, Code


Collaboration with Antonio De Luca




6. Sweatpants Forever —The New York Times Magazine

Link


Year — 2020
Type — Digital
Work — Design, Code




7. The Culture Issue: Bad Bunny —The New York Times Magazine

Link


Year — 2020
Type — Digital
Work — Design, Code



8. Looking Back at People Watching the Apollo 11 Mission —The New York Times

Link


Year — 2019
Type — Digital
Work — Design, Code 


Advised by Rumsey Taylor 




9. Various Works —The New York Times

Year — 2019
Type — Digital
Work — Design, Motion, Illustration, Code


All work at The Times



10. Anomie — Lunar Gala 

Year — 2019
Type — Digital, Branding
Work — Design, Motion, Code





11. Signs from God

Year — 2019
Type — Print
Work — Design




12. Abstraction Recognition

Year — 2018
Type — Print, Physical artifacts 
Work — Participatory design, Graphic design, Motion




13. Generous Feedback

Year — 2019
Type — Print, Exhibit
Work — Creative direction, Graphic design, Participatory design

Collaboration with Faith Kim, Juan Aranda, HeeSeo Chun




14. The Reddit Bible 

Year — 2019
Type — Sentence embedding (InferSent), basil.js, Python
Work — Design, lots of code

 

15. Et Cetera

Small sketches and things, incl. a chat room! 




Mark

Abstraction Recognition

In an age when computers can tell us whether or not a photo contains a hot dog, it is easy to extrapolate that, very soon, computers will be able to replicate human creativity. This is a real discussion I often have with engineering friends—I, of course, argue that it is impossible, while said engineers argue that it truly is not far off.


Participants are each given an instruction mapped to one step in a neural network. Instead of literally asking participants to recreate what happens inside the network, however, I translated it into a delineated “process” of human conversation. Using posters made from image-recognition/machine-learning data sets, I asked participants to seek out the patterns, emotions, feelings, and memories that the images triggered, each participant passing their perceptions and perspective on to the next person in the “network.”

This project was advised by Kyuha Shim.



Abstraction Recognition is an interactive, participatory experience inspired by machine learning and image recognition, as well as the work of Luna Maurer and Conditional Design. The basic image-recognition process (just one of many possible applications of neural networks) relies on breaking input images down into pixels and gradually finding recognizable patterns in clusters of pixels. (This is an extremely reductive description of how neural networks work; it is hard to capture even in a few sentences. For this project, I spent some time learning about neural networks and found these videos concise and digestible, even for a non-mathematician like me.)
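The pixel-level pattern-finding described above can be sketched in a few lines. This toy example (my own illustration, not part of the project) slides a small vertical-edge filter over a grid of pixel values, the same basic operation a convolutional network's first layer performs:

```python
def convolve(image, kernel):
    """Slide a 3x3 kernel over a 2D image and return the response map."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(h - 2):
        row = []
        for x in range(w - 2):
            # Weighted sum of the 3x3 neighborhood of pixels.
            total = sum(
                image[y + i][x + j] * kernel[i][j]
                for i in range(3) for j in range(3)
            )
            row.append(total)
        out.append(row)
    return out

# A 4x4 "image": dark on the left, bright on the right.
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]

# A simple vertical-edge kernel: it responds wherever brightness
# jumps from left to right.
edge_kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

response = convolve(image, edge_kernel)
# Every response is 27: the filter fires on the dark-to-bright edge.
```

A real network stacks many such filters, learned rather than hand-written, so that clusters of pixel responses build up into recognizable patterns.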

This project sets out to compare computer recognition to human capability—a pixel is to the computer what an experience or memory is to the brain. Machine learning has enabled deepfakes, contextual image and text generation, and audio indistinguishable from a human voice. The heart of human creativity lies in being able to recognize patterns, abstract them, and generate new things. Computers can now do that too. If we humans are unable to tell the difference between the human-made and the computer-generated, does the origin matter at a certain point?



Final Results






Installation