

Enya Pan is an interdisciplinary designer, research rabbit hole enthusiast, and Brown|RISD Dual Degree Program student studying Computer Science and Business Economics @ Brown University and Graphic Design @ Rhode Island School of Design.

  • A lover of all things linguistics, typography, biomaterials engineering, sustainable urban infrastructure, & ugly ceramic mugs. 











FIG. 3        

Phonetics-Based Machine Orthography

// MAY 2024
CONCEPTUAL
DATA

In linguistic orthography, letterforms all fall within a horizontal and vertical grid. This experiment explores how machines interpret the English language based on phonetics alone (a human sense) and presents that machine-generated language visually. The goal is to see whether two separate AI models can interpret their self-generated letterforms and answer the question: can machines form languages and orthographic writing systems based on phonetics in the same way humans do?

Step 1: Converting from text to image. I wrote a script that extracts phonemes from the English language and used a Stable Diffusion machine learning model to iterate over the prompt "Create an image that sounds like the letter '_' in the English alphabet" 100 times for each letter in the English alphabet. In total, 2,600 letterforms were generated during this process.
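
The original script is not published here, but the generation loop could look roughly like the sketch below. The Hugging Face diffusers library, the runwayml/stable-diffusion-v1-5 checkpoint, and the letterforms/ output folder are all illustrative assumptions, not the actual setup used.

import string
from pathlib import Path

import torch
from diffusers import StableDiffusionPipeline

# Load a text-to-image pipeline (checkpoint choice is an assumption).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

out_dir = Path("letterforms")
out_dir.mkdir(exist_ok=True)

for letter in string.ascii_uppercase:        # 26 letters
    prompt = f"Create an image that sounds like the letter '{letter}' in the English alphabet"
    for i in range(100):                     # 100 iterations per letter -> 2,600 images total
        image = pipe(prompt).images[0]
        image.save(out_dir / f"{letter}_{i:03d}.png")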
Step 2: Converting from image to text. These letterforms were then fed into an alt text generator to produce a text description for each image. Finally, I compared the text descriptions with the original letter in the prompt to test the accuracy of both the text-to-image generation and the image recognition.
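
A rough sketch of this image-to-text pass is below, standing in BLIP image captioning for the unspecified alt text generator; the scoring rule (counting a caption as a hit when it explicitly names the source letter) is an assumption about how the comparison was made.

import string
from pathlib import Path

from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Image captioning model used here as a stand-in for the alt text generator.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

hits = {letter: 0 for letter in string.ascii_uppercase}

for path in sorted(Path("letterforms").glob("*.png")):
    letter = path.stem.split("_")[0]         # recover the prompted letter from the filename
    inputs = processor(Image.open(path).convert("RGB"), return_tensors="pt")
    caption = processor.decode(model.generate(**inputs)[0], skip_special_tokens=True)
    # Count a caption as accurate if it explicitly names the source letter (assumed scoring rule).
    if f"letter {letter.lower()}" in caption.lower() or f"letter '{letter.lower()}'" in caption.lower():
        hits[letter] += 1

for letter, count in hits.items():
    print(f"{letter}: {count}/100 captions mentioned the source letter")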