Introduction to the workshop:
Text-to-image diffusion models are an accessible way for artists, architects, and designers to experiment with Artificial Intelligence. MidJourney is one such platform that engages our imaginations and lets us explore and test design ideas in a fraction of the time it would traditionally take to draw, model, or render. Joshua Vermillion will lead this MidJourney workshop with an emphasis on generating surface, material, and lighting effects, as well as atmospherics. Along the way, we will develop and craft prompts (the instructions we give to the AI model) as a way to sharpen your ideas and augment your design process.
The scope of the workshop:
This fast-paced workshop will focus on generating images of provocative spatial volumes (interior or exterior), along with effects and atmospherics that draw on surface, lighting, texture, material, shape, and more. We will iteratively edit, generate, re-roll, and blend prompts, as well as the resulting images. We will also examine and compare the outputs from various MidJourney models and workflows (for instance, MJ’s v3, v4, and v5 models, the remastering and blend options, and aspect ratio adjustments), all of which affect the coherence and “creativity” of the results.