3 Lessons I Learned From Failing at Our Logo Design


Originally posted on Medium.


I co-host a popular AI Ethics podcast called “The Radical AI Podcast.” In our first month and a half since launch, we received nearly 5,000 unique downloads and trended highly on the ‘New and Noteworthy’ list for technology podcasts on iTunes. Because of the amazing support of the AI Ethics community on Twitter, we were able to interview some of the most influential academics and industry leaders in the global AI Ethics space.


Our First Logo


We had begun as a small project with a simple mission: “To create an engaging, professional, educational, and accessible platform centering marginalized or otherwise radical voices in industry and the academy for dialogue, collaboration, and debate to co-create the field of Artificial Intelligence Ethics.” We expected to get only a few dozen listens and had prepared as such. I had enlisted a fellow Ph.D. student to create our first logo, expecting only a few folks to see it. The logo worked for a small project, but my co-host and I quickly realized that we needed a new logo that represented our vision for the future of the project and the communities we were trying to uplift.

We hired a graphic designer that my co-host had worked with before and set out to envision a new logo. What started as a quick project soon escalated into a deep learning experience for us about technology, industry, graphic design, and our values. In the spirit of vulnerability and education, I want to share the three lessons I learned through a month-long process of logo creation for my own podcast.

Lesson 1: Language Matters

The title of the podcast, “Radical AI,” first emerged in a bar in Boulder, Colorado. I had met my soon-to-be co-host about two weeks prior in Barcelona during a conference on AI Fairness, Accountability, and Transparency (FAT*/FAccT 2020), and we had arranged a follow-up meeting to discuss our research and connect. As we sat down and grabbed a beer, it was clear that not only did we get along quite well, but we also shared the same frustration with how AI Ethics is highlighted in conferences and in the media; i.e., that it is dominated by already well-known white male scholars.

Based on our experiences, we knew that the folks who have historically been highlighted in AI Ethics spaces, and especially in AI Ethics podcasts, represent only the tip of the iceberg of the incredible people in the academy and industry doing groundbreaking AI Ethics work. To put it crudely, we shared a frustration that even in a field with ethics in its name, the stories of folks with historically marginalized identities (i.e., women, Black folks, people of color, and more) were still being buried so egregiously. I ended the conversation saying, off-handedly, “well, I’m thinking about starting a podcast that is… for lack of a better word, more radical about AI Ethics.”

About a week later we met up again to plan, and “The Radical AI Podcast” was born. The problem was that we were still not entirely sure what the term ‘radical’ meant in the context we were working in. Coming from a community organizing and slam poetry background from my years in New York City, I had images of what I meant by ‘radical,’ but it was difficult to find the language for it. The project adapted to be both an interview space to lift up amazing scholars and a space for us to collect a range of definitions for what ‘radical AI’ meant in the first place.

As we began to interview folks we were contacted by a group in California that had already begun organizing under the banner “Radical AI.” We met with them to not only discuss possible partnership but also to see if we had similar definitions of what ‘radical AI’ could mean. Their definition is as follows: Radical AI begins with a shared understanding that society distributes power unevenly — pushing People of Color, Black, Indigenous, Womxn, Queer, Poor, Disabled, and many other communities to the margins. Growing from these roots, Radical AI examines how AI rearranges power and critically engages with the radical hope that our communities can dream up different human/AI systems that help put power back in the hands of the people.

Using this definition as a starting point, we began to plan our logo design. We wanted the design to be inclusive, representative of the people and values we wanted to lift up, and to embody our call for power redistribution. Unfortunately for us, this starting point did not take into account the sheer number of connotations ‘radical’ might carry, and we were about to make a misstep that we would soon regret.

Lesson 2: Identity Representation Matters

We wanted a queer punk androgynous robot with a mohawk. You can’t make this stuff up. Jess and I are both white, straight, and able-bodied. And our first move in designing a logo was to tell our graphic designer to model it off of a character in the as-of-yet still unreleased Cyberpunk 2077. Let’s say this right out of the gate: this was a huge blunder. We own that.


Logo Attempt:

Huge Blunder Edition

Our hearts were surely in the right place. The thought process was ‘well, if we are trying to uplift identities outside of the status quo, then let’s do it boldly and center those identities in our logo.’ Unfortunately, as is the case with many roads to hell, this one was paved with good intentions. At first we could not see that not only were we narrowly defining the identities that might fit into a ‘radical’ bubble, and thereby being reductionist, we were also building branding on identities that neither of us represented.

Thankfully, before getting too close to launching that iteration of the logo, we spoke with several of our friends and mentors in the AI Ethics space. We could tell very quickly that they were not pleased with the direction we had taken. They were helpful and honest in pointing out our blind spots. In particular, they helped us separate the concept of ‘radical’ from any one group or set of identities and ground it instead in our original definition rooted in power and shared values.

Without meaning to, we had let our hidden biases and assumptions about identity groupings get in the way of creating a logo that was truly inclusive. And so we went back to the drawing board.

Lesson 3: Sometimes Simple is Best

We worked with our designer to return to our original definition of Radical AI and the values that underpinned it. From there we identified metaphors that represented what we see as our fundamental project as a podcast. We came up with phrases like ‘an offering,’ ‘planting seeds,’ and ‘uprooting abusive systems of power.’ We then came up with symbols that summarized those metaphors, finally landing on: a hand, soil, and a sapling. As we worked towards a final image our goals were threefold:

1. For the logo to give listeners space to interpret and define ‘radical AI’ for themselves.

2. For the logo to represent resilience and hope.

3. For the logo to be simple, not prescriptive or overly complex.

After a few weeks of iterations we came up with the following logo, which is now the logo that is live for Radical AI:


Logo Mach 3:

Where we ended up.


What we like about this logo is that it is simple, represents our values, and creates a feeling of movement, resilience, and hope. Though we remain humble and open to feedback from our community, the response so far has been positive, and we are happy with where the project has landed.

Conclusion: What Does This Have to Do With Tech Design?

Oftentimes in the tech sector we forget these three lessons: that language matters, that identity representation matters, and that sometimes (read: often) the simplest solution is the best solution to a complex problem.

Through trying, failing, and learning from mistakes in this logo design process I was reminded of just how vital it is that those of us who design and consume tech keep these lessons in mind.

Ethics is, at its core, about how we live our values out in the world. This includes how we create and consume technology. For those of us in design positions, or in positions where we are creating public brands, it is imperative that we remain vigilant about the language we use, the biases we might bring into our work unconsciously, the downstream impacts of the images we share, and the level of complexity we bring to solutions that might be better served by simplicity.
