I readily admit I’m not a data guru like many of my colleagues, which is ironic since data is our bread and butter here at Community Solutions. Using data, we can tell compelling stories about those impacted by potential changes to safety-net programs, or how we as a state have improved in those areas.
But there are times when, as I read and share our work with others, I question how I am represented by the same data we share with you.
Seeing me in the data
Last year was the year of fact sheets for us, and one project examined a few counties: Cuyahoga, Lake, and Geauga. One of the things in those fact sheets is the demographics of residents in the county, including race and ethnicity. For example, according to our Cuyahoga County fact sheet, nearly 400,000 Black people live in the county. I am one of those nearly 400,000. I am also one of the 73.8% of households that earn an income, and among the 45.3% of registered voters who voted in the November 2023 general election.
Over the last few years, our research team has worked hard to ensure that I and others could see ourselves in the data we shared, an endeavor we continue as new tools, technologies, and methodologies emerge. In fact, while developing our latest fact sheet projects, we changed the way we calculate race to better capture nuance and disparities. While I would love to see myself only in the positive data we share, it would be disingenuous of me not to recognize that I also show up in negative data sets, such as data on police encounters, like traffic stops.
A few years ago, as I was coming home from night class, I was pulled over by a patrol car a few streets away from my house. Like many who are pulled over, I was nervous, but I was also fearful, because as a Black woman on an almost deserted street late at night, this traffic stop could go very wrong. If the officer had been overly vigilant, if I had made the wrong move, or if a variety of other “ifs” had occurred, I could have been one of the 304 victims of on-duty police-involved shootings in Ohio. Thankfully, the officers who stopped me let me off with a warning and some fussing about making sure I made a complete stop at the stop signs.
Who’s at the table matters
Good data doesn’t magically happen. It takes years of building and honing systems and software that collect, aggregate, and synthesize raw information into usable data points that help answer specific questions or test theories we may have about certain aspects of our world. And building those systems and software begins with one question: who should be involved?
It was a question like this that a Case Western Reserve University student put to Yeshimabeit “Yeshi” Milner, Founder and CEO of Data for Black Lives and the luncheon keynote speaker at the 2024 Data Day Cleveland conference: how do you address bias when building an artificial intelligence system? In answer, Yeshi told the student to make sure that those impacted by bias, Black and brown folks, were included in building the system. A simple yet complex solution, and one that we at Community Solutions take seriously. When we engage in a research project that requires information from specific groups, we use a combination of techniques to gather that data: focus groups, surveys, and key informant interviews.
There are times when the person or group who should be in the room to help inform a project or piece of research doesn’t get the invitation to the table, even after all good-faith efforts have been made. That doesn’t mean they don’t have a voice. Even after data has been released, people should always ask questions about it. Those questions can help researchers and data professionals address flaws or identify additional data points, producing a stronger data set in the future.
“Depending on the kind of questions we ask, and who we ask the questions to, it can be a really powerful tool to reveal what’s not working…”—Yeshimabeit “Yeshi” Milner, Founder and CEO of Data for Black Lives
More than a statistic
Years ago, it was well known among job seekers and hiring professionals that Black-sounding names received fewer calls for job interviews than white-sounding names. It’s a statistic I am very familiar with, since Ebony, in all its various spellings, was one of the names on that list. A recent study by Forbes found that hiring bias due to Black-sounding names is still happening, which is no shock to many people with ethnic, non-white-sounding names.
Data like name bias, stats showing that Black moms are more likely to be drug tested during a live birth than other moms, or stats that say Black women are three times more likely to die from a pregnancy-related cause than white women, can lead into murky waters if the context of the data isn’t provided. Data without context can cause more harm than good and, at times, can be weaponized to belittle and degrade others.
Bring your own chair
There have been times when my voice and concerns were not considered, when the data shared about me and my community didn’t seem to represent us the way I thought it should, because there wasn’t any context to it, just numbers and hard-to-understand methodologies. Being Black and being a woman, there’s more data on what is negatively affecting me than on what is positively affecting me.
“If they don’t give you a seat at the table, bring a folding chair.”—Shirley Chisholm
There will be plenty of times when systems and projects that impact me won’t invite me to participate in the planning process, but that won’t stop me from having a seat at the table. If I have to make a space for myself to inform and be heard so that I am represented fairly and properly, then, in the spirit of Shirley Chisholm, I’ll bring my own folding chair.