At This Education Nonprofit, A Is for Analytics
Social services agencies are turning to data to find the practices that get the best results.
Christopher House is a Chicago-based education nonprofit founded in 1906 to help immigrant families adjust to life in the United States. More than 100 years later, it has evolved into a family of schools that prepares children and families from low-income households for success in school, in the workplace, and in life by providing a continuum of education. In 2013, Christopher House opened an elementary school in Belmont Cragin to make good on that promise. Now in the facility's second year, the school serves children from six weeks old through second grade, with plans to add a grade every year as its oldest students advance.
Over the last 10 years, says CEO Lori Baas, the organization has focused on high-quality infant, preschool, early childhood, elementary, afterschool, and parent school programs; college and career readiness; and a commitment to using data at every step of every program along that continuum. The organization's director of quality assurance, Traci Stanley, oversees Christopher House's agency-wide database system for tracking student outcomes — data that's used to assess programs and suggest improvements.
MIT Sloan Management Review guest editor Sam Ransbotham spoke to Lori Baas and Traci Stanley about how data informs Christopher House’s educational program.
We’ve done interviews with other organizations that have the classic corporate model. You’re obviously quite different. But you are very data-focused. Was the initial founding of the organization around data, or has this focus on data emerged over the last 10 or 15 years?
Lori Baas: It has emerged over the last 10 years. It started with our board of directors, who were saying, “Lori, you tell us Christopher House has one of the best early childhood programs in the city. How do we know? Tell us what you’re basing that information on. Or you say we’re increasing kids’ grades in elementary school. How do you know?”
So we first started internally — creating or identifying tools to use for assessments, so we'd be able to present that data to our board and to evaluate and improve programs. But then it also evolved into an organizational conversation about how we compare with other organizations in the city.
We now can show, based on the assessments, not only how our kids are improving in their cognitive development, or social-emotional development, but also how we compare to similar organizations. So that was the genesis of the benchmarking collaboration.
In addition, the Chicago Community Trust, which had been funding our internal development of metrics and data, recognized the good work Christopher House had done and asked us to consider how we could expand the impact of its investment. It provided funding for us to identify other partners to create the collaboration.
So it sounds very top-down, if the board of directors instigated a lot of this by wanting numbers behind statements. Is that an accurate depiction?
LB: I think that’s accurate; it’s been driven by the board and our leadership team. In addition to the work we’ve done developing and identifying the tools, we also had to move through a complete culture shift that took several years.
Five years ago it was a challenge to convene data analysis meetings and review the benchmarking data from all agencies.
We’ve gotten to a place now where — and I’m referring specifically to Christopher House — our early childhood program staff and teachers look forward to the opportunity to meet with their peers, analyze the data, and figure out how to do their jobs better. It was a pretty significant culture shift.
You have an organization with people who are probably, I would think, less technical, less data-oriented than maybe a high-tech firm would be. How did you effect that cultural change?
LB: A lot of it was a top-down mandate at the beginning.
Traci Stanley: Leadership had genuine interest and motivation in understanding the impact their programs have on children and families, so it started there.
Also, for the program staff, engaging in a shared-learning collaboration with their peers, where, based on data, they could see who was getting better results and make apples-to-apples comparisons, was something unique that they couldn’t get anywhere else in the city.
So there was a top-down mandate to get it started, but I think I’m hearing you say that maybe there’s peer pressure as part of that, too. They were seeing what others were doing, and able to gauge apples to apples, as you said.
LB: Part of it is peer pressure, but part of it is, people get into this line of work because they really do want to have a positive impact on the world, or a positive impact on teaching kids or counseling parents. And once they can see that the time spent entering the data into the software or participating in a data analysis meeting is actually increasing their ability to have a positive impact, it just creates and builds on their internal motivation.
That makes sense. People want to spend their time helping people and not typing into the computer. Was there any pushback against using these systems or putting the data in?
TS: Yes, when it came to duplication of data entry. We have funder databases, and then there is agency-wide software used as our centralized intake system, so some information needs to be tracked in a couple of databases.
What is helpful to staff about the centralized software is that it’s customizable. A funder may require us to track certain data elements, but using our central system, we can track above and beyond that — what we want to know. I think staff appreciate being able to report on outcomes for funders and still have the ability to learn about additional things that we think are making a difference in programs.
Who within your organization does the actual analysis of the data?
TS: It’s everyone, from the CEO to the program line staff. We’ll pull the data results and look for trends — where scores are increasing or staying constant. We bring that information to a data analysis meeting with program staff and program directors and share a summary of what we’re seeing.
We engage program staff in a conversation and ask: When you look at this data, what do you think about scores that are increasing or decreasing? What do you think is contributing to those results?
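To make that process concrete, here is a minimal sketch of the kind of trend summary Stanley describes pulling for a data analysis meeting. The record layout, program names, and scores are all hypothetical; the interview doesn't describe the actual database schema.

```python
# A minimal sketch of a trend summary for a data analysis meeting.
# The record layout, program names, and scores are hypothetical.

# (program, assessment period, average score) -- illustrative values only
scores = [
    ("early childhood", "fall", 71.0), ("early childhood", "spring", 78.5),
    ("elementary", "fall", 64.0), ("elementary", "spring", 63.0),
]

def summarize(rows):
    """Print whether each program's average score rose, fell, or held steady."""
    by_program = {}
    for program, period, score in rows:
        by_program.setdefault(program, {})[period] = score
    for program, periods in sorted(by_program.items()):
        change = periods["spring"] - periods["fall"]
        trend = ("increasing" if change > 0
                 else "decreasing" if change < 0 else "constant")
        print(f"{program}: {periods['fall']:.1f} -> "
              f"{periods['spring']:.1f} ({trend})")

summarize(scores)
```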
So you have all the raw data in the software, and there’s a process of summarizing it for the meeting or for the board. Who compiles that raw data into these consumable chunks for the board?
LB: Traci, our director of quality assurance, does it.
So, Traci, was that a skill you had two decades ago? Where did that ability come from?
TS: In school, I majored in sociology, and so there were basic statistics classes that I took. When I started at Christopher House, I was, for several years, a director of programs for our adult education and youth development programs. And so it was in that role that our department started measuring outcomes. I definitely enjoyed that aspect of the job, so it’s evolved over time from there.
So, nitty-gritty, how did you actually get more skills along those lines? Did you take classes? Did you Google?
TS: I think it’s some learning as you go. There are networking groups that I’ve participated in with other quality assurance and evaluation folks around the city. Conferences, trainings — a lot of the trainings are through the makers of the software, learning how to create reports using their products.
How savvy are the people who are consuming these reports? The board of directors, for example — are they asking for summary stats, charts, regression models?
TS: Our board asks to see the high-level impact, or high-level results. In the nonprofit field, benchmark statistics are difficult to find. Measures of success vary dramatically, and processes and practices aren’t standardized. So several years ago, when we were sharing outcome results, our board asked: How does Christopher House compare with others? And it led us on a search for benchmark data that really wasn’t available.
So this data collaboration is made up of social service agencies that have volunteered to work together to create the comparative data. We have similar programs and similar target populations, and we have developed a set of common outcomes together that we use to report externally, to our boards, and internally, so that we can all learn from each other and make program improvements based on data.
We’re not researchers. We’re not out to prove anything. We engage in this process to learn from each other and improve programs.
I suspect when the doors close, some people in the organization don’t believe the numbers, or don’t believe the focus on data. We see lots of pushback anytime you measure anything. So what kind of resistance do you feel there’s been to working with data, or to focusing on a number as an outcome measure rather than a qualitative assessment of improvement, for example?
TS: When you’re talking about program staff, there can be resistance when staff are not incredibly data savvy. And so part of the work that we’ve done is training and education to help staff become more comfortable using data.
I think some of the cross-agency trainings we’ve done to get staff speaking a common language and understanding how to use evaluation effectively have alleviated some of that resistance.
LB: I’d also add we have high expectations of staff, so I think there’s some resistance that comes from employees trying to manage multiple priorities.
There are also barriers to fully adopting and investing in the software. The software is expensive, and agencies need to hire an additional staff person to oversee their evaluation system. So if an agency faces financial challenges, this work doesn’t always stay at the top of the list when budget cuts need to be made.
How do you decide where to make that tradeoff, or convince yourself, yes, it is worth a person to do this versus another staff person actually working with a kid?
LB: At Christopher House, it’s a priority to ensure we assess progress toward our mission and continuously improve programs. So we’ve remained committed to evaluation and have invested Traci’s time and my time in trying to recruit, train, and keep staff engaged and committed to evaluating programs based on data.
And corporations, philanthropic foundations, and government entities are continuing to move toward a funding model that funds agencies based on their ability to show measurable impact. So there will be increased opportunities for organizations able to articulate their impact through data. Evaluation opens up funding sources that may not otherwise be there, and doing the work now will prevent losing funding sources in the future as the field continues to move in this direction.
Do you think you have the skills you need in the organization? If you don’t, what would be helpful or useful, or what would you look for, to build up those skills?
LB: Additional resources to help ensure calibration across teaching staff and agencies, so that assessment tools are administered and interpreted consistently. For example, a Christopher House teacher’s expectations for a child’s progress on a developmental scale might be slightly different from those of a teacher at another agency. So resources to increase the reliability of scores would be useful.
We also want to engage an academic partner, both to validate our current metrics and to provide some of the analytical expertise we don’t have internally. As the project manager, Traci has done a good job analyzing data, ensuring data quality, and holding people accountable for their data and program improvement reports. There’s been a significant positive impact on six different organizations and their ability to provide programming, but we’re missing an academic research and evaluation partner.
What do you think is next around data and analytics for your organization, or for your coalition of organizations? Where do you see this going?
LB: Our goal is to continue to grow the number of partners. More partners deepen the pool of data we have to inform practice. The sophistication around using data or software varies from one organization to the next, so part of the process in identifying new partners is ensuring they have the internal capacity to build a strong data tracking system.
TS: To build on the vision Lori shared, I think we also want to use this group and the tools we’ve created to be a model for other groups that want to establish a data collaboration. Right now, most nonprofits that look at historical trends are comparing only against their own data.
So to give an example, a few years ago, Christopher House’s college and career prep program saw that 62% of its students were either maintaining a 3.0 GPA or better or increasing their GPA. When we put our scores next to those of six other agencies and saw we were below the group average, we wanted to do better. And the beauty of collaborating is that we’re in a room with other providers doing exactly the same work. We learned that the top-performing agency in the room offers one-on-one tutoring and one-on-one mentoring; Christopher House’s program, by comparison, had small-group tutoring and mentoring.
So we restructured our program. The initiative engaged more than just the program staff in youth development; our volunteer manager recruited heavily for this program to make sure we had one-on-one mentors for our students. When we re-ran the reports a year later, we saw an increase: 74% of our students had improved their GPA or maintained a 3.0 or higher.
So using the data, we can compare, and what we thought was good we actually identified as needing to be better. We have this opportunity to learn from others and what they’re finding effective in their programs, and we can replicate that. I would like more social service agencies to have similar experiences.
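For readers who want to see the arithmetic behind that comparison, here is a minimal sketch of the benchmark metric Stanley describes: the share of students who either held a 3.0-or-better GPA or raised their GPA, measured against a peer-group average. The student records and peer-agency rates below are illustrative, not Christopher House's actual data; the interview cites only the 62% and 74% figures.

```python
# A minimal sketch of the GPA benchmark described above: the share of students
# maintaining a 3.0+ GPA or improving their GPA, compared with a peer-group
# average. All figures below are hypothetical.

def on_track_rate(students):
    """Fraction of students who held a 3.0+ GPA or improved their GPA."""
    hits = sum(1 for prior, current in students
               if current >= 3.0 or current > prior)
    return hits / len(students)

# (prior-year GPA, current-year GPA) pairs -- illustrative values only
our_students = [(2.4, 2.7), (3.2, 3.1), (2.9, 2.8), (3.5, 3.6), (2.1, 2.0)]

# On-track rates reported by other agencies in the collaboration -- hypothetical
peer_rates = [0.70, 0.66, 0.59, 0.73, 0.68, 0.64]

ours = on_track_rate(our_students)
benchmark = sum(peer_rates) / len(peer_rates)
print(f"our rate: {ours:.0%} vs. group average: {benchmark:.0%}")
```

Putting one agency's rate next to the group average, as in the last line, is the apples-to-apples comparison the collaboration was built for: a single below-average number prompted the program redesign described above.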