Community Colleges Are Launching Programs Based on Vibes. It's Costing Them Everything.
A local employer calls the dean. “We need welders.” The dean nods. An advisory committee meets, agrees it sounds right. Someone pulls a Lightcast report that confirms welding jobs exist. Six months later, a new welding program launches to twelve students in a region where three other colleges already produce more welding graduates than local employers can absorb. Nobody checked. The vibes were strong enough.
The Vibes-Based Program Development Process
Here's how most new workforce programs actually get started at community colleges. Not the version in the strategic plan. The real one.
Step one: an employer calls. Or a board member hears something at Rotary. Or a dean attends a conference and comes back excited about drones, or AI, or cybersecurity, or whatever the AACC keynote was about that year. The signal is anecdotal — a single data point dressed up as market demand.
Step two: someone runs a Lightcast report. This is the “data” part. The report confirms that yes, jobs exist in this field, nationally. Maybe even locally. It shows median wages, growth projections, and a list of related occupations. What it doesn't show: how many graduates your region's employers can actually absorb. Whether three other institutions within 50 miles are already producing that talent. Whether the wage data is for the certificate level you're building or the bachelor's-level role that happens to share a SOC code. Whether local employers actually hire from noncredit programs or exclusively from four-year pipelines.
Step three: the advisory committee meets. These are good people who genuinely want to help. But advisory committees are structurally biased toward confirmation. The dean is asking “should we build this program?” — not “should we build this program instead of something else?” The committee wasn't convened to say no. They were convened to validate a decision that's already emotionally made. Most do.
Step four: curriculum gets developed, faculty get hired (or reassigned), equipment gets purchased, marketing goes out. The institution is now $150K–$250K deep in a bet that was placed on a phone call, a generic data pull, and a room full of people who didn't want to be the one to say “actually, I'm not sure about this.”
“We sunsetted more than 30 underperforming programs — without losing a single faculty member — once we started using real-time labor market data to drive decisions.”
— South Texas College, on shifting to data-driven program review (Community College Daily, 2025)
Why This Keeps Happening
It's not that college leaders are incapable of data-driven thinking. It's that the infrastructure for rigorous program validation barely exists. The tools available are either too broad (national labor market platforms that don't account for local saturation) or too narrow (a single employer's hiring needs extrapolated to represent “market demand”).
The Lightcast Problem
Lightcast is a good tool. It's also a blunt instrument for program-level decisions. A Lightcast report will tell you that “Registered Nurses” are in demand nationally. It won't tell you that your service area already has two nursing programs with waitlists, that the local hospital system exclusively hires BSN graduates, or that the wage premium for your specific credential level is $8,000 less than the number in the report because Lightcast is showing the SOC code average across all education levels. It's labor market data, not program validation.
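To make that gap concrete, here's a minimal sketch in Python. Every number in it is invented for illustration (real shares and wages would come from BLS occupational and educational attainment data, not from this example), but it shows the mechanism: a SOC-code average blends every education level in the occupation, so a certificate program inherits wage expectations its graduates won't see.

```python
# Hypothetical illustration: these shares and wages are made up, not BLS data.
# A single SOC-code wage figure blends every education level in the occupation.
soc_wage_by_education = {
    # education level: (share of employment, typical annual wage)
    "certificate": (0.30, 52_000),
    "associate":   (0.40, 58_000),
    "bachelors+":  (0.30, 70_000),
}

# What a generic report shows: one blended figure for the whole SOC code.
blended_wage = sum(share * wage for share, wage in soc_wage_by_education.values())

# What your graduates actually face: the certificate-level wage.
certificate_wage = soc_wage_by_education["certificate"][1]

print(f"Blended SOC-code wage:  ${blended_wage:,.0f}")      # $59,800
print(f"Certificate-level wage: ${certificate_wage:,.0f}")  # $52,000
print(f"Overstatement:          ${blended_wage - certificate_wage:,.0f}")  # $7,800
```

In this toy example the blended figure overstates certificate-level pay by nearly $8,000, the size of the gap described above, and nothing in a standard report flags it.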
The Advisory Committee Problem
Advisory committees serve a real purpose — employer input is essential for curriculum alignment. But they're being asked to do something they're not designed to do: validate market demand at the program level. A local employer saying “I need five welders” is not the same as “this region can sustain a new welding program at your institution.” One is a hiring need. The other requires competitive analysis, wage benchmarking, enrollment modeling, regulatory review, and institutional capacity assessment. No advisory committee is doing that in a two-hour meeting.
The Time Problem
Doing this analysis properly — competitive landscape, local employer demand, wage data at the right credential level, regulatory requirements, enrollment projections, financial modeling — takes weeks of a program developer's time. Most CE directors and workforce deans don't have weeks. They have a board meeting in March and a catalog deadline in April. The vibes-based process isn't chosen because it's preferred. It's chosen because it's fast. And fast beats thorough when you're understaffed and the president is asking what's new for fall.
The Real Cost of Getting It Wrong
When a new program underperforms, the costs compound in ways that aren't always visible on a budget line.
The direct costs are real: faculty salaries (or overload pay), equipment purchases, facility modifications, marketing spend, curriculum development time. For a typical noncredit workforce program, this ranges from $50K for a lean launch using existing faculty to $250K+ for programs requiring specialized equipment or new hires. That's money that could have funded a program with actual demand.
But the opportunity cost is worse. Every program slot is a strategic choice. Launching a drone technology certificate because it sounded exciting at a conference means not launching the industrial maintenance program that your region's manufacturers are begging for. The institution didn't just lose the money it spent on drones. It lost 12–18 months of momentum in a market where timing matters, and it lost credibility with the employers who needed something else.
Then there's the sunset problem. Colleges are institutionally bad at killing underperforming programs. Faculty get attached. Sunk cost fallacy kicks in. “Give it one more semester” becomes a five-year slow bleed. South Texas College sunsetted 30+ programs when they finally ran the data — and they managed to do it without losing faculty, because the data made the case that humans couldn't. Most institutions never get that far. The underperforming program just sits there, consuming resources, occupying a catalog slot, and slowly eroding institutional focus.
Stop guessing. Start validating.
Wavelength runs a 7-agent validation pipeline against BLS wage data, IPEDS completions, state workforce priorities, and live employer demand — so you know whether a program will work before you spend $200K finding out.
See How Validation Works →

What Validation Actually Looks Like
Data-driven program validation isn't about replacing human judgment. It's about giving human judgment something to work with besides a phone call and a Lightcast screenshot.
A proper validation answers seven questions before a dollar gets spent:
Is there actually demand?
Not nationally — in your service area, for the credential level you're building, from employers who hire from programs like yours.
Who else is already doing this?
How many completions are nearby institutions producing? Are employers already saturated with graduates? (A quick version of this check is sketched after this list.)
What do the wages really look like?
At the certificate level, not the bachelor's level that shares the same SOC code. Entry wages, not median — because your graduates are entry-level.
Can you afford to run it?
What's the break-even enrollment? What equipment costs are year-one vs. recurring? What does the 3-year financial model look like at 60% capacity? (The sketch after this list shows this math in miniature.)
Does it fit your institution?
Do you have the faculty expertise, the physical space, the accreditation runway? Or are you building from scratch in a field you've never touched?
What are the regulatory requirements?
State board approval timelines, programmatic accreditation, clinical site agreements, licensure alignment — any of these can add 6–12 months or kill the program entirely.
Are employers ready to hire from you specifically?
Not "are they hiring" — are they willing to hire from a noncredit community college program, and do they have a track record of doing so?
Most institutions are answering maybe two of these seven questions — and answering them loosely. The other five are assumptions. When a program underperforms, it's almost always because one of the unanswered questions had a bad answer that nobody thought to check.
The Workforce Pell Accelerant
Everything described above is about to get more consequential. Workforce Pell launches July 1, 2026, opening federal financial aid to short-term programs for the first time in decades. That means more money flowing into workforce programs, more institutions launching them, and more scrutiny on outcomes.
The institutions that validate rigorously will launch programs that enroll well, place graduates, and attract more Pell funding. The institutions that launch on vibes will burn through their window, produce mediocre outcomes, and face the kind of accountability reporting that turns a bad program decision into an audit finding.
With Iowa already moving toward unit-level ROI reporting for community college programs, the margin for error is shrinking. Every program you launch will have a measurable outcome attached to it. The question isn't whether your programs will be evaluated — it's whether you'll evaluate them before you launch, or after they've already underperformed.
Validate before you build.
Wavelength's Program Validation Report answers the seven questions that matter, using verified government data instead of vibes, in days rather than months. Under $5K: priced so a dean can approve it without board review.
Get a Validation Report →