Thursday, July 25, 2013

Quality of Computer Science Higher-Education, Part 1

A recent discussion on Hacker News about H-1B quotas and tech workers raised a very interesting question for me. The observation is that US companies are finding it hard to staff programming positions. Some comments suggested that higher education is producing a lot of unqualified graduates; there was also a claim that the quality of CS education has declined substantially. Having been on all three sides alluded to in this discourse, hiring, teaching, and job-seeking, I thought it would be interesting to see what data we have and what we can glean from it.

Aside: I would like to collect some data on the whole fizzbuzz phenomenon (i.e., the majority of job applicants being unable to program fizzbuzz). If you would like to contribute your perspective and see the aggregate results of where the skills gap is, please consider taking my survey.
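For readers who haven't run into it, fizzbuzz is a deliberately trivial screening exercise: print the numbers 1 through 100, substituting "Fizz" for multiples of 3, "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both. A minimal sketch in Python (any reasonable variant of the rules works; this is just the common formulation):

    # FizzBuzz: the canonical interview screening exercise.
    for n in range(1, 101):
        if n % 15 == 0:        # divisible by both 3 and 5
            print("FizzBuzz")
        elif n % 3 == 0:
            print("Fizz")
        elif n % 5 == 0:
            print("Buzz")
        else:
            print(n)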

Though an imperfect measure for a variety of reasons, foremost the fact that test-takers are only a fraction of the general job-seeking population, the GRE Computer Science Subject Test gives us a glimpse of the situation with computer science students. According to the National Center for Education Statistics, mean scores from 2003 to 2009 remained largely flat: 712, 715, 715, 717, 715, 712, 708. This mirrors the absence of any trend over the same period in the other subject tests, such as biology, chemistry, mathematics, and literature.
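To put "largely flat" in perspective, here is a quick back-of-the-envelope check (a Python sketch using the scores quoted above):

    # Mean GRE CS subject test scores, 2003-2009, as quoted above.
    scores = [712, 715, 715, 717, 715, 712, 708]
    mean = sum(scores) / float(len(scores))  # about 713.4
    spread = max(scores) - min(scores)       # 9 points
    print(mean, spread)

A nine-point swing on the subject tests' 200-990 scale is hardly evidence of a collapse in quality.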

The NSF occasionally sponsors research on retention rates in STEM disciplines. According to a University of Oklahoma study covering 119 colleges from 1992 to 1998, the science and engineering (S&E) graduation rate was 38%. Of course, most of this work focuses on improving retention rates, which may affect the diversity of the graduates we're seeing.

Lenore Blum of CMU and Jane Margolis of UCLA have conducted extensive longitudinal studies on the challenges facing women in introductory computer science course sequences. Blum's work identified four problem areas that contributed to massive attrition: experience gap, confidence doubts, curriculum and pedagogy, and peer culture.

Nevertheless, we are faced with a lot of anecdotal evidence (from personal experience and more than a little hearsay) that many job applicants with Computer Science degrees can't program. This is a very unsettling observation. I don't think the solution is to call these people idiots and complain about the state of education. The American higher education system is the jewel of the nation, attracting waves of students from overseas and producing the very talent that created the modern information economy. Furthermore, claims of the failure of US university Computer Science departments go far beyond what is observed in other disciplines; generally speaking, EE grads know how to design a basic flip-flop. So I think it is very important to figure out why these interviewees are failing and how educators, hiring managers, and students can work together to improve the candidate pool and the hiring experience for everyone.

Research from more than a decade ago reports that programming competency isn't readily found in students who have completed their first or second course. A study of 216 first-year students across four universities, each of whom had completed an introductory course, yielded an average score of only 22.89 out of 110 on a set of exercises that asked the students to implement an RPN and infix calculator. The results turned out to be bimodal, as is the case with many assessments of this type. Unsurprisingly, the data also shows that most of the attrition in CS programs occurs in the second year, an observation that is not unique to CS. But before we slam US universities: the study included both US and non-US institutions. The authors did, however, find statistically significant differences in performance between schools. Similarly discouraging results were found for students' skills at tracing and reading code, software design, and testing, even for students near graduation. In the software design study, the data did show the number of courses taken to be positively correlated with performance; the authors speculate that this reflects the student's degree of interest.

Taking these studies at face value, we would have to conjecture that university Computer Science programs were never good at producing programmers. But how did the instructors of these courses fail to identify the students who had not met the minimum standards? In a SIGCSE '04 paper, Daly and Waldron argue that the usual assessment tools, such as exams and programming assignments, fail to correctly assess programming ability. They propose programming lab exams as a more appropriate tool.
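For context, the calculator assignment in the multi-institution study above is roughly this kind of exercise: evaluate a postfix (RPN) expression such as "3 4 + 2 *" using a stack. Here is a minimal sketch in Python; the space-separated token format and the four-operator set are my illustrative assumptions, not the study's actual specification:

    # Sketch of an RPN evaluator: push operands, apply each operator to
    # the top two stack entries. Token format and operator set assumed.
    import operator

    OPS = {'+': operator.add, '-': operator.sub,
           '*': operator.mul, '/': operator.truediv}

    def eval_rpn(expr):
        stack = []
        for token in expr.split():
            if token in OPS:
                b = stack.pop()   # right operand is on top
                a = stack.pop()
                stack.append(OPS[token](a, b))
            else:
                stack.append(float(token))
        if len(stack) != 1:
            raise ValueError("malformed expression")
        return stack[0]

    print(eval_rpn("3 4 + 2 *"))  # 14.0

That students who have completed an introductory course average barely 20% on exercises of this flavor is what makes the study's numbers so striking.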
