The other day, when I had the mini-research breakthrough on the problem that's vexed my student and me for weeks now, I was sitting in front of the computer with this student. We were painstakingly tracing through data generated by some of our code, line by line.
Student: When I ran the program last night, the data was the same up until iteration X. After that, it was different.
Me: OK, let's look at the data from iteration X-1.
[scrolling, scrolling, ... staring at screen]
Me: Well, it looks like the data at iteration X-1 is also different.
Student: Huh.
Me: Let's go look at iteration X-2....Yep, that's different too.
[this goes on for a few more rounds]
Student: Huh. It all looked ok last night. Of course I only looked at some of the output, not all of the output. I never thought of tracing backwards through all of the steps.
***
I wish I could say that this was an isolated incident. But it seems like many of the students that I work with, and most of the students that I teach, have no idea how to approach a problem like this. Debug a program? Try randomly changing lines of code and hope that something works, eventually. Verify that the data a program is generating is correct? Just look at 1 or 2 values; that should suffice.
It is easy for me, at this point in my career, to throw up my hands and say "Kids these days! Don't they teach them the scientific method anymore? Don't they know how to conduct an experiment?" Because I view tasks like this--debugging, data checking--as scientific experiments. You have a problem, you form a hypothesis, and then you construct thorough tests to prove or disprove that hypothesis. Lather, rinse, repeat.
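To make that concrete: here's a rough sketch, in Python with made-up file names, of what "testing the hypothesis" could look like for the scenario above. Instead of spot-checking a value or two, compare last night's run against a reference run at every iteration and report the first place they diverge (assuming each iteration's output is saved as a whitespace-delimited text file).

```python
import numpy as np

def first_divergent_iteration(run_a_files, run_b_files, tol=1e-9):
    """Compare two runs iteration by iteration; return the index of the
    first iteration whose outputs differ, or None if they all agree."""
    for i, (file_a, file_b) in enumerate(zip(run_a_files, run_b_files)):
        a = np.loadtxt(file_a)
        b = np.loadtxt(file_b)
        # The hypothesis "the runs agree through iteration i" fails here.
        if a.shape != b.shape or not np.allclose(a, b, atol=tol):
            return i
    return None

# Hypothetical layout: each run saves iter_000.txt, iter_001.txt, ...
run_a = [f"run_a/iter_{i:03d}.txt" for i in range(100)]
run_b = [f"run_b/iter_{i:03d}.txt" for i in range(100)]
print(first_divergent_iteration(run_a, run_b))
```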
But then I got to thinking: How did I learn to think about CS problems this way? Was it because I took years and years of science classes and logged countless hours in the lab? Well, that probably didn't hurt, but I don't think that's the complete answer. I do think that I learned more from experience: from being burned by problems like this before, sure, but also from watching and learning. I learned by observing what my mentors did, what my advisors did, what the smartest students did.
When I think about it this way, I don't feel so frustrated by scenes like the one I described above. Instead, I think of them as opportunities to model good research behavior for the next generation of researchers, to help my students learn how to do research by showing them what good researchers do to solve problems.
Did my student learn anything from our interchange? I suspect he did. I also suspect that he'll make similar mistakes many, many times in the future. But I'm hoping this will at least cause him to start thinking a little more carefully about how he's approaching research problems in the future.
Teaching by example is hard! :)
7 comments:
Jane: I can relate to your experience with the student. I do a lot of computer-based work, and that includes helping students figure out how to do the analysis that they need to do. I see my students do things that seem so obviously wrong to me, but I'm sure I made the same mistakes as a grad student, too. I thought about your backing through the steps: I've had to walk a student back through several processing steps to find where things went wrong. To me it's the obvious solution, but to them it's a mystery! I agree that it's something only experience can teach you.
I've seen the lack of debugging skills in undergraduate CS majors as well. Part of the problem is that some programmers don't really think about what a particular method is supposed to do. And then they compound a vague understanding by not testing at the method level. They string together half a dozen methods into a program, and only start testing when the program is "complete". This generally means they are dealing with lots of bugs and the prospect of debugging them all at once. We teach students to break a problem into manageable pieces, yet we don't effectively teach them how to code and test these pieces.
I'm beginning to think that we should be teaching test-driven development from the start. At the very least, we should be teaching students how to use automated unit test frameworks.
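For instance, here's a minimal sketch of what that could look like with Python's built-in unittest module (the running_mean function and its tests are purely illustrative): each small piece gets its own tests before it's strung together with the rest of the program.

```python
import unittest

def running_mean(values):
    """Return the running mean after each element of `values`."""
    means, total = [], 0.0
    for i, v in enumerate(values, start=1):
        total += v
        means.append(total / i)
    return means

class RunningMeanTest(unittest.TestCase):
    def test_single_value(self):
        self.assertEqual(running_mean([4.0]), [4.0])

    def test_known_sequence(self):
        self.assertEqual(running_mean([1.0, 3.0, 5.0]), [1.0, 2.0, 3.0])

    def test_empty_input(self):
        self.assertEqual(running_mean([]), [])

if __name__ == "__main__":
    unittest.main()
```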
Having been a student and now working in academia as a programmer, I can identify with the situation.
The problem in school is that there was a lot of hands-on learning, so you tend to learn from each other. That's good on one hand, but you can lose out on basic skills like how to test, depending on who explains things to you. Everyone seems to have a different style of doing things with code :) and if that style didn't mesh with yours, you tended to lose out, i.e., you could not work with some people.
When you are working however, you tend to pick up on how to test from people who have been in the field for years.
If all these issues were discussed in the basic classes, students would start thinking along these lines from the beginning.
I am guilty of what you describe. In college, I took 2 intro programming courses as a physics major. As a graduate student, I was suddenly handed large programs that had to interface with other programs and hardware (all written in different languages, some of which were poorly documented) and the leap from writing short simple programs to what I was given was not one I was able to make on my own. A friend gave me a few tutorials on proper debugging and it made a world of difference.
I didn't understand what tools were available to me for debugging, so I didn't know where to look for them. I also didn't understand that error message x means I should look for y. I think one of my biggest problems was that I assumed the problems were in my code and kept looking for them there, rather than systematically ruling out the possibility that the mountains of other code I was using but hadn't written were causing some of the problems. These lessons in debugging taught me how to debug someone else's code without needing to understand every detail of that code, which would be absurdly time consuming. They taught me how to find which pieces of the unknown code I needed to understand so I could fix the problems. We all need guidance sometimes. I hope that your student will benefit from the example that you set for him.
I think we're part of the problem, as we don't formally teach topics like debugging. We need to teach code reading and debugging as a central part of learning how to program. Some good books on these topics have come out in recent years too, including Code Reading and Why Programs Fail.
Seems like most people are commenting that we need to teach these skills early and often. Yep, I agree! I know that I do this when I teach the intro courses (and sometimes, when appropriate, I bring it up in the upper-level courses too), but I suspect that I'm the exception and not the rule. Even then, though, it's hard to force good habits--I think students have to learn and then *practice* what they learn, and there just doesn't seem to be enough time or emphasis put on that.
This is an interesting take... debugging as "experimentation", especially to me as a "theoretical" physicist. I kinda like looking back at the 2 years I spent debugging a computer program while working on my dissertation and relabeling it "experimentation"!