Portfolio for TCSS 360 Software Development

Computing and Software Systems
University of Washington, Tacoma
Josh Tenenberg

Course Offering: Fall, 2004

Version 10 August 2005

URL: http://depts.washington.edu/comgrnd/portfolios/joshTenenberg/portfolio360.html


Purpose and Audience

The main purpose of this course portfolio is to disseminate information about this course to its main audience: teaching faculty in my department who will be (or are) teaching this course. There is thus an assumption of a common disciplinary background in Computer Science and Software Engineering, and a shared local context. The Course Objectives; the Design and Enactment sections, which describe the main teaching methods; the Rationale, which describes my underlying reasons for the course design and enactment; and the Student Outcomes are probably the most important parts of the portfolio for this audience.

Another audience consists of faculty in software-related fields at other universities, since such courses are taught at many other institutions. The same parts of the portfolio as just recommended will also be suitable for this audience, with particular attention paid to the Curricular Context section, though this section is undoubtedly not as complete as it would be were this the primary audience. And finally, faculty in other disciplines might wish to examine this portfolio. While the details of the course Design will likely be least interesting for this audience, due to the discipline-specific knowledge that this section relies upon, the discussion in the Rationale section places my work in a larger theoretical context and hence will likely be of most interest to this audience.

A word about hyperlinks in this document is in order. My intent is that the portfolio be a single file: a coherent and linear document that stands on its own. It will therefore help the reader to understand the reasons why I have included hyperlinks within this document, of which there are three kinds. The first and most frequent kind are internal hyperlinks to other sections of this portfolio (such as Design), included strictly as navigational aids. The second kind of hyperlink is to another document related to this course that I have authored (such as the Course Syllabus). It is not intended that the reader follow and read these to understand the balance of the portfolio, since any relevant part of the referenced document will be included here, either verbatim or in abstract. Rather, they are included for the interested reader who wishes to obtain further details on my course and teaching. The set of all such hyperlinks is listed in the Appendices section. And finally, I have included hyperlinks to external documents authored by others (e.g. the ACM 2001 Curriculum Guidelines), for the purposes of follow-up scholarship.

Course Objectives

The primary course objectives are for students to: 1) access the practitioner literature to learn about professional software development practices and to recognize their own connection to the larger community of professional software developers; 2) apply these practices to a term-length software project of approximately twenty collaborating modules (i.e. classes in object-oriented terminology) undertaken in a team; 3) work effectively in a software team by using explicit practices for documenting commitments and ensuring individual accountability; and 4) reflect on their use of these practices, their learning throughout the course, and their teamwork behavior so as to better understand the relationship between their actions and specific outcomes.

This course met for 10 weeks in Fall, 2004, with 14 students attending. The course met for two class sessions weekly, 135 minutes each session.

Course Design

The discussion within this section summarizes information about the content and structure of the course as included in the course syllabus, and provides a more detailed discussion on course design than is included in the syllabus. It also includes discussion on assignments and grading, drawn from the syllabus, the Weekly Report document, and the Project Requirements document, all of which are posted on the Internet for students to read.

Course Topics

The course is structured around the concept of the software lifecycle. As the syllabus indicates: "The advantage of having an understanding of a software product's lifecycle, according to Boehm, is that it addresses two key questions: 1) what does the software developer do next, and 2) when does he/she move to the next phase? Typical lifecycle phases include determining what the software product is supposed to do (called the requirements); determining the different parts of the software, the tasks that each part will carry out, and the interaction between the software parts (called the design); writing the programs that realize the different parts (called the code or implementation); and verifying that the programs carry out the desired behavior (called verification and validation, which includes testing)."

There are a number of different software lifecycle models discussed in the professional literature, including waterfall, evolutionary, and spiral. As Boehm discusses, each of these models helps software practitioners engaged in a design project answer two fundamental questions: What task do I do now? And for how long do I continue this task? Because our degree program is software-centered ("Computing and Software Systems"), our students will be spending much if not most of their professional lives engaged in the different aspects of the software lifecycle. It is thus important that students understand the lifecycle as an enframing conceptualization, as well as learn specific activities that will allow them to carry out the goals associated with the different phases of whichever lifecycle model they are currently using. Thus, students read about, and class discussion centers explicitly on, lifecycle models during most of the first week and a small part of the final week of the course.

The balance of the course is centered on a set of specific practices, enacted within a social setting with other student software developers, derived from the professional practitioner literature associated with different phases of the software lifecycle. The set of specific practices used in the course and their main purposes are as follows. Each is listed in the order in which it occurs in the course, along with a reference to the associated reading that the students are required to do in preparation (see the Annotated Bibliography). Each topic occupies one class session of 135 minutes unless otherwise specified.

  1. Explicit documentation of user requirements for revealing the difficulty and importance of capturing user intentions (Faulk).
  2. Use cases as a means for determining behavioral requirements and as a basis for design and functional testing (Cockburn).
  3. Group practices, including weekly group reports and individual milestone reports, to ensure accountability and commitment.
  4. Paper prototyping for the development and usability testing of user interfaces (Snyder).
  5. CRC cards for developing class designs (Beck and Cunningham).
  6. UML class, sequence, and state diagrams for notating class designs (Fowler).
  7. Design principles and heuristics for evaluating design quality. Two class sessions (Riel).
  8. Design walk-throughs for communicating design ideas and receiving critical feedback (Marciniak).
  9. Design by contract (and the associated use of Java's assert mechanism) for clarifying class responsibilities (Horstmann, Sun).
  10. Pair programming to provide continuous code inspections and to disseminate programming expertise between students (Williams and Kessler).
  11. Uniform coding standards to provide a common basis for code naming, indentation, bracketing, use of whitespace, and documentation (Sun). One-half session.
  12. Automated unit testing for simplifying design, providing test accountability and documentation, and for regression testing (Beck and Gamma). Two class sessions.
  13. Project retrospectives as a structured means for reflecting on group effectiveness (Kerth).

There are also a set of recurring themes that run throughout the different lifecycle phases and parts of the course:

Parsimony
the smallest thing that gets the job done, with the least redundancy and fewest interactions among parts. This is reinforced through the use of heuristic guidelines for design and mechanisms to foster critical discussion and discernment.
Early and frequent feedback
among students, between students and their professor, and between students and the piece of executing software. This is fostered through regular group meetings, through devoting class time to presentation and critique of student work, through pre-milestone meetings with the instructor, and through the use of test automation.
Levels of abstraction
working from abstract to concrete in all phases of the lifecycle as a means for matching effort to resources. Student work embodies this point by including examples of work for each milestone expressed at different levels of abstraction.
Iterative development
as a way to reduce complexity. Working initially at a high level of abstraction provides a mechanism for receiving early feedback, which can provide the basis for revising, expanding, and improving the current milestone, thus establishing a dynamic of critical and iterative development in all phases of work.

Course Work

The coursework assigned to students includes the following:

  1. a pre- and post-course survey, worth 4% of the course grade in total;
  2. a design exercise, worth 2% of the course grade;
  3. weekly summaries of the readings (called "treatments" -- see below), worth 14% of the course grade in total;
  4. weekly group reports and a project retrospective (I drop the lowest grade that the group receives on a weekly report), worth 10% of the course grade;
  5. a group project, worth 70% of the course grade;
  6. individual write-ups of each project milestone, handed in by each student, which feed into the individual multiplier for the project grade, as discussed below.


For each assigned reading, the student is to write a treatment. The treatment has a particular five-sentence "syntactic form"; it is not simply a summary in whatever form the student cares to write it. In these five sentences, the student conveys the reading's main thesis, a less obvious but equally important thesis, the main argument in support of the less obvious thesis, a statement on why the author's method of writing effects his or her purpose, and the main audience for this reading. A well-written treatment thus requires the student both to read with care and to write with precision. Treatments are due at the beginning of the first class of each week for all readings that are due during that week.

Course project

The principles discussed in this course find practical expression in a software project that each student develops as part of a group effort. In groups of 3 or 4, students develop a piece of software of several thousand lines of code over the quarter, comprising approximately 15 - 20 software modules (or classes, in object-oriented parlance). Each group develops a project from the same project description. The software generally involves 3 or 4 actors, each with 2 - 6 goals, a small amount of input, and persistent data. Graphical user interaction is not an essential aspect of these projects. During the Fall 2004 term, students developed software for a fictitious trail maintenance organization similar to the Washington Trails Association (www.wta.org), an organization whose primary purpose is to maintain public trails in Washington state. The purpose of this software is to allow land managers to post their maintenance projects, to allow volunteer workers to sign up for (and cancel) a work party for doing trail maintenance, to schedule work parties, and to manage the data necessary to do the scheduling.

The project has three distinct phases, each lasting approximately three weeks, and each culminating in a set of documents (called the milestones): requirements, design, and code and test. In this regard, the project follows a standard waterfall model of software development.

Requirements
Students prepare a requirements document centered around a set of use cases, a structured notation for capturing the interactions between a user and the system, focused on a single user goal. The requirements document also includes non-functional requirements. The final length is usually 6 - 10 pages of prose text and use cases.
Design
The design document details the set of classes, the responsibilities that each class will carry out, and the collaborations between the different classes to achieve the required functionality. Design documents are between 10 and 15 pages, primarily notated with a UML class diagram that includes all public methods, a UML sequence diagram for each of the two main use cases along with a pseudocode description of the dynamic object collaborations in carrying out the main success scenario, and a state diagram that shows the flow of control between the different screens of the user interface.
Code and Test
The final hand-in includes functional source code, code for automated unit tests, input test files, javadoc files for each class, a user manual, and a file describing the full test plan. There are typically 18 - 25 classes comprising between 2000 and 4000 source lines of code, about 2/3 of which is functional code and 1/3 of which is test code.
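The automated unit tests follow the JUnit style (Beck and Gamma). As a minimal sketch, a test for a hypothetical WorkParty class from the trail-maintenance project might look like the following; it is written with a tiny stand-in assertion method rather than the actual JUnit API, so that the example is self-contained:

```java
// Minimal sketch of an automated unit test. The WorkParty class is a
// hypothetical illustration, not a student's actual design; in the
// course, students write such tests against the JUnit framework.
public class WorkPartyTest {

    // Hypothetical class under test: a work party with a fixed
    // number of volunteer slots.
    static class WorkParty {
        private final int capacity;
        private int enrolled = 0;

        WorkParty(int capacity) { this.capacity = capacity; }

        boolean signUp() {
            if (enrolled >= capacity) return false; // party is full
            enrolled++;
            return true;
        }

        int openSlots() { return capacity - enrolled; }
    }

    public static void main(String[] args) {
        WorkParty party = new WorkParty(2);

        // Normal case: two volunteers can sign up.
        assertTrue(party.signUp());
        assertTrue(party.signUp());

        // Boundary case: a third sign-up is refused.
        assertTrue(!party.signUp());
        assertTrue(party.openSlots() == 0);

        System.out.println("All tests passed");
    }

    // Tiny stand-in for JUnit's assertion methods.
    static void assertTrue(boolean condition) {
        if (!condition) throw new AssertionError("test failed");
    }
}
```

Each test exercises one behavior with a normal case and a boundary case; JUnit adds test discovery, per-test reporting, and a richer family of assertion methods that this sketch only gestures at.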

One week prior to each milestone hand-in, each group meets with me privately for 30 - 60 minutes. A subset of the milestone documents is presented by the group. At this meeting, we discuss progress to date, I review key milestone documents and provide feedback, and I answer any remaining questions that students might have about the milestone or the course.


Much of the student's work (and hence grade) is done within a group context. I randomly assign students to groups of size 3 or 4. Although I have tried different kinds of group assignments (students choose, balancing gender, balancing native/non-native speakers, balancing by learning styles), I find random assignments most defensible. One of the lessons that I want students to learn is that successful groupwork is not so much a matter of chemistry as a matter of creating the conditions whereby individuals will mutually commit to carrying out work, will mutually monitor one another's compliance, and will support one another throughout the course of the entire term.

I use a number of mechanisms to monitor progress and increase individual commitment and accountability. These include:

Individual commitments
In the syllabus, I include a set of commitments that I expect of each student with respect to their groupmates. This is neither a contract nor negotiated; I state unequivocally, and stress during the first class session, that these are minimum requirements that I expect from each student as a condition of being in the course. These commitments include such things as meeting at least weekly at a regular time with one's groupmates, undertaking a fair share of the work, and posting meeting minutes within 24 hours of taking them.
Weekly in-class group mini-meetings
Ten minutes are devoted to a short in-class group meeting each week to provide a brief opportunity for students in a group to "touch base" with one another.
Weekly group reports
Each group is required to schedule at least one 1-1/2 hour face-to-face meeting each week of the term; most groups schedule additional ad-hoc meetings as needed, especially during the week before each milestone is due. Weekly group reports are handed in each week that summarize the group meeting(s), and that also summarize for each group member the set of work commitments that each person has undertaken and completed for the upcoming and previous week. I read these carefully, and meet with groups during their in-class mini meetings if I see any troubling signs (e.g. inequity or signs of failure to achieve the next milestone).
Individual reports
Each milestone is handed in as a group, and a grade is assigned to the entire milestone. One class session after each milestone, each student is required to provide a write-up related to the milestone. This write-up includes a self- and mutual evaluation of each person's contribution to the group, discussion of the group process, discussion of specific technical aspects of the project, and a reflection on lessons learned. I tell students that the information they provide is confidential, but that I might report aggregated comments anonymously in discussion with one of their groupmates.
Individual multipliers
Although each milestone is given a group grade, each student's grade on the milestone is this group grade multiplied by that student's individual multiplier. As I state in the project requirements that I hand out to students: "Each individual will receive a participation score -- the "individual multiplier" -- between 0 and 2. The grade you receive on a milestone will be the group grade times your individual multiplier number. The default multiplier value is 1. In some cases I will award a multiplier of less than 1. In rare cases, I will award a multiplier of greater than 1. You will not do better on this multiplier by being more individualistic, but rather, through having a documented history of cooperative group efforts (encouraging groupmates, helping others, resolving conflicts, putting in extra time) that indicate that you have undertaken and fulfilled your group commitments and are doing everything you can for others in your group to do likewise." The individual reports are used as a basis for making this determination.

Course Enactment

I use a number of different methods for in-class teaching, including demonstrations, lectures, cooperative groupwork, full-class discussion, and small-group discussion. I will focus on the teaching of the "Class Design" component of the course to highlight the interplay between these different methods. I choose this part of the course because I believe that design is the most important and skill-based aspect of software development, and the part most neglected in other parts of our curriculum.

The main goals of this design unit are for students to adopt specific practices for generating, evaluating, notating, and describing software designs. This unit stretches over three weeks and includes the following sequence of activities. I describe each topic, the goals of each class, and the method of class instruction.

The general approach I use is "generate-and-test", i.e. I provide students with a simple practice for generating alternative designs, followed by a set of heuristic criteria for judging and choosing between the alternatives. Throughout, they are exposed to different ways to communicate designs and challenged to make their design rationales explicit.


Pre-exercise

Prior to the first class in this unit, students spend 1-2 hours performing a design exercise. The exercise involves reading a design brief that describes the functions of a video rental store for which software is to be written, and then answering a series of questions comparing four alternative class designs for the video store that are presented (e.g. "rank the designs in terms of ease of testing"). This exercise was a precursor to one that colleagues and I used in a multi-institutional, multi-national study of student software design (Fincher et al, 2005). A link to this exercise can be found in the Appendices section. This exercise provides a common problem and set of designs that I refer to throughout the entire Design unit. It also orients students to some of the issues associated with class design. Finally, it engages students' critical faculties in a low-risk fashion, since they are neither generating nor critiquing their own or other students' designs.

Design Overview

This is the first class session, just after the students do the pre-exercise. Students first discuss their answers to the design questions for approximately 45" in groups of 3 or 4, and then we have a whole-class discussion for approximately 30". I finish the session with both a summary and a brief lecture on principles that were not uncovered in the discussion.

The intent of the small group discussion is for students to begin to engage in the process of design rationale construction, i.e. developing an understanding of why one design is preferable to another in a particular context. The intent of the whole-class discussion is to identify general characteristics that distinguish software designs, e.g. testability, efficiency, maintainability. During the lecture, I discuss characteristics of design spaces more generally, drawing on the work of Goel and Pirolli (1992). I also discuss that some characteristics of design (e.g. parsimony) tend to be valued across design contexts, while the value of others (e.g. space efficiency) is highly context-dependent. This sensitizes the student to matching the design process to the design context, one of the characteristics of design expertise: "experts do not approach every problem in the same way but rather adapt to the inherent constraints of the task" (Adams et al, 2003).

Class-Responsibility-Collaboration (CRC) cards

This class session begins with my providing a brief (15") review of CRC cards, reinforcing points from their related reading. CRC cards are index cards (or half-sheets of paper) that specify the name of a class, a short bulleted list in English of the main responsibilities it is to carry out, and a list of the other classes with which it collaborates. Although CRC cards can be used as a notation for communicating designs, my main goal in using them is for students to have a public, shared artifact for developing and discussing how to decompose a problem into a set of classes. I ask them to identify the separate nouns and verbs from one of the use cases that they developed in their project group's requirements document, a practice suggested by Wirfs-Brock (1990). The nouns are candidate classes and the verbs are candidate responsibilities. I then ask them to construct a set of CRC cards for the classes, using the candidates that they have identified. This occupies 60" of class time, during which I talk with individual groups, ask them questions about their design, point out difficulties, and compile a set of questions that students are asking that I believe warrant discussion in a whole-class format. I then have one or two groups present and talk through their designs to the entire class, and finish with discussion of the compiled (and any additional) questions that I had gathered. I view this class as one of the key classes of the quarter, since it introduces students to a specific practice, intermediate between their internal thoughts and actual program code, that allows them to externalize and discuss their design ideas.
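To make concrete the step from cards to code that comes later in the project: a CRC card maps almost directly onto a class skeleton, with the card's responsibilities becoming candidate methods and its collaborators becoming the types the class talks to. The classes below are hypothetical illustrations loosely based on the trail-maintenance project domain, not any group's actual design:

```java
// Sketch of how CRC cards might translate into class skeletons.
// All names here (Project, WorkParty) are hypothetical.
import java.util.ArrayList;
import java.util.List;

// CRC card: Project
//   Responsibilities: know its trail; track its scheduled work parties
//   Collaborators: WorkParty
class Project {
    private final String trailName;
    private final List<WorkParty> workParties = new ArrayList<>();

    Project(String trailName) { this.trailName = trailName; }

    // Responsibility: know which trail this maintenance project is for.
    String getTrailName() { return trailName; }

    // Responsibility: track the work parties scheduled for this project.
    void addWorkParty(WorkParty party) { workParties.add(party); }

    List<WorkParty> getWorkParties() { return workParties; }
}

// CRC card: WorkParty
//   Responsibilities: record volunteer sign-ups for one work day
//   Collaborators: Project
class WorkParty {
    private final List<String> volunteers = new ArrayList<>();

    void signUp(String volunteerName) { volunteers.add(volunteerName); }

    int volunteerCount() { return volunteers.size(); }
}
```

The value of the cards is precisely that this translation is deferred: groups can generate and discard candidate decompositions at the card level far more cheaply than at the code level.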

Evaluating designs

Once students have the framework for generating designs that CRC cards permit, they need some basis for choosing among the designs. Although they examined alternative designs during the pre-exercise, I had provided no set of principles with which to arbitrate among them. During this class session, they apply a set of heuristic guidelines -- along with rationales -- for class designs developed by Riel (1996), which is their assigned reading for the class session. Given the open-ended and context-constrained nature of design, these heuristics result in neither a fixed algorithm nor an optimal design; rather, they provide principles that must be traded off and debated in order to determine fitness to function within this particular problem setting.

During this class session, students work for the first 45" in their project groups. I ask them to apply the different principles to the "video store" designs, asking which designs violate which principles. I then have them reevaluate the questions that I originally asked them concerning their preference orderings of the different designs in different design contexts. As before, I move among the groups, answering questions, prompting, and collecting questions that I believe will be fruitful to discuss as an entire class. We then discuss these principles as an entire class, centered around the questions that I collected; I pay particular attention to groups that developed contradictory interpretations so as to highlight these and try to resolve them. During the next 30", I present a lecture, where I review a few of Riel's heuristics, with examples, that have historically been overlooked or violated in previous student projects. These often concern problems with dependencies, and communication between interface objects and data objects. During the final 15 - 30", I introduce the notions of loose coupling and tight cohesion as a set of over-arching concepts that are assumed by and embedded within Riel's heuristics.


Design walk-throughs

The entire class session is devoted to walk-throughs of student designs. Each student comes to class prepared with CRC cards of their group's design. New review groups are formed that include one student from each of the project groups. One after the other, each student presents his or her group's design. Not only is structure presented, but students are urged to step through the computation of one or two key use cases, demonstrating how the different objects collaborate to carry out functionality. After each presentation there is a brief discussion, in which the other group members probe the presenter for design rationale and make suggestions. The next student to present takes notes on the questions and suggestions for the presenter to take back to his or her group. Each presentation takes approximately 25". The class is timed to allow 10" at the end for students to meet with their project groups to discuss the most important points touched upon during the walk-through.

The goal of this class is for students to make explicit their implicit assumptions about their designs, for each person to learn their group's design sufficiently for presentation and defense, to subject a group's design to early critique, and for students to view other designs that carry out the same task. As to the latter, I tell students that the appropriate metaphor for the walk-through session is "idea swap meet" and that the price of admission is their own group's design. I allow students to use ideas that they learn from other groups in their subsequent design revisions.

UML: class and sequence diagrams

This class session is centered on the reading and writing of UML class and sequence diagrams, with approximately half the class devoted to each topic. The main method of instruction is instructor-guided inquiry using small "turn to neighbor" groups of 2 or 3 students. This involves first presenting a class (sequence) diagram, asking about the "meaning" of the different parts (e.g. arrows, boxes, annotations, numbers, dashes), and the overall interpretation of the parts as a whole. I start with one of the diagrams from their reading, and then move to new diagrams. Time permitting, I ask them to generate a simple class (sequence) diagram from the "video store" designs presented during the first class.

My goals are primarily twofold. First, since UML is the predominant notation used among the software engineering community -- both in academia and in industry -- for notating software designs, I believe it is important for students to be able to read and interpret these diagrams when they encounter them in the future. And second, I believe that these notations are helpful for the groups to use as a basis for externalizing their ideas as they generate, discuss, defend, and regenerate their own designs. In this sense, it serves as both a socially shared artifact for communicating design ideas and as cognitive scaffolding for design generation. Although I would like these notations to be correct, in the sense that they use the notation according to their standard semantics within the software development community, I am less concerned that these documents be detailed or that they be maintained throughout the lifetime of the project.

Design by Contract (DBC)

This class session has the deepest conceptual material, and also connects back to students' formal work with discrete mathematics in pre-requisite courses. I therefore primarily lecture for 10-15", then ask students to apply the principles just described to small problems in "turn-to-your-neighbor" groups of 2, finishing with a full-class discussion to validate answers and discuss questions that might have emerged.

Design by Contract (DBC) stems from the work of Hoare (1969), Dijkstra, and others, and applies principles of first-order predicate logic to the specification of each method's obligations. In this course, the contract is enforced both through explicit checking of preconditions, which throw exceptions for violations, and through postcondition checking using assert statements.
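A minimal sketch of these two enforcement mechanisms in Java follows. The Roster class and its method are hypothetical illustrations; the convention shown -- a thrown exception for a violated precondition (the caller's fault), an assert statement for the postcondition (the implementation's fault) -- is the one just described. Note that assert checks run only when the JVM is started with the -ea flag:

```java
// Design-by-Contract sketch in Java. The Roster class is a
// hypothetical illustration, not a student's actual design.
class Roster {
    private int volunteers = 0;

    /**
     * Adds n volunteers to the roster.
     * Precondition:  n > 0
     * Postcondition: the volunteer count increases by exactly n
     */
    void addVolunteers(int n) {
        // Precondition: reject bad input loudly; a violation is the caller's fault.
        if (n <= 0) {
            throw new IllegalArgumentException("n must be positive: " + n);
        }
        int before = volunteers;
        volunteers += n;
        // Postcondition: an internal sanity check; a violation is this
        // class's fault. Checked only when the JVM runs with -ea.
        assert volunteers == before + n : "postcondition violated";
    }

    int count() { return volunteers; }
}
```

The javadoc comment states the contract for human readers; the exception and the assert statement make the same contract mechanically checkable.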

I focus on this topic because: 1) I agree with Meyer that making the contract explicit in this sense is important for improving code quality; 2) this is particularly important in student groups, where students generally "divide and conquer" the coding, i.e. different students work on different code units, and being as clear as possible about each class's responsibilities is an essential part of documentation, design, and coding; 3) the blackbox unit testing that I ask students to write for their code depends upon having a clear idea of each method's specification; and 4) this connects back to, reinforces, and provides an a posteriori rationale for some of the mathematical formalisms that they studied in earlier courses.

Student Outcomes

I report on student outcomes with respect to each of the four course objectives. I focus in particular (but not exclusively) on the parts of the course dealing with design as outlined above. Permission to use course materials for research purposes was obtained from the University of Washington Human Subjects Division, and results are reported for the subset of students giving their consent for the use of their class assignments. All names used below have been changed to protect student anonymity. Some of the grammar in the quotes below has been corrected to conform to the speech patterns of a native speaker, to further protect anonymity. Finally, because of the small number of students in the class during Autumn, 2004, some of the quotes and statistics below are taken from students in substantively similar sections of this course taught in prior terms.

  1. To what extent do students access the practitioner literature to learn about professional software development practices and to recognize their own connection to the larger community of professional software developers?

    Students regularly hand in the treatments (at least 80% hand in treatments each week), and most treatments show evidence that students have done at least a quick reading. And there are frequent references to the readings in their weekly minutes. I have no evidence, however, that students feel much connection to the larger software development community through the literature, or that they feel the practitioner literature is theirs. In none of the individual reports that students have written do they mention such a connection, even weakly. And rarely will students send me articles of interest or links to course-related websites, as they have done in some of my other courses. In retrospect, this outcome seems obvious: I hand-select the papers, neatly packaging them for students at the start of the term. My belief is that because students have not waded through the professional literature themselves to separate wheat from chaff, not even for a small portion of what they read in the course, they do not feel the connection that I had hoped. And they have certainly not adopted the habit of regularly reading the practitioner literature.

  2. To what extent do students apply these practices to a term-length software project of approximately twenty collaborating modules (i.e. classes in object-oriented terminology) undertaken in a team?

    I am always surprised at the amount of energy that students expend in meeting each milestone. Students average about 20 hours per week on this course (including attending class) as reported in their end-of-term anonymous evaluations, though this amount is generally higher just before a milestone, particularly the programming milestone. They discuss their fatigue and hard work in class sessions, they write about it in their individual reports, and they tell "war stories" about it during the end-of-term retrospective.

    I observe students engaging in the practices during the class session, but this is primarily a function of centering the class session itself around practice. Outside of class, students regularly report on their use of most of the practices in their weekly reports, as in the following examples drawn from these reports:

    Then, Phil suggested that we try to write the use cases before we jump into the algorithm that has to be used.

    The first agenda item was to review our respective contributions to the SRS document. We started with the use cases written by Sally.

    We started by reviewing the current design of the class diagram.

    Sam offered to draw the sequence diagram.

    The meeting concluded with a debriefing on the SRS. Overall everyone felt good about the outcome. One thing that we needed to focus on was really nailing down the requirements for different stages of the project. The main reason why we received a 3 and not a 4 was because of a little carelessness.

    ... he also began testing using JUnit. They also worked on the use case for grading a test after a student had taken it.

    In looking at final projects, approximately 25% of students do not write any unit tests. And the majority of students who do write unit tests still do not have a clear idea of how to generate the different test cases.
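    The kind of test-case generation that students struggle with can be sketched with a small example: partitioning inputs into equivalence classes and then probing the boundaries between them, where off-by-one errors hide. The `Grader.letterGrade` method below is hypothetical (not from any student project), and plain assertions stand in for JUnit so that the sketch is self-contained.

```java
// Hypothetical method under test: maps a percentage score to a letter grade.
public class Grader {
    public static char letterGrade(int percent) {
        if (percent < 0 || percent > 100) {
            throw new IllegalArgumentException("percent out of range: " + percent);
        }
        if (percent >= 90) return 'A';
        if (percent >= 80) return 'B';
        if (percent >= 70) return 'C';
        if (percent >= 60) return 'D';
        return 'F';
    }

    // One test per equivalence class, plus the boundaries between classes.
    public static void main(String[] args) {
        // A representative value from each class.
        check(letterGrade(95) == 'A');
        check(letterGrade(85) == 'B');
        check(letterGrade(75) == 'C');
        check(letterGrade(65) == 'D');
        check(letterGrade(30) == 'F');
        // Boundary values, where off-by-one errors hide.
        check(letterGrade(90) == 'A');
        check(letterGrade(89) == 'B');
        check(letterGrade(0) == 'F');
        check(letterGrade(100) == 'A');
        // Invalid inputs form their own equivalence class.
        boolean threw = false;
        try { letterGrade(101); } catch (IllegalArgumentException e) { threw = true; }
        check(threw);
        System.out.println("all tests passed");
    }

    private static void check(boolean ok) {
        if (!ok) throw new AssertionError("test failed");
    }
}
```

    The same partition-then-boundary discipline carries over directly to JUnit test methods; the point is that the cases are derived systematically rather than ad hoc.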

    Final projects also provide evidence that most students do not consistently apply the practice of design by contract, i.e. specifying preconditions and postconditions for each method, as well as class invariants. Nor are students consistently doing precondition checking, either with explicit checks and thrown exceptions or with assert statements.
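    What consistent application of the practice would look like can be sketched as follows. The `BoundedStack` class is hypothetical, but it shows the elements named above: preconditions checked with explicit tests and thrown exceptions, a documented postcondition, and a class invariant checked with an assert statement (which Java evaluates only when assertions are enabled via `java -ea`).

```java
// A minimal design-by-contract sketch for a hypothetical project class.
public class BoundedStack {
    private final int[] items;
    private int size = 0;

    // Class invariant: 0 <= size <= items.length
    public BoundedStack(int capacity) {
        // Precondition: capacity is non-negative.
        if (capacity < 0) {
            throw new IllegalArgumentException("capacity must be non-negative");
        }
        items = new int[capacity];
    }

    // Precondition: the stack is not full.
    // Postcondition: size() increases by one and top() returns item.
    public void push(int item) {
        if (size == items.length) {
            throw new IllegalStateException("push on a full stack");
        }
        items[size++] = item;
        assert invariant();  // checked only when assertions are enabled (-ea)
    }

    // Precondition: the stack is not empty.
    public int top() {
        if (size == 0) {
            throw new IllegalStateException("top on an empty stack");
        }
        return items[size - 1];
    }

    public int size() { return size; }

    private boolean invariant() {
        return size >= 0 && size <= items.length;
    }
}
```

    The explicit if-and-throw checks run unconditionally and guard the class's public contract; the assert documents and spot-checks the internal invariant during development.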

    However, some practices are clearly not highly valued by students, and I do not know the extent to which they will continue with these practices beyond this course. Will they collect requirements before coding? Will they design? Will they unit test? And will they debrief or do project retrospectives? My suspicion is that unless these habits are reinforced, either in their professional work or in subsequent courses, many of them will drop away when students leave the class.

  3. To what extent do students work effectively in a software team by using explicit practices for documenting commitments and ensuring individual accountability?

    Students are conscientious in documenting their groupwork. Although they are sometimes unclear as to the amount of detail to provide in their first weekly report, with instructor feedback they quickly begin to document their work process at a level that provides both public accountability and commitment.

    What is most surprising to me is how little free riding is actually reported, given that I am willing to penalize students for it. When I have used groups in past years without any mechanisms for public accountability, students regularly reported free riding. That I now receive virtually no such reports is strong evidence that it rarely occurs. Mutual and public accountability, which allows students to see that their groupmates are meeting their commitments, helps to ensure cooperation under "quasi-voluntary" conditions, as elaborated below.

    In the individual reports that students hand in after each milestone, students are asked "Suppose you have 100 units of something desirable to distribute across your team in proportion to their overall contribution and effort on this project. How would you distribute it? [The sum over all of the team members will be 100%.]" Examining all students who gave consent in past quarters in this course (individual reports for the Autumn 2004 quarter had not been anonymized at the time of writing this portfolio), less than 17% of the reports ranked other group members as contributing less than their equally weighted share minus three percentage points. That is, less than 22 points for any student in 4-person groups, and less than 30 points for any student in 3-person groups.

    As part of the coursework, students fill out pre-course and post-course questionnaires examining self-attitudes about groupwork and technical skills. The following table summarizes two-tailed, paired-sample t tests on a set of before/after questions. Parentheses indicate wording changes made in going from the pre- to the post-course survey. The number of students is 24, and the number of degrees of freedom (df) for each t test is 23. Each question was rated on a 5-point Likert scale ranging from 1 ("strongly disagree") to 5 ("strongly agree").

    Question                                                     Pre-course  Post-course  t       Sig (2-tailed)
    I like doing course projects in groups.                         3.04        3.42      -1.989       .059
    Effective groupwork is much more a matter of individual
      chemistry than it is a matter of group skills and a
      supportive environment for groupwork.                         3.17        4.00      -2.632       .015
    In my groups from previous (current) terms, I lived up
      to my group commitments.                                      3.17        3.88      -2.6         .016
    In my groups from previous (current) terms, my groupmates
      lived up to their group commitments.                          1.88        3.08       3.709       .001
    My groups from previous (current) terms had one or more
      students who did not do their fair share of the work.         3.08        2.46       2.223       .036
    My groups from previous (current) terms had one or more
      students who dominated the group.                             3.38        3.79      -1.856       .076

    Even with a small sample, a number of changes in student attitude occurred with statistical significance at the 0.05 and 0.1 significance levels. Students reported liking groupwork more at the end of the course than in previous courses, improvements in keeping commitments, both their own and those of others in the group, as compared to previous courses, a reduction in free riders, and an increase in those who dominate.
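    The t values in the table come from the standard paired-sample formula: for each student, take the difference between the two responses, then divide the mean difference by its standard error. A minimal sketch follows, assuming differences are taken pre minus post, which matches the table's sign convention (negative t where the post-course mean is higher); the data in `main` are illustrative, not the actual survey responses.

```java
// Paired-sample t statistic: t = mean(d) / (sd(d) / sqrt(n)),
// where d holds the per-student (pre - post) differences and df = n - 1.
public class PairedT {
    public static double tStatistic(double[] pre, double[] post) {
        int n = pre.length;
        double[] d = new double[n];
        double mean = 0;
        for (int i = 0; i < n; i++) {
            d[i] = pre[i] - post[i];
            mean += d[i];
        }
        mean /= n;
        double ss = 0;
        for (double x : d) ss += (x - mean) * (x - mean);
        double sd = Math.sqrt(ss / (n - 1));   // sample standard deviation
        return mean / (sd / Math.sqrt(n));
    }

    public static void main(String[] args) {
        // Illustrative Likert responses for six students, before and after.
        double[] pre  = {3, 2, 4, 3, 2, 3};
        double[] post = {4, 3, 4, 4, 3, 5};
        System.out.printf("t = %.3f (df = %d)%n",
                tStatistic(pre, post), pre.length - 1);
    }
}
```

    The resulting t, with df = n - 1, is then looked up in a t distribution to obtain the two-tailed significance values shown in the table.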

    Although students consistently demonstrate a high level of commitment and a low level of free riders, they also consistently rate "weekly reports" as some of the least useful documents, and ones that they would be unlikely to adopt in the future. Students either do not perceive these mutual monitoring mechanisms as valuable, or they view them as superfluous given their other monitoring mechanisms.

    One further piece of evidence for the success of the group mechanisms for mutual monitoring and avoidance of free riding is that I have eliminated the "crisis group": students coming to me after several weeks to tell me that their group is in shambles, that all members hate one another, and that they fear they will fail the course. The weekly reports provide me with early feedback, and an opportunity to intervene, before such problems become severe. Also, though I rarely need to use sanctions, I will reduce credit for individual members if there is a consistent and documented pattern of free riding. Having used groups in the past with the occasional group failing spectacularly, I consider the lack of such crises to be quite telling.

    Related to this, I often have students come to me at the start of the term to complain about a student with whom they have been randomly grouped. The most usual complaint is that the other student has free ridden on past group assignments, though often there are problems of "chemistry". I tell the student that it isn't important to be friends with their groupmates, but simply to work effectively together. And if they can learn to work effectively with someone who has been a free rider in the past, then they have really learned something quite useful in other contexts. I tell them that I will hold them and their group members accountable to the same standards, and that they will need to do so as well. I decline their requests to be placed into a different group. I ask them to reserve judgement, and to talk with me later in the term if the problem materializes. I have never had a student return to say that the other student has in fact freeloaded. On two occasions in which students have initially complained about their group assignment, they have written that their project group is the best group with which they have ever worked.

  4. To what extent do students reflect on their use of these practices, their learning throughout the course, and their teamwork behavior so as to better understand the relationship between their actions and specific outcomes?

    I see considerable variation in the extent to which students exhibit metacognition and reflection, and in the extent to which groups regularly and explicitly monitor their progress. Anecdotally, groups that report spending time each meeting monitoring their progress are almost always among the top performers, though there are many top-performing groups that seem never to monitor or discuss their performance.

    All students do some amount of reflection in their weekly reports. What I do not see much evidence of is students making chains of inference between behavior and outcomes.

In summary, students are meeting many, but not all, of the goals that I have for their learning.


Rationale

Underlying my entire course design is a conception of software development as a reflective, iterated, social activity. In this section, I hope to answer the questions "why practice?", "why iterated?", "why social?", and "why reflective?"

Why practice?

Students will learn how to develop quality software not simply by reading, though this is important, but through active engagement within specific contexts. For example, in addition to reading a paper on design principles (e.g. Riel, 1996), the course project requires students to create a software design that embodies the principles studied. When learning about software in particular, Papert (1982) claims that students can learn programming languages as they would their native language: through immersion in rich and complex "microworlds" that both enable and demand exploration and engagement with computational artifacts. I apply this principle during class sessions by having students engage primarily in active exercises on specific practices, with corrective feedback from peers and the instructor. The practice and feedback that they receive during class sessions then serve as a bridge to engaging in these practices outside of class, during their group meetings and, for some practices, individually.

Mary Bowechop Greene, a master Makah basket maker, captures the centrality of practice in the advice she gives to novices:

I would suggest that the new basket makers stay with it until you know what you're doing. It's easy to start and just leave it. But if a person can stay in there long enough to know how to do it, then as you have time; it takes actual doing to perfect, and it takes actual doing to speed up in your work. You can't just sit and know how to do it, you have to actually, actually do it.
The same can be said of software designers.

Why iterated?

Within the domain of engineering, Adams (2002, p1) reports on the relationship between design iteration and design effectiveness in engineering design.

Iteration has been found to constitute effective design practice (Adams, Turns and Atman, in press; Bucciarelli, 1996; Radcliffe and Lee, 1989) and provide mechanisms for supporting design innovation (Dorst and Cross, 2001; Suwa, Gero and Purcell, 2000). For example, Suwa et al (2000) found a significant relationship between invention and unexpected discoveries during design sketching. In a comprehensive empirical study of iteration in engineering student design processes, Adams (2001) found that iteration is a significant component of design activity that occurs frequently throughout the design process; and measures of iterative activity were significant indicators of design success ... and greater engineering experience.

It is tempting to view learning as a purely internal affair, occurring in a context-independent fashion in the mind of the learner and involving only individual cognitive processes. I suggest, however, that learning in general, and design iteration in particular, has as a prerequisite the externalization of design ideas in a form that makes the design amenable to critique and change, as part of a repeated cycle of design. This view is informed by recent cognitive research on the extended mind: a view of the mind that extends beyond the confines of the individual brain to include external artifacts as well. As Lave and Wenger suggest, "learning as internalization is too easily construed as an unproblematic process of absorbing the given, as a matter of transmission and assimilation" (Lave and Wenger, 1991). Andy Clark points out the commonsensical bias we have toward viewing the mind (and cognition) as a purely internal affair: "we are in the grip of a simple prejudice: the prejudice that whatever matters about MY mind must depend solely on what goes on inside my own biological skin-bag, inside the ancient fortress of skin and skull. But this fortress was meant to be breached" (Clark, 2001).

Clark (2001, p.19) describes an empirical study by Van Leeuwen, Vertijnen and Hekkert on the interaction between artist and artifact in the act of creation:

The question the authors pursue is: why the need to sketch? Why not simply imagine the final artwork "in the mind's eye" and then execute it directly on the canvas? The answer they develop, in great detail and using multiple real case-studies, is that human thought is constrained, in mental imagery, in some very specific ways in which it is not constrained during on-line perception. ... The sketch pad is not just a convenience for the artist, not simply a kind of external memory or durable medium for the storage of particular ideas. Instead, the iterated process of externalizing and re-perceiving is integral to the process of artistic cognition itself.

What Clark suggests is that externalization of thought, through various types of representation, gives rise to new perceptual and cognitive operations that allow for reflection, critique, and iteration. That is, the act of bringing thoughts into material form, such as expressing software designs in words and symbols, is not merely a formal exercise, akin to taking mental dictation, but is itself constitutive of and essential to creation. Program designs, when made explicit and encoded externally in sign and symbol, are both the fuel and the output of the design engine. Mind, then, is not simply the sum total of representations and processes within the brain, but includes external representations as well.

For software in particular, externalization of program design and code engages the designer's critical faculties in the process of falsification, of subjecting one's ideas to real-world constraints. Popper claims (1987) that this is a distinctly linguistic phenomenon, and yet it can clearly be extended to virtually all types of external representation with propositional content:

... human descriptive language differs from all animal languages in being also argumentative, and that it is human argumentative language which makes criticism possible, and with it science.

There is a world of difference between holding a belief, or expecting something, and using human language to say so. The difference is that only if spoken out, and thus objectivized, does a belief become criticizable. Before it is formulated in language, I may be one with my belief: the belief is part of my acting, part of my behavior. If formulated, it may be criticized and found to be erroneous; in which case I may be able to discard it.

These ideas find expression within my course in several ways. First, I provide students with both generative processes and notations for expressing ideas about software, as indicated in the unit on design detailed above. Second, I direct students to work with these processes iteratively, to generate initial ideas at abstract levels and then to refine them successively through collaborative critique. Third, though I present notations such as UML, the goal is not that the student develop complete mastery of the details of the notation; hence, I do not require students to maintain all of their development documents throughout the development lifecycle. Rather, I view such externalized designs as scaffolding toward the implementation of the final computer program. Their value lies more in the kinds of perceptual and communicative operations that they enable for creating software than in any intrinsic value that might persist over a software product's lifecycle.

Why social?

Vygotsky provides the beginnings of a theory of social cognition that views cognition as situated activity within a social setting. An important aspect of his theory concerns the zone of proximal development, which he defines as "the distance between the actual developmental level as determined by independent problem solving and the level of potential development as determined through problem solving under adult guidance or in collaboration with more capable peers" (Vygotsky, 1978) (italics in original). Lave and Wenger interpret Vygotsky from a social perspective: "learning as increasing participation in communities of practice concerns the whole person acting in the world. ... Briefly, a theory of social practice emphasizes the relational interdependency of agent and world, activity, meaning, cognition, learning, and knowing" (Lave and Wenger, 1991). This view of knowledge as being socially distributed within specific settings is further developed by Brown et al. (1993), who have designed inner-city K-12 classroom environments to both facilitate and exploit the distribution of expertise that can occur within a dynamic learning setting. They describe how such distributed expertise can lead to powerful forms of reasoning and skillful activity, especially in learning some of the critical reasoning skills necessary for doing science.

Kennedy and Eberhart introduce their text Swarm Intelligence (Morgan Kaufmann, 2001) with the following. "The discussions in this book center around two fundamental assertions and the corollaries that follow from them. ... I. Mind is social. We reject the cognitivistic perspective of mind as an internal, private thing or process and argue instead that both function and phenomenon derive from the interactions of individuals in a social world."

What all of these researchers have in common is a conception of learning not simply as the transfer of propositional knowledge from one person to another, a simple model where learning consists in the passive reception of "content" or messages sent by the teacher. Rather, learning is conceived of as situated activity in a specific social setting. The novice incrementally acquires expertise through active engagement in the social group's activities, starting at the periphery and working toward a fuller and more central engagement.

It is important not to underestimate the crucial importance of the social in the construction of computer programs. For instance, Eric Raymond provides a meditation on the social structures and practices associated with open-source as opposed to commercial software development in his essay "The cathedral and the bazaar", a chapter in his book (1999) of the same title. (For a critical review of this work, though not one that contradicts the social nature of software development, see Bezroukov (1999), "A second look at the cathedral and the bazaar".)

I comment on this more extensively in the paper "On the Meaning of Computer Programs" (2001), where I argue that the very understandings that software developers have of the computer programs with which they interact are crucially dependent upon issues of sociality:

Programs are written so as to be both executable by computers in order to carry out useful work, and to be read by other people who must maintain the programs in order to fix errors and to extend the program's functionality. In order to construct meaning from a program the program reader makes economic choices about her actions where the costs and benefits are influenced not only by cognitive constraints but also by the organizational and social context in which the program-related activities occur. ... Of fundamental importance is the extent to which the reader believes that she shares common knowledge with the program writer, both in programming and application domains. This common knowledge is associated with the different social and language-using groups to which the reader and writer belong. Group-specific language is used to economically index the large quantity of group-specific knowledge that provides the interpretative context for meaning construction. Following cooperative conversational maxims, program writers exploit shared knowledge and language by using the abstraction and naming mechanisms of programming languages to express programs at a variety of different levels. Program readers likewise exploit this shared knowledge and language as well as the cooperative communicative intent of the writer to balance the level of meaning that they construct against the resource constraints under which they operate.

This course embodies my commitment to social learning through my placing considerable grade weight on group-produced artifacts, having engagement in group processes as central goals, and selecting practices -- in requirements gathering, design, and programming -- that are primarily social.

But one cannot simply assume that placing students in groups will ensure that such groups work effectively to produce software. Johnson and Johnson, two of the world's leading experts on the use of group learning, discuss the ways in which groups can fail: students free ride on the efforts of their groupmates, disengage from the group (especially in the presence of free-riding groupmates), and dominate the group inappropriately (Johnson et al., 1991). Johnson and Johnson provide a set of principles for overcoming these difficulties: positive interdependence, i.e. that students are rewarded for coordinated, collective action; sufficient face-to-face time among group members; mechanisms for individual accountability; frequent use of groupwork skills; and group monitoring and adjustment of the group's own behavior based on success toward achieving group goals (Johnson et al., 1991).

The failures that the Johnsons discuss are similar to the kinds of problems that social scientists have described when examining problems of public goods and common resources. The joint effort that students carry out is, in essence, an intellectual commons; rather than being a natural resource, such as a pastureland or watershed, threatened by overexploitation, it is a man-made commons, threatened by lack of effort. Each student has an incentive to free ride on the efforts of the others, but if all students do so then the group effort will die aborning.

In this regard, the intellectual commons that students share bears a striking resemblance to the situation described by Garret Hardin as "The tragedy of the commons" (1968):

Each man is locked into a system that compels him to increase his herd without limit -- in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons.
Solutions to collective action problems in managing commons are suggested by Elinor Ostrom, who examines both successful, collectively managed common resources in different parts of the world over hundreds of years and failures at such management. From this examination, she concludes: "All efforts to organize collective action, whether by an external ruler, an entrepreneur, or a set of principals who wish to gain collective benefits, must address a common set of problems. These have to do with coping with free-riding, solving commitment problems, arranging for the supply of new institutions, and monitoring individual compliance with sets of rules" (1990, p27). The groupwork situation in a course bears similarities to what Ostrom describes as one of quasi-voluntary compliance, borrowing the terminology of Levi (1988). In a quasi-voluntary compliance situation, such as what occurs with taxpayers, there is no direct coercion to make participants comply, but punishments can be high if non-compliers are caught. According to Ostrom, Levi reports that taxpayers will comply if 1) the external "ruler" operates in good faith and provides the goods that taxpayers believe they are paying for, and 2) taxpayers believe that the other taxpayers are also complying. This suggests that in a groupwork situation, the professor has a role in helping to establish fair rules of play, and that students will continue to contribute to the group effort when they see that other students are doing so. I call this an "I will if you will" situation.

Despite good intentions that students should "work in groups because they will be developing software in groups when they leave school", poorly implemented groupwork might simply reinforce ineffective social practices, thereby doing more harm than good. Many of the mechanisms that I describe in the Coursework section are specifically designed to deal with problems of commitment, free riding, monitoring, and the supply of new institutions (i.e. with the development of new rules to meet new conditions and constraints). These include an up-front requirement, stated in the syllabus, that students abide by a specified set of actions necessary for carrying out groupwork; weekly reports (monitoring); task matrices (commitment, monitoring); jointly authored documents (commitment, free riding, monitoring); individual and mutual evaluations (monitoring, free riding); debriefings and retrospectives (rulemaking); and group evaluations coupled with individual "multipliers" (monitoring, free riding).

Why reflective?

As Ann Brown states, "Metacognition refers loosely to one's knowledge and control of own cognitive system" (1987). Explicit practices for thinking about thinking can help students to understand the link between their activities and their learning, moving them from implicit practices that might be ineffective to explicit expert practices. One of the three key findings in the book How People Learn, which summarizes learning theory through the 1990s, states: "A `metacognitive' approach to instruction can help students learn to take control of their own learning by defining learning goals and monitoring their progress in achieving them." One of the main reflective activities in this course is the individual report that each student completes at the end of each project milestone. I ask students to reflect on groupwork (e.g. "What are the things that your group did that were positive and constructive?"), on the learning that they have achieved (e.g. "What was the most important part of this milestone and what did you learn from working on it?"), and on the technical aspects of the project (e.g. "How did you validate that your design meets the specs?").

In addition to broad theoretical results on metacognition, there are established professional practices that incorporate reflective activity within software development, and empirical studies of the reflective practices of professional engineers. The practices of debriefings and project retrospectives are two of these professional practices that I incorporate in this course.

Curricular Context

This course is the only course in our curriculum centered on computer program development as a design activity, rather than as formal, mathematical modeling or as the syntax and semantics of computer programming. In this sense, it concerns itself with what Kees Dorst (2003) calls wicked problems. As Goel and Pirolli (1992) describe, such problems, characteristic of all design spaces, involve a wide distribution of information that is accumulated only through design activity; are subject to considerable constraints from both the natural and social worlds; generally do not have right or wrong answers, instead involving context-sensitive tradeoffs; involve a temporal separation between design specification and design execution, enactment, or embodiment in a physical artifact; are generally large in scale; and consist of a number of interacting component parts. Thus, improving design skills is instrumental for student success in subsequent courses and in the workplace.

I view this course as the most important course that many students take in their major simply because, to some extent, it acknowledges software construction as a design discipline; hence its focus on active engagement in designing via the use of specific practices embedded within a social setting. What comes to the fore is design discussion and debate with fellow practitioners concerning both the problem to be solved and the emerging solution, and making pragmatic tradeoffs under constraints. The course provides students with an overview of the process that will take them from a problem statement to a completed software product, shows them that the professional practitioner literature is their literature, introduces the notion that design is a complex craft that will take many years to master and that requires attention to practice and critical reflection, and provides them with a set of concrete practices that they can immediately employ in teamwork settings to develop software products that are much larger and of much higher quality than any they have ever developed.

This course is a Core course taken by students near the end of their junior or beginning of their senior year. As a Core course, it is required for graduation in the Computing and Software Systems major (essentially a Computer Science degree, despite the ambiguous name). In order to satisfy this course's pre-requisites, which are rigidly enforced, entering students must have had 2 - 4 previous courses in introductory programming, a discrete math class, a data structures class, an algorithm analysis class, a technical writing class, and a class in managing technical teams (though the latter two will likely be dropped starting with the Fall, 2005 term). Students cannot be assumed to have taken any of the upper level elective courses offered in our department (e.g. databases, artificial intelligence, programming languages). This course serves as a prerequisite for many though not all of the upper-level elective courses in the major.

Given the number of pre-requisites, only students majoring in CSS take this course. Many students report that they are rusty with programming going into the course, with some claiming that it has been a year since they last did any programming of substance. Most have had no previous formal education in program requirements gathering, design, and testing, nor have they applied formal specifications to program modules.

This course is a 5-credit course meeting 4.5 hours per week with the instructor in a classroom, with no additional scheduled labs or discussion sections. Students report spending an average of 15 hours/week outside of class on all work for this course. The course is taught in a 10-week quarter, and is taught each quarter during the academic year, sometimes twice per quarter. Class sizes have averaged 15 students per quarter.

In terms of the ACM 2001 Curriculum Guidelines, this course implements the following specific guidelines, all of which are within the Software Engineering topic area.

The course explicitly does not deal with design patterns or larger architectural issues. Nor does it concern itself with hardware, databases, web technology, client-server, enterprise application development, persistence and transaction management, and many other such concepts that real software must take into consideration.

Historical Trajectory

I have taught this course two to three times per year since autumn quarter, 2000. The course has changed during that time, reflecting an evolution in my development as a software designer and teacher of design. Many aspects of the course -- particularly my refining of the group practices -- grew from my responses to situations that I was observing and experiencing. So, although I can provide a philosophical rationale, this was never separate from, but informed by, and informing, the lived situations with individuals and groups that I was encountering as teacher of the course. I briefly describe some of the more important changes, along with their rationale, in approximate chronological order.

Minutes of weekly student meetings
I added weekly reports that document student meetings because I wanted more insight into how the groups were actually functioning. I suspected that some of my groups were doing poorly not for technical reasons but for reasons associated with group processes. Bruegge and Dutoit (2000) provide a fine chapter on meetings (and documentation), which reflected my experience in peer faculty groups, i.e. that they are often more effective with clear documentation to provide public monitoring, memory, and accountability.
Self and mutual evaluations
I adapted the current form from one by Vicki Almstrum at the University of Texas at Austin as a means for students to hold one another mutually accountable.
Random group assignments
I started to assign students to groups only after having several students tell me that choosing their own groups was disastrous. This was because they could not say "no" when their friends asked them to be in the same group but could not hold their friends accountable for fear of damaging the friendship. I assign randomly because I perceive it as fairer than any other method.
Waterfall to Iterative and back
I read Beck's Extreme Programming book (2000) shortly after it came out. After attending OOPSLA in 2002, I decided that rather than having students deliver their project in waterfall-like stages (requirements, design, code/test), I would have them do three increments of functional code delivery, one increment every three weeks. I was disappointed with the results, and student course reviews were lower than any I had received at UWT. The level of stress was very high from the first day of class, and the quality of code and design that students delivered was quite poor and never improved. I spent considerable time accounting for the disappointing results; I did not want to abandon this XP practice prematurely if it offered considerable benefits within the context of my other goals. I believe that the students and I were unhappy with this model for three main reasons. First, it asks students with different skills and strengths, who may not have previously worked together, to deliver code too quickly. They simply never get the chance to "gel" as a group over a lower-risk deliverable such as a requirements document. Second, it asks them to undertake a small version of all phases of the software lifecycle with hardly any explicit instruction in any of these phases. And third, subsequent milestones do not improve, since the press of additional requirements prevents most groups from refactoring their earlier code or incorporating lessons learned from material discussed in class. I thus returned to using the waterfall model to structure project delivery during the following quarter. The quality of all documents improved, including design, code, and tests, and student satisfaction returned to its high pre-XP levels.
Automated tests and JUnit
Although I abandoned the iterative code delivery schedule of the XP practices, I preserved JUnit and continue to use it. My conversion experience, inspired by OOPSLA (where I met developers who told me that they were delivering code on time and under budget, and spending over 50% of their development budget on testing), occurred when I started using JUnit in my own coding. Certainly I liked being able to do regression testing by pressing a key (i.e. the costs of developing a test are amortized over the number of times the test is run throughout the software's lifetime). And I liked having the code itself be the documentation for how I was doing the testing. But most importantly, I liked the effect that automated unit testing was having on my code designs. The more I used JUnit, the more I was designing for transparency of logic, which meant ease of testing.
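The style of testing described above can be sketched in a few lines. The example below is a hypothetical illustration, not course material: a tiny BoundedStack class with JUnit-style test methods, written in plain Java so the sketch runs without the JUnit library itself. Rerunning the whole suite after every change is the keypress regression testing described above.

```java
import java.util.ArrayList;
import java.util.List;

// A tiny class under test (hypothetical example, not from the course).
class BoundedStack {
    private final List<Integer> items = new ArrayList<Integer>();
    private final int capacity;

    BoundedStack(int capacity) { this.capacity = capacity; }

    // Returns false instead of overflowing; easy to test directly.
    boolean push(int x) {
        if (items.size() >= capacity) {
            return false;
        }
        items.add(x);
        return true;
    }

    int pop() { return items.remove(items.size() - 1); }

    boolean isEmpty() { return items.isEmpty(); }
}

public class BoundedStackTest {
    // Stand-in for JUnit's assertTrue.
    static void check(boolean condition, String message) {
        if (!condition) {
            throw new AssertionError(message);
        }
    }

    // Each test method exercises one behavior, in the JUnit style.
    static void testPushThenPop() {
        BoundedStack s = new BoundedStack(2);
        check(s.push(1), "push within capacity should succeed");
        check(s.pop() == 1, "pop should return the pushed value");
        check(s.isEmpty(), "stack should be empty after popping");
    }

    static void testCapacityLimit() {
        BoundedStack s = new BoundedStack(1);
        check(s.push(1), "first push should succeed");
        check(!s.push(2), "push beyond capacity should be refused");
    }

    // Running main reruns the whole suite: regression testing at a keypress.
    public static void main(String[] args) {
        testPushThenPop();
        testCapacityLimit();
        System.out.println("all tests passed");
    }
}
```

Note how the pressure of testability shows up in the design: for push to be testable without inspecting private state, it must report success or failure through its return value, which is precisely the transparency of logic that eases testing.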
Task Matrix
I added the task matrix to the weekly meeting minutes as a simple way to record individual work commitments during the ensuing week and to record who fulfilled work commitments of the previous week. The primary reason is so that students can mutually monitor, and know that I am overseeing their work as well. This is consistent with the role of the "sovereign" in a quasi-voluntary taxation system as defined by Levi (1988). I believe that such mutual monitoring and external oversight ensure continuing effort in the contingent social arrangement associated with "I will if you will" settings.
Equity Estimator
I added this shortly after the Task Matrix, primarily as a way to raise the issue of equity -- what it means, how to achieve it -- on a weekly basis within the groups. Although there have been times when I have used the estimator to discuss unusually unbalanced workloads, students generally rank workloads as equitable even when, later, in their individual milestone reports, they report a far from equal distribution.
Individual Multiplier and Individual Reports
Assigning individual grades based only on group scores provides no mechanism either for rewarding the heroic effort or punishing the free rider. Paradoxically, having a mechanism for sanction obviates the need to use it, except in rare circumstances. Part of the reason for this is that I require a consistent audit trail (in documents handed in as part of the milestone, in individual reports of group members, in weekly reports) in order for this multiplier to be less than one. The other reason is that, by all evidence that I have, most students make a good faith effort to carry a fair share of the load. Perhaps, as Ostrom (1990) hints, fairness is the basis for sustained collective action. Students have been remarkably generous toward their groupmates who fail to perform at a high level as long as the under-performing groupmate has made a sincere effort to contribute equally.
Treatments
I began to assign a considerable number of writing assignments when I began to teach the Computer Ethics course. In preparing to teach that course, I came across the webpage of my colleague Michael Kucher at UWT for his course "Technology and the Modern World". In addition to finding a number of useful readings (a few of which I borrowed for my course), I stumbled onto his use of the "Treatment" (or "Precis"), adapted from our mutual colleague John Peterson, now at UC Berkeley. In addition to using Treatments in the Computer Ethics course, I started to use them in this course as well. I was thunderstruck; the level of discussion in class dramatically improved. I stopped "teaching" the readings, and instead focused on student interpretations, confusions, and reactions, making sure as well to correct their misconceptions and to engage them directly in the described practice. I also found the Treatment to provide a clear basis for grading, since each sentence has a particular function that is teachable and verifiable.
Design walk-throughs
I added design walk-throughs early in my offerings of this course. One student from each group would present the design to the entire class, followed by a whole-class critique. This was problematic for two reasons. First, most of the class is inactive during the individual presentations, and second, the public questions and critiques were uneven in the quality and value of their feedback. I changed the design walk-through class as follows. Adapting the ideas of Jigsaw and Reciprocal Teaching (from Ann Brown, discussed in Bransford et al. (2000)), I took one person from each project group and placed them in a Review group. In sequence, each person presents their project group's design to the people in their review group, and one of the listeners takes notes to record comments, critiques, and suggestions. This requires everyone to come prepared to present their group's design, yet lowers the level of formality so as to elicit a more engaged discussion of each design.
Meetings with Instructors
During the first several offerings of this course, it was not uncommon that a student group would hand in a milestone in which the students had a completely different interpretation of my expectations for their work, in terms of quality, content, and form. In response, I required each group to meet individually with me at least one week before each milestone to demonstrate and discuss a subset of their milestone documents. Not only does this provide me an opportunity to give corrective and encouraging feedback, it allows me to answer any questions, allay fears, and give advice. This has the effect of reassuring both the students and me that they are on track to successfully complete each milestone. And it has completely eliminated milestone hand-ins from groups that are "off in left field".


As with software designs, course designs are not developed from whole cloth. I am grateful to several people for their contributions to this course and this course portfolio.

When faced with teaching this course for the first time, in an area quite different from any I had studied as a student, and with a literature that I was largely unfamiliar with, I did what most 21st century course designers will do: I searched the Internet. I was fortunate to stumble on the materials developed by Vicki Almstrum, from the University of Texas at Austin. Though my course has diverged considerably, I am appreciative of her general conception of this field, on which my first course design was largely based. I continue to use adaptations of her self and mutual evaluations.

I owe thanks to Sally Fincher (University of Kent at Canterbury) for introducing me to debriefings (which I teach to my students); to David Socha (UrbanSim) for introducing me to Norm Kerth's retrospectives and for the many discussions that we have had about all things design; to Esther Derby, private consultant and retrospective facilitator, for helping me with the mechanics of running a retrospective; to Adam Barker, former student and now interface developer extraordinaire at Google, for re-introducing me to paper prototyping and demonstrating why it is important; to Steve Hanks, for the discussions that we had about this course in summer, 2003 and for the work that we did in characterising what makes a good term-length group project; to Tammy VanDeGrift (then at UW Seattle, and now at the University of Portland), from whom I borrowed the individual reports at the end of each milestone, an adaptation of the lab reports she required of each person in a pair programming team in introductory programming courses; to David Notkin (UW Seattle), whose course syllabus and lecture notes for a similar course that he teaches are illuminating; and to Michael Kucher (UWT) and John Peterson (UC Berkeley) for their use of treatments.

I am also grateful to several people who have helped me in developing this course portfolio, and in understanding course portfolios more generally. First and foremost to Sally Fincher, who introduced me to the idea of course portfolios, and suggested that I start a project involving the concurrent development of course portfolios among community college and university instructors of Computer Science in the Puget Sound region (see the Disciplinary Commons webpage). I am grateful as well for many conversations with Jennifer Robinson, Director of Instructional Consulting at Indiana University, Bloomington, who knows as much about course portfolios as anyone on the planet. Thanks as well to Qi Wang (Tacoma Community College), who developed her own portfolio in Winter, 2005 in parallel with my efforts, and who critiqued parts of the work here. Qi's sincere effort in reflecting on her own teaching and documenting it was an inspiration, and without the comradeship of a partner on this enterprise it is doubtful that I would have completed this portfolio.

Finally, I am grateful to the contributors to the public portions of the portfolio repositories at University of Nebraska, Lincoln (part of the Peer Review of Teaching Project) and at Indiana University, Bloomington. Having examples of interesting portfolios served as an important stimulus and foil in the construction of this portfolio.


For those papers that are assigned to students, a brief annotation is provided in the document annotatedBib.html summarizing the main theme of the reading and my rationale for its use to suit course objectives.

  1. Adams, R., Turns, J., and Atman, C., "Educating effective engineering designers: the role of reflective practice." Design Studies 24, 2003. 275-294.
  2. Adams, R., Turns, J., and Atman, C., "What Could Design Learning Look Like," Expertise in Design: Design Thinking Research Symposium 6, Sydney, Australia, 2003.
  3. Adams, Robin. "Understanding Design Iteration: Representations from an Empirical Study." Proceedings, International Conference of the Design Research Society. 2002.
  4. Beck, Kent. "A quick overview" and "How could this work." Extreme Programming Explained. Addison-Wesley, 2000. 53-70.
  5. Beck, Kent and Ward Cunningham. "A Laboratory For Teaching Object-Oriented Thinking." OOPSLA'89 Conference Proceedings. New Orleans, Louisiana, USA, October 1-6, 1989. Also appeared in SIGPLAN Notices, 24:10, October 1989.
  6. Beck, Kent and Erich Gamma. "Test Infected: Programmers Love Writing Tests". http://junit.sourceforge.net/doc/testinfected/testing.htm, accessed 4 August, 2005.
  7. Bezroukov, Nikolai. "A Second Look at the Cathedral and the Bazaar". First Monday 4(12), December 6th 1999. Available at http://www.firstmonday.org/issues/issue4_12/, accessed 10 March 2005.
  8. Boehm, Barry. "A Spiral Model of Software Development." Software Engineering. Eds. M. Dorfman and B. Boehm. The Institute of Electrical and Electronics Engineers, 1997. 415 - 426.
  9. Bransford, J., Brown, A., and Cocking, R. (eds.), How people learn. National Academy Press, 2000.
  10. Brooks, Frederick P. Jr. "No Silver Bullet: Essence and Accidents of Software Engineering." Software Engineering. Eds. M. Dorfman and B. Boehm. The Institute of Electrical and Electronics Engineers, 1997. 13-22.
  11. Brown, A. "Design Experiments: Theoretical and Methodological Challenges in Creating Complex Interactions in Classroom Settings," The Journal of the Learning Sciences, 2(2), 1992. 141-179.
  12. Brown, A., Ash, D., Rutherford, M., Nakagawa, K., Gordon, A., and Campione, J. "Distributed Expertise in the Classroom." Distributed Cognitions. Ed. Salomon, Gavriel. Cambridge University Press, 1993. 188-228.
  13. Bruegge, B. & Dutoit, A. "Testing." Object-Oriented Software Engineering: Conquering Complex and Changing Systems. Prentice Hall, 2000. 326-369.
  14. Cerbin, William. "The Course Portfolio". APS Observer, 14:4, 2001.
  15. Clark, Andy. "Natural Born Cyborgs." In Cognitive Technology: Instruments of Mind. The Fourth International Conference. Beynon, M., Nehaniv, C., and Dautenhahn, K. (eds.). Springer, 2001.
  16. Cockburn, Alistair. "Introduction to Writing Effective Use Cases" and "The Use Case as a Contract for Behavior." Writing Effective Use Cases. Addison Wesley, 2001. 1-19 and 23-33.
  17. Dorst, Kees. Understanding Design. BIS Publishers, 2003.
  18. Faulk, Stuart. "Software Requirements: A Tutorial." Software Engineering. Eds. M. Dorfman and B. Boehm. The Institute of Electrical and Electronics Engineers, 1997.
  19. Fincher, S., Petre, M., Tenenberg, J. et al. "A multi-national, multi-institutional study of student-generated software designs." 4th Annual Finnish/Baltic Sea Conference on Computer Science Education, Koli Calling, Joensuu, Finland, 2004.
  20. Fowler, Martin. "Introduction", "Class Diagrams: The Essentials", and "Sequence Diagrams". UML Distilled, 3rd Edition. Addison Wesley, 2004. 1-17, 35-52, and 53-63.
  21. Goel, Vinod and Pirolli, Peter. "The Structure of Design Problem Spaces." Cognitive Science, 16, 1992. 395-492.
  22. Greene, Mary Bowechop, interviewed by Theresa Parker on May 13, 1999. From display at the Makah Museum and Cultural Center. Viewed June 28, 2003.
  23. Hardin, Garrett. "The Tragedy of the Commons." Science, 162 (13 December 1968): 1243-1248.
  24. Hoare, C.A.R. "An axiomatic basis for computer programming." Communications of the Association for Computing Machinery, 12:10, 1969. 576-580.
  25. Horstmann, Cay. "Guidelines for Class Design." Object-Oriented Design & Patterns. John Wiley & Sons, Inc., 2004. 122-133.
  26. Hutchings, Patricia (ed.). The Course Portfolio: How Faculty Can Examine Their Teaching to Advance Practice and Improve Student Learning. American Association for Higher Education, 1999.
  27. Johnson, D., Johnson, R., and Smith, K. Cooperative Learning: Increasing College Faculty Instructional Productivity. Washington, D.C.: School of Education and Human Development, The George Washington University. ASHE-ERIC Higher Education Report, 1991, no. 4.
  28. Kennedy, J. and Eberhart, R. Swarm Intelligence. Morgan Kaufmann, 2001.
  29. Kerth, Norman. Project Retrospectives: A Handbook for Team Reviews. Dorset House, 2001.
  30. Lave, J. and Wenger, E. Situated Learning: Legitimate Peripheral Participation. Cambridge University Press, 1991.
  31. Levi, Margaret. Of Rule and Revenue. University of California Press, 1988.
  32. Marciniak, John. "Reviews and Audits." Software Engineering. Eds. M. Dorfman and B. Boehm. The Institute of Electrical and Electronics Engineers, 1997.
  33. Meyer, Bertrand. Object-Oriented Software Construction, 2nd Edition. Prentice Hall, 1997.
  34. Ostrom, Elinor. Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge University Press, 1990.
  35. Papert, Seymour. Mindstorms: Children, Computers, and Powerful Ideas. Harper Collins, 1982.
  36. Popper, Karl. "Campbell on the Evolutionary Theory of Knowledge." In Evolutionary Epistemology, Rationality, and the Sociology of Knowledge. Radnitzky, G. and Bartley III, W. (eds.). Open Court, 1987.
  37. Raymond, Eric. The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary. O'Reilly & Associates, Inc., 1999.
  38. Riel, Arthur. "Classes and Objects: The Building Blocks of the Object-Oriented Paradigm" and "Topologies of Action-Oriented Versus Object-Oriented Applications." Object-Oriented Design Heuristics. Addison Wesley, 1996. 11-27 and 29-51.
  39. Shulman, Lee. "Course Anatomy: The Dissection and Analysis of Knowledge Through Teaching." In Hutchings, Patricia (ed.), The Course Portfolio: How Faculty Can Examine Their Teaching to Advance Practice and Improve Student Learning. American Association for Higher Education, 1999. 5-12.
  40. Snyder, Carolyn. "Paper Prototyping: Case Studies" and "Making a Paper Prototype." Paper Prototyping. Morgan Kaufman, 2003. 25-47 and 69-95.
  41. Sun Microsystems Inc. "Code Conventions for the Java Programming Language." (1999) Accessed 1 November 2004. http://java.sun.com/docs/codeconv/.
  42. Tenenberg, J. "On the meaning of computer programs." In Cognitive Technology: Instruments of Mind. The Fourth International Conference. Beynon, M., Nehaniv, C., and Dautenhahn, K. (eds.). Springer, 2001. 165-174.
  43. Vygotsky, Lev. Mind in Society: The Development of Higher Psychological Processes. Harvard University Press, 1978.
  44. Williams, L. and Kessler, R. "All I Really Need to Know about Pair Programming I Learned in Kindergarten." Communications of the ACM, 43:5, 2000. 108-114.
  45. Wirfs-Brock, R., Wilkerson, B., and Wiener, L. Designing Object-Oriented Software. Prentice Hall, 1990.