Dear Steve,

I hope that you are well and enjoying working at NSF. I am writing in the hope that you might provide some suggestions about how I might further pursue NSF funding for a project that I am collaboratively developing. In collaboration with Sally Fincher and four of the Bootstrappers/Scaffolders, I have been spearheading a project that appears to fit well with the CCLI solicitation #NSF 07-543. The project is called "Disciplinary Commons in Computing Education". It is focused on the goals of the CCLI solicitation related to "develop faculty expertise" and "assess learning and evaluate innovations", key components of the "cyclic model" of educational innovation outlined in the program solicitation.

The Disciplinary Commons has three primary components:
1. reflection by individual educators on their own teaching practices, including relationships to student learning and institutional goals (e.g. departmental, university, and disciplinary);
2. participation by computing educators in regional networks of other educators engaged in reflective practices; and
3. reification of these reflections in documents that are individually written, regionally peer-reviewed, and broadly disseminated among the computing education community.

Sally and I both piloted this project in 2005-06, and we have presented descriptive, evaluative, and theoretical results related to it at SIGCSE and ICER, as well as at regional teaching and learning conferences. Sally is running another Disciplinary Commons in England this year. Our model has thus been successfully run; we are seeking funding to expand and adapt the model to new regions and disciplinary areas.

We wrote a CPATH proposal to fund this project last year, ID #0721821. From our reviews, it appears that there was a mismatch between our proposal and the CPATH funding priorities. What are your thoughts? Does this appear to be something consistent with CCLI? Is there another solicitation that is more appropriate?
I also want to mention that I am available at your convenience to talk on the phone if you think that would be a more appropriate medium. I will appreciate any counsel that you might provide about how we can pursue this project more fully.

Sincerely,
Josh Tenenberg

-------------------------------------------------------------------------
Date: Wed, 10 Oct 2007 10:21:36 -0400
From: "Cooper, Stephen C"
To: Josh Tenenberg
Subject: RE: Funding suggestions for project by Bootstrappers

Hi Josh,

I would like to chat with you about your ideas. I think that CCLI might be an appropriate venue. I also think you may wish to consider REESE: http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=13667&org=DRL&sel_org=DRL&from=fund

I normally spend much of the day on Fridays chatting with PIs. Would 3 or 3:30 or 4 PM ET this Friday work for you? (If so, I can be reached at 610-660-3026, as I telework most Fridays.)

My initial (i.e. not especially well thought out) response to your idea is: it seems intriguing, but how would I know if it was working; namely, how would I know that faculty instruction (and thus student learning) was improving as a result of the use of these reflections? In the interim, perhaps you could send me a bit more detail -- namely your objectives, and then the outcomes you anticipate measuring as related to those objectives. I can chat with you more about what I know about REESE, and some suggestions for CCLI, and which might be the better route for you to go.

Regards,
Steve

-------------------------------------------------------------------------
Steve,

In what follows, I define the project objectives, summarize the previously-piloted model on which the project is based and which is used to meet the objectives, describe how the current project extends the pilot, discuss specific outcomes, and then summarize how the objectives will be evaluated. I also spend a bit of time specifically addressing why this project should lead to improvements in teaching (and learning) and how we plan to assess this. I look forward to talking tomorrow.

Josh

The "Disciplinary Commons" has the following objectives:
1.
Building a small set of regional "Disciplinary Commons", each composed of 10 to 15 computing educators within a single geographic region who teach the same course in computing education, such as Data Structures or Human-Computer Interaction;
2. Nurturing leaders among and within those regional communities;
3. Engaging participants in critical reflection on their own and one another's practice; and
4. Sharing the resultant insights and innovations with the broader computing education community and the larger STEM education community.

Each of the regional "Disciplinary Commons" follows the same piloted and evaluated model for achieving these goals. It has three primary components:
1. reflection by individual educators on their own teaching practices, including relationships to student learning and institutional goals (e.g. departmental, university, and disciplinary);
2. participation by computing educators in regional networks of other educators engaged in reflective practices; and
3. reification of these reflections in "course portfolios" -- documents that are individually written, regionally peer-reviewed, and broadly disseminated among the computing education community.

Participation in a regional Commons involves groups of computing educators in an identified geographic region who teach the same course and who meet face-to-face for one-half day each month during an academic year. During monthly meetings, participants collaboratively share, critique, and peer-review one another's portfolios-in-progress. They discuss the critical issues in teaching disciplinary knowledge, the ways in which students encounter and overcome obstacles, and the relationship between the teaching that is enacted and the learning that occurs. Over the course of the year, participants also peer-observe one another in the classroom to support one another's practice-in-action and to develop an understanding of differences in institutional context.
This model was piloted and evaluated by Josh Tenenberg and Sally Fincher in the Pacific Northwest of the US and throughout the UK (respectively) in introductory programming during the 2005-06 academic year. The material developed during the pilot Disciplinary Commons, as well as a number of talks and peer-reviewed papers disseminating this model, can be found at www.disciplinarycommons.org.

What distinguishes the proposed project from the piloted model is that it will:
1. Adapt the Commons model to different regions of the US, to additional disciplinary areas and practices, and to a wider range of institutions. Currently, these include a Data Structures Commons in the southeastern United States, a mid-America Commons focused on Software Engineering, a Rhode Island Commons focused on the teaching of introductory programming that links high schools, community colleges, and universities in the state, and a New England Commons focused on computing within engineering. Depending on the funding received, additional regions and foci will be added.
2. Support a cadre of leaders of each of these regional Disciplinary Commons who have already demonstrated a commitment to computing education. The currently identified leaders (who will be Co-PIs) are Briana Morrison (to lead the Data Structures Commons), Dennis Bouvier (the mid-America Commons), Kathryn Sanders (the Rhode Island Commons), and Robert McCartney (the Engineering Computing Commons). These leaders will be supported through semi-annual meetings led by Josh Tenenberg and Sally Fincher; the sharing of materials already developed for the pilot projects; mutual critiquing of one another's materials for leading each regional Commons; and site visits to one another's regional Commons meetings.
3. Interlink Commons communities through peer review of portfolios across regions and through structured meetings of all regional participants at the annual SIGCSE Symposium.

The specific outcomes related to the objectives include:
1.
An archive of course portfolios produced by participants in the Disciplinary Commons. These will share a similar set of components (as described above), will be peer-reviewed, and will be publicly available via the Internet.
2. A set of ongoing, regional communities of computing educators who can draw upon one another for such things as the sharing and review of course materials, ideas for solving teaching-related problems, and articulation agreements for the smooth transfer of students from one institution to another.
3. Changes to teaching practice by the computing educators who participate, and transfer of practice among these educators, as a result of the meetings, peer review of course portfolios, and peer observations within one another's classrooms.

The ways in which the project goals will be evaluated are as follows:
1. Building the regional "Disciplinary Commons": These will be evaluated via the records and reviews of the ongoing meetings, the material created for and derived from these meetings, and peer site visits by the regional leaders. Evidence of the strength of each regional network will be discerned from ongoing and anticipated joint undertakings among the participants.
2. Nurturing leaders among and within those regional communities: Leadership will be evaluated both formatively and summatively by individual reflection, by peer evaluation, and by the project leaders (Tenenberg and Fincher). The formative evaluation will be particularly important as a basis for determining what additional interventions and materials are needed to ensure the success of the leaders in forming their regional communities. Characteristics of leadership will include recruiting and retaining participants, effective adaptation of the model to the specifics of the regional context, and regional dissemination of the Disciplinary Commons effort and results.
3.
Engaging participants in critical reflection on their own and one another's practice: This centrally involves evaluating what kinds of change to, and transfer of, practice occur among the participants. There are a number of reasons why we anticipate effective changes to practice:
A. There is considerable theory (e.g. "How People Learn"; Shavelson and Townsend) indicating the key role that meta-cognition plays in learning.
B. There is considerable literature on importing scholarly practices to teaching, such as making teaching public, peer-reviewed, and amenable to being built upon. This work on the Scholarship of Teaching and Learning has been spearheaded by the Carnegie Foundation for the Advancement of Teaching. These practices are believed to be crucial for the growth of scientific knowledge, and to lead to improvements when used in teaching.
C. The scholarly and reflective practices will be instantiated concretely both through the critical review that occurs during the meetings and through the construction and review of the course portfolios.
D. Each regional Commons is focused on having each participant draw inferential links between their teaching and the learning that occurs, not only through reflection but through systematic analysis of student work. The "cycle of innovation" that is articulated as a foundation of the CCLI and REESE programs (derived from the 2003 RAND report "Mathematics Proficiency for All Students") takes for granted that teachers will be able to critically evaluate their own teaching so as to determine when new methods are appropriate. Our experience is that such critical evaluation is a skill that needs to be cultivated and cannot be taken for granted; the Disciplinary Commons has been designed to provide the knowledge, practice, and feedback required for this skill to develop.
E.
Our own project evaluation, using survey, observational, reflective, and interview data, indicates that considerable change of practice results from critical self-reflection, peer review, and peer observation.

Specific evaluations that will be undertaken will seek to answer:
i. Do the participants engage in cycles of reflection and change? We will evaluate this using surveys, interviews, and portfolio and course-material review, some of which will be undertaken one to two years after participation.
ii. Do participants engage in practices to make their work public and peer-reviewed? This will be evaluated primarily through follow-up surveys and analyses of published portfolios, papers, and presentations.
iii. Do participants build on the work of others? We will examine "chains of attribution", using emails and targeted interviews to determine the people and places from which particular changes have been borrowed.
iv. Do participants learn to better assess student learning? We will evaluate this through pre- and post-assessment exercises by participants, and through interviews using specific artifacts from each participant's portfolio for stimulated recall.
v. Do the changes that participants make lead to improvements in student learning? We view this as the most challenging aspect of the evaluation. Our plan is to engage the participants themselves in documenting change to practice and its impact on learning. But its evaluation will need to be fit to each regional context, and will be explicitly planned for by each regional leader as part of their activities in implementing their regional Commons during the first year.

4. Sharing the resultant insights and innovations with the broader computing and STEM education communities: This sharing will take two forms. The first will be the portfolios themselves, and the evaluation will concern the extent to which participants complete their portfolios and make their work publicly available.
Data from the pilot Commons indicate that 80-90% of participants do so. The second form will be through publications and presentations disseminated to the research community; these will be evaluated by the quantity and the "quality" of the venues in which they appear. Our record thus far with regard to the pilot project and to our previously funded NSF grants indicates that there will be considerable dissemination of results.

-------------------------------------------------------------------------
Phone meeting with Steve Cooper, October 12, 2007.

REESE as a possibility: it is meant to be research in education in the STEM disciplines. Steve knows less about it; the research component got merged into DRL. However, our proposal will fly in CCLI, but we will need to craft it carefully.

He has a number of comments related to what I had sent him. His overarching comment is: what is specific about this project for Computer Science, as opposed to anything else? He is thinking about what reviewers will be thinking. Why is this CS Ed, rather than Ed with CS?

When we put the proposal together, describe what "reflection" is in detail. This term gets overloaded, and lots of people mean lots of different things by it, so we need to say what we mean by it.

Do we set up all of the Commons around a single course, or around the interests of the people in each regional Commons? There is strength in doing a single course. Be careful about getting too broad. He is thinking of a potential pathway to get more and more money: we did it for a single course already; let us do it for 2 or 3 courses.

He thinks that Robert is going to struggle to build an Engineering Commons outside his school. There are few places with engineering schools, especially in his region. Steve is dubious that this will work.

He thinks that we will get a lot of participation from smaller schools. They are the ones who will do this.
We might get one person from an R1, but the people who will do it are the ones who teach all the time.

A key issue for Phase 2 proposals is that they often don't adequately document results from Phase 1. Phase 2 is more competitive than Phase 1: about 20-25% of Phase 1 proposals get funded, but only 15% of Phase 2 proposals. But he thinks we have a decent idea and should go for Phase 2, if that is where we are headed.

For Phase 2 proposals you almost always need a real external evaluator -- someone who does this kind of evaluation, someone legitimate. It is a showstopper without an external evaluator. Consider pulling in someone from an Ed school who does this all the time, someone who has experience in program assessment. Josh mentioned the CELT (Center for Engineering Learning and Teaching, which Cindy Atman heads) folks at UW Seattle, and Steve said that makes sense if they have someone good. And we need to give our external evaluator enough money -- around 10% of the budget -- so they can actually do the work.

Make sure the proposal has "results of past work". Put in a half page on Bootstrapping, but focus 2-3 pages on the pilot and what we learned. Under "engaging participants in critical reflection", we need to describe how it worked in the pilot! Be fairly light on the mechanisms of the project itself -- we don't need to go on and on. Instead, we need to focus on the results of our pilot and how we know that change occurred.

He recommends getting some support from people who would like to be participants -- a few letters from each leader. Letters from administrators are OK as well, but letters from teachers who would participate are better: from "Joe teacher" at "Podunk U."

On course portfolios: he recommends including one of the portfolios as a "worked example", with a reference to the URL for the whole portfolio. Pick the best one!

Steve will try to choose reviewers who won't complain that "this isn't about how to teach subject X better", but he won't know them all.
He said it is like used-car sales: there should be something for everyone, and no red flags that would lead any of the reviewers to downgrade it.

When we list the specific outcomes related to objectives: the second outcome (the one about regional networks) is weaker than the others. That goes on anyway, at least in theory. Keep it there, but merge it into something else so it isn't highlighted.

Below that, we talk about evaluating leadership. The process of leadership evaluation is VERY interesting in its own right. This could be HUGE. Steve would be intrigued to work with us on that, and NSF would be interested. It might be something that we explore and get money for in a different setting. He recommends that we keep it, while recognizing that it is very hard to evaluate. Funding to develop the leadership piece would probably come out of HRD, if we end up struggling with this and need some separate money.

The "provenance" stuff might be too expensive to get at. Careful on that! It is tough to measure whether people learn to assess better. We may want to talk about possible rubrics we might use to get at this.

He strongly recommends that we have a network of a half dozen people to review the proposal as mean-spiritedly as possible. The more eyes you have, the harder it is to miss something -- little stuff that you can miss. Just get people to look at it.

He thinks that we have a chance to write a good proposal. Overall this is fantastic stuff, and he really encourages us. He is available for questions as we need. He thinks that this proposal has potential for a follow-up Phase 3, and most of the proposals that he reads don't.

As for Bootstrapping and Scaffolding, Sally can be included, but we will need to fund her as an external consultant. It is best to include her bio, though most people will know of her.