Review of “A Field Manual For a Whole New Education” by Goldberg and Somerville

In “A Field Manual For a Whole New Education” (FM), Goldberg and Somerville follow up on “A Whole New Engineer” (WNE, see my reflection) to present a more concrete plan for effecting institutional change. They propose the creation of an “innovation incubator” within one’s institution and a structured series of sprints to reflect on values, envision possibilities, design a restructured education, and plan implementation.

I will focus on the heart of the new content, which is the change effectuation process. FM improves upon a weak spot in “A Whole New Engineer” by explicitly making the institutional change process open-ended and by focusing more on co-design instead of simply on how to build support for innovation. The most compelling insight of the book is the “ego distraction” of faculty on the institutional change team, achieved by beginning with the desired values and affect of the educational setting. In Goldberg’s telling, professors are most attached to curricular content, not to the manner of delivery and the structure around it. By letting professors begin with the intended outcomes of student learning, the proposed ‘rebooting’ sprint structure avoids or minimizes conflict over curricular space and recasts the focus on the joy of learning and creating.

The proposed sprint structure for educational change generally follows best practices for collaborative design: starting with values, user personas, and intended outcomes, proceeding to ideation, compressing to a cohesive solution, and then forming a plan for implementation. However, one aspect of the proposal appears to me unjustified. Within the final sprint, intended to ‘negotiate change’ and turn it into a concrete plan, the authors suggest reducing the size of the design team to a “negotiation team with two kinds of members, those who are concerned primarily with the risks of changing and those concerned with the risks of not changing” (178) so that the concerns of each are acknowledged and addressed. From my outside position, not having worked on this specific kind of institutional change team, I am skeptical of what appears to be an explicit factionalization approach. There are many kinds of possible change in curriculum content, structure, manner of delivery, institutional values, the business model of education, or the pattern of student engagement. Each team member may hold different positions for or against the current state, or among the variety of new proposals. Even if the members negotiate “from interest” (creating mutual value) rather than “from position” (zero-sum negotiation), the framing is still adversarial rather than strictly collaborative. Creating a ‘negotiation subteam’ with explicit pro- and anti-change roles seems unnecessary, because the process would then suffer from the concomitant adversarial role-playing. Instead, all team members should evaluate proposals against the design goals, which are the student learning outcomes. Compare this to “A Generative-Evaluative Design Meeting Style”, which has no adversarial framing. In terms of space devoted, this was a minor element of the proposal, but it stood out as contrary to the spirit of collaborative design.

The other half of the change process is the creation of an innovation incubator, a “respectful structured space for innovation”. The incubator encompasses the community interested in educational change, the physical space where they meet to discuss ideas, the model classes and pilot programs, and their shared conceptual vocabulary. As a long-running entity, the incubator balances the fixed-length sprint structure and serves as a nexus for continuous improvement. The authors present the trade-offs of establishing this incubator before versus alongside the sprint process: a longer preparatory period helps to grow deeper roots of cultural change, but the structured sprints channel an institution’s energy for innovation into concrete action. The description of this incubator was well-founded.

Having covered the major new content about change effectuation from Chapters 6-7, I will briefly summarize the remainder: Chapters 1-2 were mostly repeated content from WNE, with stronger emphasis on the need for “unleashing” student autonomy and motivation and sparking joy. Chapters 3-4 described a few valuable perspective or mindset shifts for educational change management, such as: seeking nuanced and balanced understanding of competing goals to avoid over-correction (“co-contraries”), adopting Dweck’s growth mindset, choosing incremental experiments to learn about new possibilities (“little bets”), and focusing on human advantages over machines such as emotion, intentionality, and comprehensive instead of narrow understanding. Chapter 5 covered the importance of attentively listening for the sake of understanding the other instead of listening merely in preparation to say one’s piece.

What was missing? The “Field Manual” would have benefited from many more field reports, particularly about attempts at educational innovation across a wider variety of institutions, and from a less cursory introduction of the concepts that were described in depth in WNE. The authors briefly touched on alternative cost structures for higher education, such as work-study, but did not introduce detailed proposals. Integrating joy, affect, and student “unleashing” into higher education is an admirable goal, but without progress on reducing costs, academia risks becoming more of a hindrance than a facilitator to learning, particularly in the rapidly evolving world of software.

Overall, FM is worth reading for Chapters 6 and 7 on change effectuation, and the rest is skimmable for someone who has already read WNE. Together, WNE and FM present a vision and an implementation plan for a new kind of human-centered education, but do not adequately grapple with the challenges of cost or speed of change required to stay technically relevant.

The Flow of Cognitive Apprenticeship and New Tech Propagation

How can a teacher not only impart knowledge to students but also cultivate the mindset and skills necessary to learn independently? How can an engineer teach peers to integrate an immature technology into new applications while the best practices for its usage are being developed through their very work? The first question is central to pedagogy dating back at least to Plato’s Theaetetus and Phaedrus. The second is the analogue of the first in the engineering workplace, but it is even harder because the shape of the technical problem is still being discovered and there is no set curriculum.

I was inspired by Mel Chua’s 7 Technique Cognitive Apprenticeship Theory to reflect on my past experience as a tutor and on how similar technical communication flows have evolved in my current engineering practice. To adapt what Chua wrote, it is called “cognitive” apprenticeship because the teacher does more than teach the content: the teacher makes visible the metacognitive skills required to grow as a learner. This is central in the academic setting and made concrete by learning about a specific subject, unlike in a vocational apprenticeship, where learning the craft is the primary goal and becoming a better learner is a side effect.

In new tech propagation, the tangible goal is to show how a novel technology can be used to build products, while the meta-goals include: communicating about and managing uncertainty, formalization of technique, connecting new work to existing bodies of knowledge and drawing from them where possible, and developing a “community of practice”. Meeting these meta-goals might require changing training, test, release, and planning processes; how risk is assessed and communicated; or how user studies are conducted. Just as metacognition requires reflection on one’s learning processes in order to change oneself, the summary meta-goal of new tech propagation is to understand how the organization must change to use the new technology effectively.

We’ll start with the flow of communication in Cognitive Apprenticeship (CA), and then show its parallels in New Tech Propagation (NTP).

Cognitive Apprenticeship

(Mel Chua’s original document, adapted above.)

Most tutoring interactions start with students asking for help on something fairly specific, which directs attention to one part of the problem space: bounding. Then I would usually ask what they know so far. This establishes the baseline of knowledge before the interaction, against which we can compare the new knowledge state at the end. It also functions as a reflection on what they have done already. Finally, their articulation of their knowledge, or of their process of working through a problem, removes the ‘expert blind spot’: the instructor’s underestimation of the difficulty of a problem for a new learner.

I wanted to stay out of the direct didactic space for as long as possible, so I would then attempt to guide with questions. I tried to highlight the gaps in their understanding by asking them to consider missed cases, recall relevant principles, or push to where a governing model breaks down. This is scaffolding. Alternation of articulation and scaffolding is the ideal, because it models the process of students asking questions about their own understanding to resolve issues independently. If there is just one very specific issue with the student’s understanding, or if I cannot come up with guiding questions, I might go from articulation to coaching without scaffolding.

If guiding with questions does not get to the answer, I would then switch to guiding with direct suggestions in coaching. Here students work through problems with instructor direction. Failing this, I would demonstrate the process directly with worked examples and explain as I go: the modeling step. From modeling, I could then either ask the student to explain their own process as they work through a different problem (articulating) or to go back to the problem set or reference material (scaffolding) based on lingering issues of understanding.

The narration process was the one technique that I had difficulty integrating into my usual interactions. It is a different means of guiding with questions. Students are rarely familiar with attempting to explain as someone else works through a problem, and I’ve found it is often a source of frustration because there isn’t a clear direction to the interaction. Most often it leads to coaching, the usual step after scaffolding. In the past, I hypothesized that the reflection necessary to translate between the expert’s practice and their own is best done independently later, and would take too much time for narration to work within a tutoring interaction.

Finally, the student reflects by comparing the state of their content and process knowledge before and after their interaction. I had shown how to ask questions and work through problems, and they might implicitly compare this with what they had done. I might also explicitly ask for “lessons learned” to encourage this reflection.

New Tech Propagation

Presenting a novel technology to others with the aim of integrating it into new applications requires many similar interaction patterns. However, unlike CA in an academic setting, NTP builds upon a network of communication with a presenter-peer relationship in place of teacher-student. Instead of a single tutoring session, these interactions unfold over weeks or months of attempted tech integration.

The core interaction is the presenter’s demonstration of the new technology and conversation with peers about its capabilities, which includes the “Modeling” and “Bounding” modes above. The most natural progression is to have peers attempt to use it in direct practice, “Application”, which takes the place of “Articulation”. Hints are provided with Scaffolding, which then leads to Coaching – attempted usage with direct supervision – which may return to Demo + Conversation. These interactions are all fairly similar to CA.

Narration takes on a new meaning because the peer may have knowledge that the presenter does not. The narrating peer attempts to describe his understanding of what the presenter is doing, which may include issues that the presenter does not see, connections to other fields, or past attempts to solve the same problem.

For example, suppose a computer vision team were evaluating “event cameras” for “camera traps” to capture images of Antarctic wildlife. Here “Alice” is the presenter and “Bob” is the reviewer who begins by narrating the proposal.

Alice: Event cameras differ from traditional cameras in that they only output changes in observed light for individual pixels instead of light received during an exposure window for all pixels, which makes them potentially well-suited to capture low-frequency events with large amounts of movement in the scene. They have the benefit of lower power consumption, which would conserve battery life of field-deployed camera traps, which means we can operate them for longer and justify the cost of more remote deployments.

Bob: I’ll repeat back to make sure I understand. You are proposing to move to a different system of image capture and animal detection that only responds to changes in pixel intensities, with the intention of preserving battery life. In your view, by aligning the pattern of data processing with the most valuable data, we have a more purpose-driven system. The claim is that power consumption would decrease because, instead of processing all frames, we would process only the most interesting data. Is that correct? If so, how do we know that our power consumption would actually change?

Alice: Yes, that purpose-alignment and battery life are the proposed benefits. We can simulate this behavior in the lab in an end-to-end test by setting up a monitor in front of an event camera and the baseline conventional camera system. Then, we can replay data we have received from the field that has a low frequency of animal movement events and monitor power consumption over time.

Bob: That sounds reasonable to validate the power consumption claims. Do we know that we have explored other possible optimizations to improve battery life? Would the event camera give the same quality of data as the conventional camera? How do we know that we will capture all of the same events?

Alice: Separately from the battery life test, we can compare the behavior of the event camera and the conventional camera through a simulation of each as image processing algorithms on our existing data set to get the precision and recall of detected events.

Bob: Is our existing data set biased because it has been captured with conventional cameras? How well will we be able to simulate the effect of using these event cameras instead? What does a pixel reading from an event camera really mean in comparison with a conventional camera?

Alice: Because our existing cameras capture at 30 FPS, and the responsiveness of event cameras is much faster, we cannot make perfect comparisons between the two platforms using our existing data. Event cameras can respond within microseconds. But the sum of all pixel change events that would happen between two frames is probably approximated by the inter-frame diff. We can validate this with a new data set of a side-by-side comparison of the two camera types in a single field study.

Here, Alice presents the idea, Bob starts by repeating it back to ensure understanding and tries to find the limits of what is currently known about event camera technology. Bob is pushing to learn how the two camera types will be compared. Alice recognizes Bob’s focus and responds with design of experiment proposals. To further the ‘tech propagation’ Bob could then create the detailed experimental design (Application), which Alice would review. Bob’s design would give Alice the opportunity to Scaffold or Coach about anything Bob missed about event cameras.

Either party could have chosen to move to discussion of the cost of the two camera types, more specifics about the gains in battery life, the number of vendors supplying event cameras, how durable they are in the field, the relative difficulty of developing image processing algorithms for event cameras, etc. It is not critical to touch on every aspect during “Reasoning about Uncertainty”, and it may be better to move to Application on the most important topics first.

Internally, Bob may be thinking about the right way to store the new streams of data from event cameras, if they will be queryable in the same way, or if the change in source data will allow the animal experts who receive the images to distinguish between different polar bears. Alice may be thinking about the long-term sustainability of this camera system because it is less commonly employed. By cycling between the different “Cognitive Apprenticeship” techniques, Alice and Bob can accelerate the communication of these unstated ideas. Each element of the cycle advances the technical evaluation and also prompts reflection on how their organization would have to change to support it.
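Alice’s closing approximation – that a conventional camera’s inter-frame diff approximates the accumulated pixel change events – lends itself to a quick sketch. The code below is a hypothetical illustration, not part of the dialogue: the function name, contrast threshold, and synthetic frames are all assumptions, and a real event sensor emits multiple timestamped events per pixel in response to log-intensity changes, not a single net sign.

```python
import numpy as np

def simulate_events(frame_a, frame_b, threshold=0.2):
    """Approximate the events an event camera might emit between two
    conventional frames: a +1/-1 per pixel whose log-intensity change
    exceeds the contrast threshold, keeping only the net sign."""
    eps = 1e-3  # avoid log(0) on fully dark pixels
    diff = np.log(frame_b + eps) - np.log(frame_a + eps)
    events = np.zeros_like(diff, dtype=np.int8)
    events[diff > threshold] = 1    # brightness increased
    events[diff < -threshold] = -1  # brightness decreased
    return events

# Two synthetic 4x4 grayscale frames in [0, 1]: an "animal" brightens
# the lower-right 2x2 corner between frames.
before = np.full((4, 4), 0.5)
after = before.copy()
after[2:, 2:] = 0.9

events = simulate_events(before, after)
print(events)                      # nonzero only where the scene changed
print(int(np.abs(events).sum()))   # 4 active pixels out of 16
```

A side-by-side field study, as Alice proposes, would be the check on how far this net-sign approximation diverges from real microsecond-scale event streams.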

Modularity in Curriculum Design

In the land of Surricula every youth must hear the sagas of stories from the sages before going out to the shore to fight the sea-goblins that dwell there. The sages inherited the sagas from their own elders and added their own twist to each story. Some stories depend on previous ones that others tell. But the sages rarely listen to the latest stories told by the others. The youths often lose sight of how they all fit together. Sometimes the sages meet to divide the science of fighting sea-goblins into better sequences of stories, or to add or subtract some, but once they do, they usually let each member decide for themselves how to tell them. Some sages even freestyle their stories and don’t write them down. A reform movement of sages calling themselves the Ceruleans decides that it is time to show the connections in the sequence by having multiple sages tell related interwoven stories – for example, about the history of the sea-goblins and biology of the dunes on the anti-goblin berms. The star Ceruleans tell compelling stories, but even among the Ceruleans, few want to retell the ones that others came up with in the same way. It takes a great deal of effort and skill to compose these interwoven stories, and the benefits are subtle – only visible in the long term success against the sea-goblins. Can these new stories scale beyond the stars?

Modularity is the principle of formally defined functional encapsulation. Components, called modules, perform specific functions with defined patterns of interaction with other components so that: they can be used in larger systems without requiring the system designer to reason about the deep internal details of how each function is accomplished (Decomposition), they can be redesigned and swapped in the place of other components that obey the same modular interface (Interchangeability), and the performance of each function by a module does not interfere with that of others (Closure). Modularity is a foundation of effective engineering because it allows for subdivision of work into smaller tasks that are easier to reason about and can be given to different people, for future improvements to the specific functions without breaking the behavior of the entire systems, and for components to be reused in any system that requires the same function.
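The first two properties can be made concrete in a short sketch. The names below (Storage, InMemoryStorage, LoggingStorage, cache_user) are hypothetical, chosen only to illustrate the pattern; closure is only gestured at by the fact that logging does not disturb the stored values.

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """The modular interface: a larger system depends only on this
    contract, not on how each implementation works inside."""
    @abstractmethod
    def put(self, key: str, value: str) -> None: ...
    @abstractmethod
    def get(self, key: str) -> str: ...

class InMemoryStorage(Storage):
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data[key]

class LoggingStorage(Storage):
    """Interchangeability: wraps any Storage and swaps in anywhere
    a Storage is expected, adding an access log."""
    def __init__(self, inner: Storage):
        self.inner, self.log = inner, []
    def put(self, key, value):
        self.log.append(("put", key))
        self.inner.put(key, value)
    def get(self, key):
        self.log.append(("get", key))
        return self.inner.get(key)

def cache_user(store: Storage):
    # Decomposition: this function reasons only about the interface.
    store.put("user", "alice")
    return store.get("user")

assert cache_user(InMemoryStorage()) == "alice"
assert cache_user(LoggingStorage(InMemoryStorage())) == "alice"
```

cache_user never inspects which implementation it received; the two assertions exercise interchangeability by swapping implementations behind the same contract.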

Are these benefits of modularity specific to engineering, or can they also be applied to education? Are the Ceruleans right to argue that an excessive focus on modularity actually works against comprehensive understanding of the subject matter? After all, decomposition tries to break large problems into smaller pieces that do not require systemic comprehension. A completely modular curriculum can be disjointed, especially if it is merely “subdivided” into different courses without clear definition of the expected outcome and the place of each course in the broader sequence. At its worst, the curriculum becomes broken into different fiefdoms over which various instructors have control without a clear picture of the whole.

In place of the traditional model with courses as the core modular units, we argue it is better to think of education, and software engineering education in particular, as a fabric of interwoven disciplinary threads with explicit dependencies between the concepts and skills in the sequence. For example, within Discrete Math, the concept of a partial order is the basis for any algorithmic understanding of executing a collection of calculations with chains of dependencies which arises in Data Structures and Algorithms. This and other such dependencies should be stated as explicit linkages in the curricular fabric.
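Such linkages could be stated as data. The fragment below is a hypothetical sketch (the concept names and edges are illustrative, not a curricular proposal) that encodes a dependency fabric as a partial order and uses Python’s standard-library topological sort to produce one valid teaching sequence:

```python
from graphlib import TopologicalSorter

# Each concept maps to the set of concepts it depends on. This is a
# partial order: unrelated concepts ("partial order" vs. "directed
# graphs") are incomparable and may be taught in either order.
dependencies = {
    "sets and relations": set(),
    "partial order": {"sets and relations"},
    "directed graphs": {"sets and relations"},
    "topological sort": {"partial order", "directed graphs"},
    "scheduling dependent calculations": {"topological sort"},
}

# Any linear extension of the partial order is a valid teaching sequence.
sequence = list(TopologicalSorter(dependencies).static_order())
for concept in sequence:
    print(concept)
```

A cycle in the dependency map (graphlib raises CycleError) would flag an incoherent fabric, which is itself a useful check for curriculum designers.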

By making concepts or skills the base modules, curriculum designers have the freedom to innovate in small pieces by improving individual lectures, readings, problem sets, or project assignments, while also being able to compose such modules into courses and then into disciplinary threads. While there may be many disagreements about the proper content of a Data Visualization course, it is much easier to communicate about one concept, such as small multiples, and about what the module’s “contract” – the mark of success for its skillful employment – should be. Content creators are then free to disagree about the right way to teach the skill and to iteratively improve on its delivery. This incremental improvement is especially valuable in software, where the state of the industry constantly changes.

Contra the Ceruleans, Surricula’s sagas need openness and documentary rigor to enable such improvability more than they need bespoke interdisciplinary stories, as we will explain in greater detail in the next posts.

An Open Software Engineering Curriculum Introduction

Technological civilization depends on engineers who can think creatively, work on teams, design experiments, build new products, support existing systems, talk to users, and contend with questions of the proper use of technology in society. How can people with technical talent and interest learn all of these skills? Does this learning need to happen in a traditional college setting? Could it happen elsewhere, or continue throughout one’s life? How can we make such learning accessible to as many people as possible?

This “Open Software Engineering Curriculum” is an iterative approach to answering all of these questions within the domain of software engineering. In addition to the core knowledge of computer science, it aims to set out the professional skills required to succeed as an engineer. A large part of this will be the curation of existing resources, with new content developed as necessary. It will draw from my experience at Olin, my experience at Amazon, and my own personal reflection. In particular, this effort aims to contribute to Olin’s “revolution in engineering education” by explicating its mostly unwritten theory and comparing it to what is needed in practice.

Envisioned Outcomes

Who is the “whole new engineer” (WNE) described in Olin’s vision documents?

In my own formulation, the WNE:

  1. Has competency in all stages of the product life cycle, which we call “Explore, Learn, Conceive, Design, Implement, Operate”.
  2. Works to improve the well-being of the product’s users and society.
  3. Understands management of a venture or business and how engineering fits into the advancement of its goals.
  4. Works effectively in teams.
  5. Is a capable communicator in writing, oral presentations, and documentation.
  6. Reflects on how engineers learn and develop their skills, and understands engineering as a discipline that is open to improvement.

Who could become this WNE? It should not just be high school students who enter a Bachelor’s program designed with these goals in mind. It ought to be anyone who wants to develop an engineering mindset, whatever their state of life.

Principles

To promote this, OSEC as a project will be:

  1. Accessible: All internal works will be in the public domain. External works referenced as part of OSEC should be accessible as digital files, should preferably be in the public domain or under permissive licenses, and, if not, should be low-cost.
  2. Complete: Completeness of content is more important than avoiding repetition.
  3. Iterative: The content will improve over time. It is more important to cover all relevant topics adequately and solicit feedback for improvement than to make each segment perfect on first publication.
  4. Reviewable: Resources will be structured to allow for in-line commentary and easy response.
    • As of 2023/09/24, this is using the in-line commenting feature on the constituent documents, attached at the end. This will evolve in the future.
  5. Curated: External content referenced as part of OSEC will be reviewed thoroughly and contextualized within the broader project. Every piece of content will have explicit justification for its inclusion. This will be most of the effort.
  6. Shareable: The internal OSEC content will be distributable as a single file by minimizing resource size and using common formats, or as a repository.
    • This content will likely move to github eventually.
  7. Opinionated: While it will integrate content and feedback from many sources, OSEC as a compendium will be one person’s work, to maintain coherence. Its placement in the public domain will allow extension or remixing by others as they see fit.

OSEC’s three main parts will be a curated software engineering curriculum, a collection of essays on engineering practice and how to develop a learning community outside of the typical structure, and meta-commentary on the design of such a curriculum and pedagogical theory.

Contents

  1. Pedagogy and Curriculum Structure
    1. Direct Instruction, Do-Learn, and Crystallized Engagement
    2. Mindset of the New Engineer
    3. The Engineering Toolbox and Toolshed
    4. The Company Cohort
    5. Modularity in Curriculum Design
    6. The Flow of Cognitive Apprenticeship and New Tech Propagation
  2. Engineering Practice
    1. A Generative-Evaluative Design Meeting Style
    2. The 80/20 of Team Cohesion
    3. Review and Approval Systems
    4. The Engineering Design Sequence
  3. Core Content
    1. Curriculum Overview

An Open Software Engineering Curriculum Overview

What is the right curriculum to achieve the envisioned outcomes for software engineers? I propose that such a curriculum has four interwoven disciplinary threads with the desired professional skills and mindset integrated into their courses. The four threads are:

  1. Computers as Formal Systems: Programming languages are unlike natural languages in that their statements mean exactly one thing. Any piece of code is a definite specification of operations or “instructions” that the “processing units” of a computer will perform. Instructions are simple arithmetical operations and movements of stored data, but they can be built up into extremely powerful tools. This thread starts with treating a computer like a ‘big calculator’ to crunch numbers to understand the behavior of physical systems in “Modeling and Simulation”, develops with the systematic construction of software, and reaches its completion with the theoretical foundation of what computers can do in “Foundations of Computer Science”.
    1. Modeling and Simulation, Software Design, Discrete Math, Data Structures and Algorithms, Computer Architecture, Software Systems, Databases, Foundations of Computer Science.
  2. Computers for Mathematical Analysis: Engineers work with data, and data from the real world is noisy! Processing large volumes of data requires computers. Engineers need to learn how to use computers to collect data, manipulate it, draw inferences from it, and present it to people in a way they can understand. This thread begins with a foundation in Probability, Statistics, and Linear Algebra, progresses through Machine Learning (automated pattern recognition) and serves as the basis for the mindset of experimental design.
    1. Modeling and Simulation, Linear Algebra, Probability and Statistics, Machine Learning I: Tabular, Data Visualization, Machine Learning II: Computer Vision, Computational Robotics.
  3. Computers in Integrated Products: We can use computers not only to do math, but also in embedded physical systems that do things in the real world. Once we physically deploy complete systems, the fun of integration challenges begins! This is a sequence of integrated hardware, electrical, and software projects that cultivates the mindset of building full products with computation at their core.
    1. Real World Measurements, Fundamentals of Robotics, Computational Robotics.
  4. Human Context of Engineering: Engineers build things for people with other engineers. How do we design the right products for our users, along with them? How do we communicate our mathematical analyses and experimental results? How do we learn from and teach others? And then how do we integrate all of our technical and interpersonal skills into human organizations that build things together? This thread has several courses explicitly within it, but permeates through the curriculum.
    1. User-Oriented Design, Data Visualization, Teaching and Learning, Venture Management.

While these threads are presented distinctly for clarity of communication, the knowledge and skills they impart are all tied together. This particular example of an “Open Software Engineering Curriculum” (OSEC) has eight “phases” that could roughly correspond to eight semesters, or eight increments of independent study. With eight semesters, there is plenty of room for “distribution requirements”. Each pair of phases (I and II, III and IV, etc.) could be done simultaneously to compress the sequence into two years without other courses included. Each pair contains a course with a substantial team project, highlighted in green. There are seven underlined courses that lend themselves to the creation of projects that are suitable for a “portfolio”.

As described in the “Structured Exploration” and “Mindset” chapters, this curriculum is designed to promote a proactive approach to learning, teamwork, user orientation, and long-term systems thinking. These themes are introduced early in the sequence so that they can be cultivated over time.

OSEC, as a project, is mostly a work of curation and “restyling” of existing content to better promote the proposed outcomes. Almost all of the named courses correspond to well-known pedagogical modules, which are listed as references. These references will be incrementally reworked and further curated. I have intentionally “tied” together certain modules in the sequence that ought to be taught within a single block. Because software engineers are primarily practitioners, the learning of math is tied to application within this curriculum.

The intention of this OSEC is to present a comprehensive and explainable course sequence. Compare this to the MIT Computer Science and Engineering requirements and outcomes documents. Instead of merely listing the content, this is meant to explain how it all fits together.

Courses

  1. Real World Measurements: Arduino C:
    1. Imagine a “weather station” that monitors wind speed, rain, humidity, sunlight, etc. and is physically deployed. To put together such a weather station requires integration of microcontrollers and basic circuits, as well as simple mechanical design to enclose it and make it resistant to the elements. The physical system must be maintained through time, over months or even years, and can be progressively improved in response to problems. Once it is deployed as a system and reporting data, the data stream can be processed and uploaded to the cloud. Visualizations can be built on the data stream as a simple website. The data set presents opportunities for analysis and predictive modeling. Multiple stations can communicate as a mesh network. Starting from even a simple measurement device, there are many avenues for problem-based technical depth within software engineering across the product life cycle. The practice of physical and software maintenance within this project helps to prevent a mindset in which “everyone wants to invent the future, but no one wants to be on the future’s on-call rotation”. All of the technical features are built upon the continuous proper functioning of this device and data pipeline.
    2. This is the central project within this course, but other kinds of sensor integration are possible. It is located at the beginning of the curriculum to introduce ideas of system integration (HW, EE, SW) and long-term maintenance. For simplicity of management and to ensure good learning for each student, this is naturally an individual project rather than a team project.
    3. Budget for Arduino, servos, sensors, lights, wires – roughly $200.
    4. Data can be used as the basis for future courses: Modeling and Simulation, ProbStat, Machine Learning, Data Visualization.
    5. Emphasized Stages: Explore, Learn, Design, Implement, Operate
    6. References:
      1. MIT Introduction To Electronics, Signals, And Measurement
  2. User Oriented Design
    1. Students can learn how to care about long-term effects and sustainability of products for the people using them simply by working with them over longer periods of time. Olin’s “Engineering for Humanity” is an introductory design course that pairs students with seniors and has them try to solve problems that they face, often centered on negotiating the physical environment. Working with seniors in this way has many advantages: physical objects are often not easy for them to use, and so there are plenty of opportunities for assisting them. Solutions can be inexpensive. Seniors are often overlooked, generally have a lot of free time, and are happy to talk to young people who want to help them, so they are an ideal user group. By extending this engagement beyond a semester-long course and doing long-term support for these projects, students can gain an appreciation for lasting impacts on users as well as technical sustainability.
    2. This is located at the beginning of the curriculum to plant the seed of thinking about impact on the end user. Some students enter engineering just wanting to create something cool, without thinking about the end result, or might be hesitant to engage with people. Seniors naturally want to talk to them, so they are a great group to start with.
    3. Team Project, Portfolio Project
    4. Emphasized Stages: Explore, Conceive, Implement, Operate
    5. References:
      1. Olin Engineering for Humanity
      2. MIT Principles And Practice Of Assistive Technology
  3. Modeling and Simulation, Probability and Statistics, Linear Algebra
    1. Topics: Weather Forecasting, Stocks and Flows, Simple mechanics and thermal simulation, Introduction of datasets that require probabilistic modeling, Introduction to Bayesian and Frequentist statistics, Monte Carlo Methods, Transformation of physical systems through time, Introduction of Linear Programming, Optimization Approaches.
    2. Emphasized Stages: Explore, Learn, Implement
    3. References:
      1. MIT Probabilistic Systems Analysis And Applied Probability
      2. MIT Introduction To Probability And Statistics
      3. MIT Linear Algebra
      4. MIT Modeling and Simulation
      5. MIT Inference of Data and Models
      6. MIT Modeling Environmental Complexity
      7. Olin Quantitative Engineering Analysis
      8. Olin Modeling and Simulation
  4. Software Design and Discrete Math
    1. Methodical design of software and learning the mathematical basis of software systems. Object Oriented Design, Modularity within software projects, Introducing time and space complexity analysis. Building a video game as a group to introduce collaborative SW development, simple user experience testing, Model View Control design pattern.
    2. Discrete Math is often mistakenly taught separately, after the introductory software “design” course, but this is wasteful because all of its concepts arise naturally in learning the theory of software. Software projects can be chosen to illustrate different ideas within Discrete Math. Formal thinking and constructing proofs of mathematical relations introduce students to the idea of computers as mathematical symbolic-manipulation machines instead of just tools. It would be wrong to start with the theory and postpone “building things with computers” until after the math.
    3. Team Project, Portfolio Project
    4. Emphasized Stages: Explore, Learn, Design, Implement
    5. References:
      1. MIT Math for Computer Science (Discrete Math)
      2. MIT Software Construction
      3. MIT Creating Video Games
      4. Olin Software Design
  5. Machine Learning I: Tabular and Data Visualization
    1. Analysis of datasets and “statistical learning” leading to close examination of the data and its distributions. Emphasis on communication to people in general interest technical writing alongside purpose-driven visualizations. Making a simple website for these artifacts. Understanding how pattern recognition can be automated. Can build upon the weather station data. Creating ML models of physical processes: this is an advancement over the simpler approaches introduced in Modeling and Simulation / Linear Algebra. Includes (review of) necessary Calculus concepts. “Citizen Data Science”: using data visualization and modeling to present an argument about some topic of public interest. This is an exercise in public communication through data. Accuracy of analysis and presentation is key.
    2. Portfolio Project
    3. Emphasized Stages: Explore, Learn, Conceive, Design, Implement
    4. References:
      1. MIT Intro to Machine Learning
      2. MIT Visualization For Mathematics, Science, And Technology Education
      3. MIT Machine Learning For Healthcare
  6. Data Structures and Algorithms
    1. Formal analysis of computational processes.
    2. Emphasized Stages: Learn, Design, Implement
    3. References:
      1. The Algorithm Design Manual, Steven Skiena
      2. MIT Intro to Algorithms
      3. MIT Design and Analysis of Algorithms
  7. Fundamentals of Robotics
    1. Building integrated mechanical, electrical, and software systems in a team. Characterization of motors. Build a simple robot that moves in the world in response to sensor input. After this course, students are well-positioned to mentor a high school robotics team.
    2. Team project, Portfolio project
    3. Emphasized Stages: Explore, Learn, Design, Implement, Operate
    4. References:
      1. Olin Fundamentals of Robotics
  8. Teaching and Learning
    1. Phase V begins the transition from foundational learning to specialized learning. Engagement with high school students or earlier learners of this curriculum in a teaching or mentoring capacity. Mentoring is a core engineering skill for the sake of organizational sustainability, and learning how to communicate with more junior people requires a completely different mindset than communicating with professors who know more about the subject. After the “Citizen Data Science” exercise in “Data Visualization”, communication skills deepen through teaching others technical subjects.
    2. Emphasized Skills: Communication, Mentoring, Reflection on Process of Learning
    3. References:
      1. “How Learning Works” (Susan A. Ambrose et al.)
  9. Computer Architecture
    1. Working up from logic gates and arithmetic logic units to a complete simple computer in simulation. Instruction sets and Assembly language.
    2. Emphasized Stages: Learn, Design, Implement
    3. References:
      1. MIT Computation Structures
      2. MIT Computer System Architecture
  10. Machine Learning II: Computer Vision
    1. Classical computer vision techniques, Convolutional Neural Networks, Transformers. Data augmentation. Object Detection, Segmentation. Animal identification in the wild, which can be integrated into the “weather station”, then relate presence of animals to environmental conditions.
    2. Portfolio Project
    3. Emphasized Stages: Explore, Learn, Design, Implement, Operate
    4. References:
      1. MIT Intro to Deep Learning
  11. Software Systems & Databases
    1. Introduction to: Operating Systems, Web Servers, Networks & Distributed Systems, Database Management, System Performance Analysis. Emphasis on detailed design reviews of peers’ artifacts.
    2. Emphasized Stages: Explore, Learn, Design, Implement, Operate
    3. References:
      1. MIT Computer System Engineering
      2. MIT Operating System Engineering
      3. MIT Performance Engineering Of Software Systems
      4. MIT Database Systems
  12. Venture Management
    1. How do we integrate all of our technical and interpersonal skills into human organizations that build things together?
      1. As of 2023/09/21: Open question on whether this should attempt a practical component of operating a business. I am more skeptical of this because it is difficult to make actual business operations self-contained within the scope of a course and still be a meaningful management experience. Current plan is that this would primarily include the curation of case studies and readings on management with exercises in opportunity assessment via interviewing hypothetical users and creating a business plan.
    2. References:
      1. MIT Patents, Copyrights, And The Law Of Intellectual Property
      2. MIT Engineering Risk-Benefit Analysis
      3. MIT Nuts And Bolts Of Business Plans
  13. Computational Robotics
    1. An integrated robotic system navigates in its environment to accomplish tasks based on image input. SLAM. Computer vision in the wild.
    2. Team Portfolio, Portfolio Project
    3. Emphasized Stages: Explore, Learn, Conceive, Design, Implement, Operate
    4. References:
      1. Olin Computational Robotics
  14. Foundations of Computer Science
    1. Theory of Automata, Turing Machines, Computability, Functional Programming. The standard approach would be to focus solely on the proofs. The intention in this course is to actually implement ‘automata’ in OCaml.
    2. Emphasized Stages: Learn, Design, Implement
    3. References:
      1. MIT Automata, Computability, And Complexity
      2. MIT Theory Of Computation
      3. MIT Information Theory
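The weather-station project that anchors the sequence could begin with a data pipeline as small as the sketch below (in Python; the sensor fields, sampling cadence, and summary statistic are illustrative assumptions, not part of the curriculum). The point is that even the first course produces a durable data stream that later courses — visualization, ProbStat, Machine Learning — can build on.

```python
import json
import random
import statistics
from datetime import datetime, timezone

def read_sample():
    """Stand-in for reading the Arduino over serial; returns one measurement.
    Real code would parse a line from the microcontroller instead of using
    random values."""
    return {
        "time": datetime.now(timezone.utc).isoformat(),
        "wind_speed_mps": round(random.uniform(0, 20), 1),
        "humidity_pct": round(random.uniform(20, 100), 1),
    }

def append_sample(log, sample):
    """The durable record: every downstream course builds on this log.
    Returns the sample as one JSON line, ready to ship to the cloud."""
    log.append(sample)
    return json.dumps(sample)

def daily_summary(log):
    """A first taste of analysis on the accumulated stream."""
    speeds = [s["wind_speed_mps"] for s in log]
    return {"n": len(log), "mean_wind_mps": round(statistics.mean(speeds), 2)}

log = []
for _ in range(24):  # pretend: one sample per hour for a day
    append_sample(log, read_sample())
print(daily_summary(log))
```

Keeping this loop running for months, not just writing it once, is where the maintenance lessons of the course come from.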

The Engineering Design Sequence

What are the different kinds of design meetings you might encounter? The Brainstorming meeting is one of the most important, but being prepared for all of the different design meetings and processes in the product lifecycle will help you anticipate the twists and turns of professional life.

1. Requirements Gathering and Inception: In this meeting you will define the problem statement: Who is the customer? What does the customer need? What is the timeline for delivery? The answers to these questions will evolve over time, but they must start from somewhere. This is the prompt for the design ideation meeting.

2. Design and Ideation Meeting: This is the subject of the previous letter. Every member of the team prepares for this meeting based on the initial profile of the customer discussed in the Requirements Gathering meeting. This meeting is within the team, and should be at most two hours long to start.

3. Design Preview: The Ideation meeting produces a few candidate designs that are worthy of further discussion. The “Design Preview” is a rapid written summary of these designs that is sent out to the broader engineering community within the venture. This gives a wide range of people outside of the immediate team an opportunity to offer high-level feedback on the various ideas. Here, other engineers can suggest different technologies that may have been missed or mention unforeseen pitfalls. This Preview should be published the same day as or the day after the Ideation meeting, and the commenting period may be about a week long. The purpose of this process is to prevent the team from investing too much in the development of specific designs before many people have had the chance to mention problems that may have been overlooked. Distributing these ideas and soliciting feedback also builds a more closely integrated engineering community.

4. High Level Design Review: A synchronous review of a design document that covers all of the major aspects at a summary level. The sections will likely include:

  1. Problem Statement, Customer Needs
  2. Major design constraints
  3. A component diagram of the most important modules of the system
    1. This shows how the different modules interact along the happy path.
  4. What are the APIs?
    1. What is the cost per million operations for each of the APIs? What are the load projections for the first year? In five years?
  5. What cloud services will be used, in public offerings (such as AWS) and internal dependencies?
  6. How do the servers scale? Could the service use a cellular architecture?
  7. Monitoring, logging, and archival strategy
  8. Security strategy
    1. What is the most sensitive data that this can handle? Will any safety-critical processes be impacted if this system is compromised?
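The cost and load questions in the HLD are back-of-envelope arithmetic, which can be made explicit. A hedged sketch — every number here (the blended cost per million operations, the launch load, the annual doubling) is an invented assumption for illustration:

```python
COST_PER_MILLION_OPS = 0.35  # dollars; hypothetical blended API cost

def annual_cost(requests_per_second, cost_per_million=COST_PER_MILLION_OPS):
    """Project the yearly API spend from a steady request rate."""
    ops_per_year = requests_per_second * 60 * 60 * 24 * 365
    return ops_per_year / 1_000_000 * cost_per_million

year_one = annual_cost(50)            # launch projection: 50 rps
year_five = annual_cost(50 * 2 ** 4)  # assume load doubles each year
print(f"year 1: ${year_one:,.0f}, year 5: ${year_five:,.0f}")
```

Writing the assumptions down like this makes them easy to revisit in the Scaling Challenges meeting, when real production numbers replace the guesses.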

The Design Preview was a sketch of the product; this is the full picture. It should be possible to read this document in less than 30 minutes so that another 30-60 minutes remain in the meeting for discussion and review of comments. The HLD should have a smaller set of reviewers who have built similar services but are on different teams.

The HLD will not be a complete specification of the system. It serves as a starting point for understanding how all of the pieces fit together. The designers must begin to consider each aspect of the system to write the HLD, and the reviewers have a chance to offer guiding feedback on these choices.

5. Low-Level Design Review(s): This is an asynchronous review of the most important algorithms, database schemata, call & retry patterns, state machines, and timing diagrams of the service. These include the unhappy paths in the system, such as how various dependency failures will be handled. The algorithms should be included as pseudo code. These design documents should allow line-by-line commenting.

Engineers will be able to work from this LLD to create individual stories (discrete units of implementation). Because of the level of detail and effort required to make this design, it is not necessary for every component to be complete before starting the asynchronous review. For example, there might be a single review of the database schemata, which could even have its own meeting.

6. Invalidated Assumption Response Meeting: “No plan survives first contact with the enemy”. In the practice of engineering, the laws of nature do not act adversarially as a military foe does, so plans survive longer, but they are never perfect. All designs make assumptions about how the world works, whether about customer behavior, the structure of internal service dependencies, or the average runtime performance of an algorithm on some distribution of data. Inevitably, something goes wrong. One should be psychologically prepared not only to adapt to these invalidated assumptions, but to actively pursue evidence that confirms or refutes them, with priority given to those that are least certain.

In this meeting, the engineering team brainstorms options for dealing with these invalidated assumptions. This might require rewriting large portions of the service, depending on the severity of the problem. Engineers must start from a reconceptualization of the product: if we knew at the beginning what we know now, how would we have designed it? Because of the demands of delivering the product, it might not be possible to rewrite everything or start from scratch. But if you start from the ideal state and work toward what is feasible within the project timeline, you will come to a better solution than if you attempt the quickest patch to the design without an idea of how to do it the right way. The new conceptualization will also serve as a starting point for a re-architecture when the service encounters scaling challenges.

As the engineers on a team become more familiar with a design space, for example how to build customer-facing cloud services on AWS, their implementation speed will increase. They will then be more willing to discard outmoded components, because the pull of the sunk cost fallacy weakens when rebuilding is cheap. Keeping a level head in the face of major design changes that become necessary midway through implementation is an important part of being an effective engineer.

7. Scaling Challenges Response Meeting: Congratulations! Your product delivered clear value for the customer and you dealt with changing requirements effectively. It is so successful in the limited roll-out that management wants to deploy it everywhere, ASAP. You are ready for that, right?

While you may have designed your service to scale horizontally rather than vertically (such that increased load just requires more servers, not more powerful servers), you will probably find that adding new customers exposes new invalid assumptions, or reveals scaling processes that require expert human input, such as customer-specific configuration. These processes need to be automated, pushed to the customer (in the case of configuration or fine-tuning), or delegated to a deployment team. Accepting them as a recurring cost to the engineering team will not work as a long-term strategy. If you delegate them to deployment engineers, you will have to create training materials and prepare to spin up such a team, if you do not already have one. If customers can be taught to set their own configuration, then you may seemingly get that “for free” regardless of scale, but you risk compromising the customer experience and open the engineering team to high-urgency requests from them. Automating such tasks is ideal in the long term, but may take too long to design for the current needs. New load on your service may also reveal hidden super-linear scaling behaviors, such as synchronization required among all hosts or within some internal database or cache.

Prevention of these issues during the design phase beats trying to fix them in-flight, but despite your best efforts they will likely still arise. In this meeting, you will revise your growth projections for your service that you made in your initial HLD. You will now have concrete data for the performance of your service in production which you can use to make more confident estimates of resource needs. Your team will have a list of non-automated tasks and super-linear scaling behaviors that arose during the initial deployment of your service, and you will record all of them in a common register. The resolution of each issue will have a cost in some combination of automation (non-recurring engineering), deployment management (per-customer one-time cost), continuous support (recurring support engineering cost), and server costs (operational costs).
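The “common register” of scaling issues, with its four cost categories, could be sketched as a small data model. This is a hedged illustration — the field names, units (engineer-weeks, dollars per year), and example entries are assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class ScalingIssue:
    """One entry in the team's register of scaling problems, with its
    resolution cost broken into the four categories from the text."""
    description: str
    automation_cost: float = 0.0         # non-recurring engineering, eng-weeks
    per_customer_cost: float = 0.0       # one-time deployment cost per customer
    recurring_support_cost: float = 0.0  # ongoing eng-weeks per year
    server_cost: float = 0.0             # operational dollars per year

register = [
    ScalingIssue("customer-specific tuning", per_customer_cost=0.5),
    ScalingIssue("cache synchronization storms", automation_cost=6.0,
                 server_cost=12000.0),
]

# A crude prioritization: address whatever recurs per customer or per
# on-call rotation first, since those costs compound with growth.
register.sort(key=lambda i: i.recurring_support_cost + i.per_customer_cost,
              reverse=True)
print([i.description for i in register])
```

Even this crude register makes the trade-off visible: a one-time automation investment versus a cost that scales with every new customer.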

8. Support Plan Meeting

Every customer support ticket, service outage, scaling failure, manual maintenance action, and explanation of the service behavior to management imposes a potentially recurring cost to your team to sustain the product. The engineering team must work to automate such actions where possible and otherwise standardize the team’s response through the creation of “runbooks” or protocols. These runbooks are guides to diagnosing and addressing common issues, performing maintenance, or answering frequent questions. They will often include case studies of actual support tickets. When updating these runbooks, the team should have a live review so that each member can ask questions or share additional information about related issues that may have arisen during an on-call rotation. These discussions naturally create statements of requirements for redesigns that will prevent or automatically remediate such issues.

9. The Quick Fix: Acknowledged Tech Debt

In an ideal world, every discovered design flaw would lead to a rewrite of the service from first principles. In practice, that cannot happen. All products, and especially software services, have a limited lifespan, and it is sometimes appropriate to make a quick fix that isn’t pretty, either in the expectation that it will be resolved at a later time or with the knowledge that it will persist until the service is deprecated. The purpose of this design meeting is to reach common acknowledgement of the suitability of the fix, review the documentation of its shortcomings, and sketch what the right design would have been if the team were starting from scratch. These sketches also seed the design of future generations of the service.

10. Transmission and Inheritance

Changing ownership over a service would ideally look like a baton pass, but in practice it is moving into someone’s fully furnished house and slowly discovering all of the infrastructural problems with it.

As no one’s career should be tied to a single product, no product or service should be indefinitely owned by a single person or engineering team. The most natural progression is for engineers to move between teams every 1-5 years.

This rotation serves to:

  1. Bring fresh ideas, perspectives, and variety of technical expertise applied to specific problems,
  2. Reduce burn out by having people work on a variety of projects,
  3. Increase support redundancy and reduce siloization of efforts,
  4. Force regular maintenance of documentation and test its comprehensibility while onboarding new members to a problem.

This comes at the cost of:

  1. The effort of teaching new members and their effort in learning,
  2. The time to build new working relationships between engineers, product managers, customers, and other engineering teams.

If rotation is done too soon, engineers may not have had time to develop the deep knowledge of the problem space necessary to comprehensively evaluate designs. Working in one domain on a variety of different projects can enable engineers to see how problems relate and innovate on how to solve many things at once. In practice, however, premature rotation is rare because the cost of knowledge transfer is usually rated highly, and organizational inertia limits how frequently it is done.

The engineer rotating off of a project is responsible for finalizing documentation and summarizing the lessons learned during his tenure. He should write an overview of ongoing work, planned work, and research questions, and a retrospective of recognized shortcomings of the team’s designs and products. These should be reviewed together with the team.

A new engineer should start with an asynchronous review of the team’s documentation. A first reading is to become familiar with the shape of the domain, and a second is to make in-line comments with specific questions. One tenured engineer on the team then takes on the task of responding to these comments and filling the holes in the documentation. They then have a live meeting to review the questions and give an introductory implementation task to the new engineer.

11. Deprecation

The memories of praises won are long-gone now, only a pile of technical debt remains. Customer needs, service dependencies, and even design paradigms have changed. Starting over is far more attractive than continuing to repatch the holes in the ship. It’s time to deprecate the service.

Deprecation, as a process, often requires its own design or execution plan. You want to ensure that your customers have a seamless transition to the next generation. In software, this typically means that both services are running simultaneously and the new one starts with a limited roll out. If the boundary between the service and the rest of the world is clean, it may be possible to simply reroute request traffic to the new one. However, if there are any unfortunate couplings such as database access it might not be that easy. As each customer switches, you may even discover load-bearing bugs or undocumented behaviors in the old system that were not replicated in the new design.

The deprecation execution plan should be reviewed both within the team and with other relevant engineering and customer stakeholders. The plan will likely include:

  1. Documentation of any changes in customer-facing behavior with the new generation.
  2. A plan for validating the migration of each customer.
  3. A schedule
    1. for communication with customers
    2. for each operation within the deprecation, such as an initial migration of customer records to a new database, not allowing creation of new entities, then not allowing updates of existing entities, final migration of existing records, validation of the migration, blocking reads of the existing database, and final deletion / archival of records in the old database.
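The staged shutdown in that schedule behaves like a simple state machine, which could be sketched as below (in Python; the stage names follow the plan in the text, while the enforcement function and its operation names are illustrative assumptions):

```python
from enum import Enum

class Stage(Enum):
    """Deprecation stages for the old service, in the order executed."""
    LIVE = 0                # old service fully operational
    NO_CREATES = 1          # creation of new entities blocked
    NO_UPDATES = 2          # updates to existing entities also blocked
    READS_BLOCKED = 3       # reads blocked; all traffic on the new service
    ARCHIVED = 4            # records deleted or archived

def allowed(stage, operation):
    """Which operations the old service still accepts at each stage."""
    if operation == "create":
        return stage.value < Stage.NO_CREATES.value
    if operation == "update":
        return stage.value < Stage.NO_UPDATES.value
    if operation == "read":
        return stage.value < Stage.READS_BLOCKED.value
    return False

# Each stage strictly narrows what the old service will accept.
assert allowed(Stage.LIVE, "create")
assert not allowed(Stage.NO_CREATES, "create")
assert allowed(Stage.NO_UPDATES, "read")
```

Making the stages explicit like this also gives the validation plan something concrete to check at each step before advancing to the next.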

A Generative-Evaluative Design Meeting Style

What is the right way to run a brainstorming or design ideation meeting? What comes naturally to most people is to sit in a circle and freely shout out ideas until one gets the approval of a few other members, with critiques of each interleaved with their generation. But there are better ways motivated by a desire to solicit input from all members of the team, promote information sharing, and deliberately form and consider ideas against agreed-upon criteria. Here we present a “Generative-Evaluative Design Meeting Style” with justification for each step of the process. Effective meetings don’t happen by chance, and by intentionally structuring them and reflecting on what works, we can achieve better results.

“A Generative-Evaluative Design Meeting Style”

  1. Prior to the brainstorming meeting, the group should have a high-level discussion of what the customer or end user values.  This could include things like safety, frugality, convenience, simplicity in user experience, similarity to existing tools or designs, etc.  The purpose at this time is not necessarily to have definite agreement on each of these values, but to prompt everyone to get into the generative mode of thinking by adopting the perspective of the end user.  
  2. Ideally before the design meeting, present the question or design challenge to the group so each member can begin thinking of ideas and doing research.  If this is not possible, start the meeting with 10-15 minutes of “quiet time” in which everyone writes down his or her ideas on paper or on sticky notes independently.
  3. After that time, everyone selects any number of ideas to share with the group from those written down.  Everyone selects at least one to share.
  4. One person is chosen to be the facilitator of the discussion.  This person should ideally be the most junior member of the group, or the role may rotate among members.
  5. Going in a circle, each member introduces one idea in a 1-5 sentence explanation.  There is no debate or evaluation of the proposal at this time during the sharing process.  Only the most basic clarifying questions may be asked.  The facilitator writes each idea on the whiteboard with a title and a short summary.  The fact that the idea is written on the whiteboard by the facilitator helps to build distance between the proposer and the idea itself.
    1. It is common and expected that some ideas will overlap in this process, which is no problem. The presenter of the idea that overlaps may decide if it is worth listing separately, as a variant to an existing idea, or not at all.
    2. Everyone goes in a circle (again with each person contributing at least one idea) until everyone has exhausted ideas to share.
  6. After all ideas have been shared, the group uses “approval voting” not for ideas to pursue towards design, but for which should be discussed further.  All members vote by raising a hand for each idea they consider to be worthy of discussion, and each person can vote for any number of ideas.  Members should vote for an idea if it seems viable or if it would enrich the group’s discussion.  The number of votes is recorded for each idea.
  7. The ideas are then put up for further explanation in the order of the number of votes they received. It is not necessary to discuss every idea; the group may use its judgment about which ideas should be put up for longer discussion. The presenter of each idea gives more explanation as requested by the group, answers questions, gives clarification, etc. Still, at this time, the focus is on exposition of each idea and not evaluation. During this step, others may come up with more ideas, which is to be encouraged, and may add them to the queue to be discussed.
  8. Once everyone has had sufficient opportunity to share ideas, the group creates a decision matrix of the ideas that have generated the most interest and seem most plausible with the values of the customer or the design as the criteria for evaluation.  This is ideally done on the whiteboard. At this point, discussion is opened up for full critique of ideas against the evaluation criteria.  More criteria may be added, and the group may collectively try to refine the means of evaluation, e.g., how to judge the simplicity of a user interface.
  9. The group does not need to make a decision on the design choice during this meeting.  The appropriate outcome of a brainstorming meeting in this style is a list of action items, such as the creation of experimental plans to compare designs, calculating the cost of each idea, mathematical analysis of system dynamics under the different designs, etc. that would inform the decision process for the top proposals.  Or, the group may decide that none of the ideas would satisfy the needs of the problem, adjourn to conduct further research, and repeat this brainstorming process.
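Steps 6 through 8 above can be shown in miniature: approval voting orders the discussion, and the survivors go into a decision matrix scored against the customer’s values. The ideas, criteria, and scores below are invented purely for illustration:

```python
# Step 6: approval voting. Each member raises a hand for every idea
# worth discussing, so an idea's tally can exceed the number of ideas.
votes = {
    "solar-powered sensor": 5,
    "mesh networking": 3,
    "cellular uplink": 3,
    "carrier pigeon": 0,      # no votes: skipped entirely, no time wasted
}
discussion_order = sorted((idea for idea, v in votes.items() if v > 0),
                          key=lambda idea: -votes[idea])

# Step 8: decision matrix. Rows are surviving ideas, columns are design
# criteria scored 1-5 by the group during the evaluative phase.
criteria = ["cost", "reliability", "simplicity"]
matrix = {
    "solar-powered sensor": [4, 3, 4],
    "mesh networking": [2, 4, 2],
    "cellular uplink": [3, 4, 5],
}
totals = {idea: sum(scores) for idea, scores in matrix.items()}
print(discussion_order, totals)
```

Per step 9, the totals are not a verdict; they point at which comparisons (here, the weighting of cost against simplicity) deserve experiments or analysis before a decision.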

This meeting structure may seem strange or unfamiliar, but has a variety of benefits over a “free style” or unstructured brainstorming meeting.

  1. Separating the generative and evaluative phases:
    1. This is an important psychological benefit.  Without wading into questions of why this phenomenon exists, there seem to be two different intellectual modes – one generative, creative, and imaginative, and another analytical-critical.  Everyone should ideally stay in the “flow state” of idea generation and feel free to propose things that might sound crazy, because those ideas may contain within them pointers to aspects of the problem or user needs that would otherwise be missed.
    2. Brainstorming meetings often are impeded by either idea introduction and immediate critique, which can make some too hesitant to share, or immediate sharing of whatever pops into someone’s head, which can waste the group’s time.  By requiring everyone to write down and then intentionally share without immediate critique, this process moderates between those who are too eager and those who are too hesitant.
  2. Soliciting ideas from every member:
    1. Everyone is required to contribute at least one idea.  This helps to get a wide variety of experience (personal and technical) incorporated into the set of ideas evaluated.  Even if an idea is not selected, it may introduce a new technology, process, or design pattern to the group.
    2. This shared activity in which everyone participates helps to build team cohesion.
    3. Everyone practices design and ideation skills.
      1. Not only the most senior members should do the design.  Team design work should be collaborative.
      2. One of the junior members of the team facilitates the discussion so that he or she remains engaged.  The natural tendency is for the most senior members to have strong ideas from the beginning.  Having the junior members facilitate the meeting improves the vertical transmission of design knowledge by ensuring that the least experienced members can understand what is being proposed.  This also helps make ideas explicit and forces clear communication.  Design meetings are some of the most valuable opportunities for knowledge transmission in terms of time-density.
      3. By requiring everyone to meditate on the problem separately before sharing with the group, no one can be complacent with letting others do the design.  Then everyone can compare their own set of ideas generated with others’, which is valuable for both vertical and horizontal knowledge transmission.
    4. The requirement to write down one’s ideas with a formal process for introducing them to the group also restrains those who might take up too much of the discussion time and gives a structure to those who might be reticent or junior within the group.
  3. Approval voting of ideas for further discussion allows everyone to express their high-level evaluation extremely efficiently on every proposal.
    1. Voting for just one idea may obscure that some believe a second-best idea is still worthy of discussion, while having each person rank all ideas would take too long.
    2. It also allows the group to not waste discussion time on negative evaluation of ideas that no one wants to pursue.
  4. Separation of the idea from its creator by having the facilitator write each on the whiteboard with a title that describes the proposal instead of referring to the proposer.
    1. This helps everyone discuss the ideas dispassionately and without ego.
    2. Every idea is put in the same decision matrix, and then everyone looks at the decision matrix instead of the proposer.  This is another important psychological benefit, because idea evaluation is then framed in terms of what the customer needs instead of who is proposing it or a conflict between proposers.
  5. Once the idea is on the whiteboard, everyone can understand what has been proposed.  A common meeting pitfall is to verbally describe ideas without recording them, and then go in circles re-explaining what has already been introduced or debated.
    1. In a group of 4-8 people, it is likely that some will lose focus, and it is better that they can refer to the whiteboard rather than ask the basic questions again.
    2. The visual action (writing on the whiteboard) helps to focus attention when paired with the verbal description.
  6. Centering the critical discussion on the design criteria that were mostly decided before the meeting.
    1. This is “beginning with the end in mind”.  Everyone knows at the start of the meeting that the goal is to eventually evaluate the ideas based on these criteria.  The goal of the meeting is to decide on how to refine the evaluation of the proposed ideas in terms of experiments, research, and analysis.
    2. There needs to be serious consideration of the user or customer needs before brainstorming begins.  This helps to focus on the ideation itself during this meeting.
    3. It is expected that some user values that were not explicitly acknowledged at the start will arise only when comparing proposals.  This is a good outcome: the team improves its understanding of the customers by imagining what each proposal would do for them.  The team also moves from private judgments of user needs to common explicit understandings by requiring that the evaluation be framed in terms of definite criteria.
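The approval-voting step above is simple to mechanize.  Here is a minimal sketch in Python; the member names, idea titles, and the `approval_tally` helper are all hypothetical, invented for illustration:

```python
from collections import Counter

def approval_tally(votes):
    """Tally approval votes: each member may approve any number of ideas.

    votes maps a member's name to the set of idea titles they approve.
    Returns (idea, approval_count) pairs, most-approved first.
    """
    counts = Counter()
    for approved in votes.values():
        counts.update(approved)
    return counts.most_common()

# A hypothetical meeting with four members and three proposed ideas.
votes = {
    "facilitator": {"cache layer", "batch rewrite"},
    "member_a": {"cache layer"},
    "member_b": {"cache layer", "new index"},
    "member_c": {"batch rewrite"},
}
ranked = approval_tally(votes)
# "cache layer" leads with three approvals; "new index" received only
# one and would likely be set aside without further discussion.
```

Because each member can approve any number of ideas at no extra cost, the tally surfaces broad support efficiently, and ideas with zero approvals simply drop out of the discussion.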

In summary, this is not a definitive guide, but a set of suggestions for a better design meeting structure. Other structures are possible. By making this one explicit, new designs can build off of it.

Mindset of the New Engineer

What is the intended resultant mindset of this “Open Software Engineering Curriculum”? We will call it ‘Systemically Sustainable Innovation’ instead of disruption. We will develop it by examining two works: “A Whole New Engineer” (WNE) and “Rethinking Engineering Education: The CDIO Approach” (REE), and showing how ‘Crystallized Engagement’ is its formative paradigm.

Take the example of the now-obsolete bionic eye implant “Second Sight”, made for people without vision. The implant stimulated the patient’s retina with electric pulses driven by a separate low-resolution camera feed, creating a basic sensation of vision. Eventually the company went bankrupt, and as devices failed there was no possibility of replacement parts or support. Patients were left with non-functioning eye implants that complicated future medical procedures. Due to the small market size and high specialization, a secondary manufacturing process would not be financially viable, even if the designs were released. This illustrates several different aspects of ‘systemic sustainability’: the long-term technical reliability of the device was never proven; as an industrial process, low-volume production and support could not achieve economies of scale; and the company’s financial viability was tenuous because of a smaller-than-expected market and high costs of sales, regulatory compliance, and rehabilitation after implanting the device.

As a new product, it created a new dependence, which in turn created a new responsibility of care that could not be sustained as a system. While engineers should have freedom to innovate, they must also be mindful of the long-term implications of any new technology, which starts with its specific sustainability. Engineering education ought to foster this mindset through direct practice, which is the intention of the ‘Crystallized Engagement’ approach.

This formulation came, in part, from reflection on two recent works on engineering pedagogy. “A Whole New Engineer” by Goldberg and Somerville presents the story of Olin College and the University of Illinois’ iFoundry engineering program. Through experiential learning (Do-Learn), an emphasis on teamwork, and a focus on the joy of building, these institutions and the “Big Beacon” movement hope to bring about a cultural change in engineering education. We will contrast this with “Rethinking Engineering Education: The CDIO Approach” by Crawley et al., which presents a systematic consideration of each step of the engineering process (in their account) – Conceive, Design, Implement, Operate – and describes how to integrate them into the core content of an engineering program. REE also seeks to weave professional skill development into these courses as a positive-sum interaction instead of a time trade-off (p. 33). WNE is a vision statement for a different kind of culture in pedagogy: one that replaces weed-out thinking and competition between students with collaboration and personal development. It isn’t a how-to guide. REE is a bit narrower in its self-conception and isn’t tied to specific institutions in the same way that WNE is. It goes deeper into the methods of program development and evaluation. Both have theories of institutional change, and in particular of how to move from courses based on Direct Instruction to a problem-based curriculum with projects integrated from the beginning of coursework. Neither acknowledges the other, despite many similarities and likely opportunities for the authors’ social networks to overlap.

But more importantly, neither gets the full life cycle of engineering right, which leaves students unprepared to think about systemic sustainability. As described in the previous essay, the ‘Do-Learn’ or ‘Structured Exploration’ approach to pedagogy allows technical problems to drive learning of the core engineering content. From there, projects give the opportunity to quickly plan and build prototypes that demonstrate the concepts presented within the courses. The CDIO approach, on the other hand, places a stronger emphasis on methodical Design after the initial “Conceive” phase, and also includes the “Operate” step. Operating a new product for an extended period of time naturally leads to thinking about reliability in a way that rapid prototyping does not. In “Rethinking Engineering Education for the 21st Century” by Richard Miller, Olin’s first president, ‘sustainability’ is listed as one of the key modern challenges, but it does not appear among the nine core competencies presented for engineering, and long-term operation is nowhere mentioned as an important part of the curriculum. Olin’s “Do-Learn” and project-based courses have what we would call Explore, Learn, Conceive, and Implement as the core steps. New problems or application areas often require software engineers in particular to learn about new domains even within the scope of a single project, which means that they need intentional practice of Exploration and Learning at the beginning of any new venture. Altogether, our proposal is a union of these two approaches into the complete “ELCDIO” cycle within Crystallized Engagement. This full project cycle will take more time than the traditional project framing, but we believe it will pay off in the resultant change in mindset.

From here, we will examine the different kinds of sustainability. Engineers must first learn to get systems to work reliably, which we can call “Technical Sustainability”. There are several important obstacles to cultivation of this mindset within the curriculum. Reliability doesn’t carry the same appeal as disruptive innovation, and often doesn’t require the same level of theoretical depth that professors naturally want to impart in their curricula. When professors go to industry for technical consulting, they are usually involved at a low level of Technical Readiness because that is their comparative advantage and where their expertise is actually needed, and rarely in the reliability engineering of existing products. This can give them a skewed understanding of the future practice of engineering for most of their students. Or, they can discount reliability as something that can be learned on the job and not worth core curriculum time.

Further, the desire to cover a large body of theoretical knowledge leads to a mindset and habit of speed over quality. In school, a score of 95% is typically an A – the highest grade. In software, getting something 95% right means that the code doesn’t build, 97% right: it doesn’t pass unit tests, 98%: doesn’t pass integration tests, 99%: doesn’t deploy, 99.9%: you or someone on your team gets paged at 3 AM weeks later for an untested edge case that renders your system non-functional until you roll back the change or deploy a hot fix. In other domains of engineering, there may be design margins (for example, in load ratings of structures) to allow for error, but large margins are materially expensive. In practical engineering, you and your team need to re-do things until you get them right and not merely move on to the next project. Project-based learning often turns into making something technically interesting that barely works for the demo and then discarding it. The high-coverage low-repetition approach of most programs is fundamentally opposed to a mindset of reliability. This is not to discount rapid prototyping as a valuable skill, but it needs to be balanced with taking a product to a steady state at least a few times within the curriculum. Such ventures lead students to a better appreciation of the value of simplicity because of its alignment with reliability.

“Engineering begins and ends with people” was a repeated phrase around Olin and in its guiding documents. It’s true – but in common engineering practice its primary sense is different from what the authors likely intended, which brings us to the second kind of sustainability: organizational. Almost all engineering happens in organizations where the first kind of ‘centering engineering on people’ is being a good team player. While engineering projects should of course be ordered toward the benefit of the end user and society, the day-to-day practice is learning from previous efforts, training new people, evaluating and extending experiments that other people did, and presenting new designs to others. Organizational sustainability, as an objective, comes from the need to preserve this system of interactions between engineers to transmit theoretical and practical knowledge through time. Through most of the history of technical disciplines, apprenticeships within guilds transmitted this knowledge by active application. As the mathematical and scientific formalism of technique increased, engineers needed to learn more before this practical application, and so the schooling prior to work lengthened. Alongside this, the professors in ‘polytechnic’ schools spent most of their lives in academia, not in practice, which predisposed them to think primarily in terms of theory and mathematical design. To repeat: when these professors did have industrial engagements, they were at a low level of technological maturity, so professors did not commonly think of sustainment.

The gap between the mindsets of mathematical formalism and practical application within an organization creates the shock that many software engineers have when entering industry, especially PhDs. In a practical setting, software is (1) a high-level user interface to the processing unit’s instruction set (getting the computer to do what you want in an efficient manner) and (2) a communication mechanism between engineers. To simplify things, “Computer Science” deals with what is possible with computers, while Software Engineering cares about doing things efficiently and as part of an organization. Students might have a ‘hacker’s mindset’ (scrappy builder, not infiltrator) and know about getting things to work with few resources, but almost always require a great deal of training on how to write code that is readable and maintainable by others. An engineering program ought to at least introduce these ideas so that students can understand the multiple meanings of “engineering beginning and ending with people”.

Organizational sustainability includes the transmission of practical knowledge through time, and between generations. We can look at one example of a failure of such transmission. Fogbank was a secret material used in the production of nuclear weapons. Manufacturing processes were discontinued in the mid-1990s, and when engineers tried to restart them in 2000, they could not replicate the original process. Most of the staff involved in the prior production were gone and the equipment available was similar but not the same. Eventually, they recognized that material impurities in the original process were critical to its success and were able to recreate it in 2008. Because it was a secretive and specialized process, they could not rely on well-established or common knowledge. Knowledge transmission is rarely this challenging, but this example illustrates how products are the output of engineering organizations that are effectively living systems.

The remaining kinds of sustainability are farther from the day-to-day practice of engineering but are worth attempting to integrate into the course of education. “Industrial sustainability” relates to management of systemic risks such as low volume production, using specialized inputs with few providers instead of commodity components, geographically distributed supply chains, and having a small number of consumers. Commercial enterprises all want to create distinct products without comparable competitors, but these same enterprises and private consumers want to pay commodity prices in a competitive market for what they buy. Navigating the gradual commodification of products through time is needed to financially sustain any business. Financial sustainability also includes models of return on capital investment, accurately assessing non-recurring engineering versus continuing support costs, and the general management of a business. Finally, environmental sustainability needs no introduction here but should still be included in the curriculum.

So how can an engineering curriculum promote the proposed mindset of systemic sustainability? (1) Ownership of a single product through time while progressively adding features to it to increase technical depth, (2) habit formation through working with users over extended periods to support a product, and (3) explicit assignment of professional competencies to courses. These goals come at the cost of the highly modular curricular structure in which courses depend on one another only through theoretical knowledge.

Because “Operate” is the most foreign part of the ELCDIO cycle to typical pedagogy, we need to spell out what it could look like. Imagine a simple “weather station” that monitors wind speed, rain, humidity, sunlight, etc. and is physically deployed. To put together such a weather station requires integration of microcontrollers and basic circuits, as well as simple mechanical design to enclose it and make it resistant to the elements. The physical system must be maintained through time, over months or even years, and can be progressively improved in response to problems. Once it is deployed as a system and reporting data, the data stream can be processed and uploaded to the cloud. Visualizations can be built on the data stream within a simple website. The data set presents opportunities for analysis and predictive modeling. Multiple stations can communicate as a mesh network. Starting from even a simple measurement device, there are many avenues for problem-based technical depth within software engineering across the product life cycle. The practice of physical and software maintenance within this project helps to prevent a mindset in which “everyone wants to invent the future, but no one wants to be on the future’s on-call rotation”. All of the technical features are built upon the continuous proper functioning of this device and data pipeline.
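To make the software side of this concrete, here is a minimal sketch of such a station’s sampling loop in Python, under stated assumptions: `read_sensors` is a hypothetical stand-in for the real microcontroller interface, and the upload step is left pluggable rather than tied to any particular cloud service:

```python
import json
import time
from datetime import datetime, timezone

def read_sensors():
    """Hypothetical hardware interface: a real station would query the
    microcontroller's sensor channels here instead of returning constants."""
    return {"wind_speed_mps": 3.2, "humidity_pct": 61.0, "rain_mm": 0.0}

def sample(station_id):
    """Take one timestamped reading, ready for local logging or upload."""
    reading = read_sensors()
    reading["station_id"] = station_id
    reading["timestamp"] = datetime.now(timezone.utc).isoformat()
    return reading

def run(station_id, period_s=60.0, upload=print):
    """Main loop: sample, serialize, hand off to the upload callback.
    A failed upload is reported and retried on the next cycle rather than
    crashing the station -- the 'Operate' mindset in miniature."""
    while True:
        try:
            upload(json.dumps(sample(station_id)))
        except Exception as exc:
            print(f"upload failed, will retry next cycle: {exc}")
        time.sleep(period_s)
```

Everything downstream – the cloud pipeline, the visualizations, the predictive models, the mesh network – depends on this loop running unattended for months, which is exactly the continuity of operation the “Operate” step is meant to teach.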

Students can learn to care about the long-term effects and sustainability of products for the people using them simply by working with those users over longer periods of time. Olin’s “Engineering for Humanity” is an introductory design course that pairs students with seniors and has them try to solve problems the seniors face, often centered on negotiating the physical environment. Working with seniors in this way has many advantages: physical objects are often not easy for them to use, so there are plenty of opportunities for assistance. Solutions can be inexpensive. Seniors are often overlooked, generally have a lot of free time, and are happy to talk to young people who want to help them, so they are an ideal user group. By extending this engagement beyond a semester-long course and providing long-term support for these projects, students can gain an appreciation for lasting impacts on users as well as for technical sustainability (again, from the value of simplicity).

To address the other major domain, organizational sustainability, students should work in teams, write and defend documentation, and conduct detailed design reviews with other students. While many professors recognize teamwork and technical communication as important, they don’t devote enough effort to cultivating them methodically. Some seem to believe that given the opportunity, most students will naturally develop team skills. In practice, it is a skill that can be learned by doing but accelerated by complementary instruction. Similarly, the “easy way out” of including technical communication in the curriculum is to have oral presentations of projects at the end of the semester, assessed with a rubric. But this does not match professional practice, and rubrics typically do not offer much formative evaluation beyond directing students to develop some aspect of their presentation skills. To prepare students for organizational sustainability, they must write reviews of designs and documentation and respond to feedback on their own. If resources allow, professors should evaluate the quality of these written reviews. Or, other students can assist by providing meta-commentary. The challenge of defending one’s designs to one’s peers and reviewing others’ is the core of industrial practice. To achieve Crystallized Engagement, the curriculum should explicitly assign these competencies to specific courses.

Finally, note that not every project needs to have the full ELCDIO cycle, but each step should be included multiple times and in conjunction with other steps. Some projects and courses will focus on systems integration in teams, rather than increasing theoretical depth. Detailed design reviews and long-term support should begin early to plant the seed of the comprehensive CE mindset.

The 80/20 of Team Cohesion

Olin excels at creating many opportunities for group projects from the very beginning of the curriculum and in extracurricular teams.  Within those group projects, the Olin faculty attempt to impart some skills around team conflict resolution, proactive reflection on work styles, and advice on how to give feedback.  I found much of it to be valuable, but there is no compendium of this advice to my knowledge.  It was often delivered in lectures, sometimes in videos, and also in direct guidance given to teams.  Much of it could be written down to advance the broader project of innovation in engineering education, but also to guide practicing engineers today who did not receive the same kind of instruction.  Here I hope to describe what I believe are the simple ways to solve most team cohesion problems, with a focus on setting up good structures to not merely prevent conflict but promote growth.

How do we not merely resolve team conflicts or try to prevent them, but promote team cohesion?  Here I try to present a few simple ways to improve the fundamentals of cooperation within engineering teams, though I believe much of this is transferable to any professional setting.

  1. Respect: This is the foundation for professional team cohesion.  Most people already have a sense of what this is, so I will describe the single most important attitude that promotes it.
    1. The Learning Mindset: Engineering organizations are learning organizations.  Each member is learning, as is the institution as a whole.  From the newest member to the most senior expert, everyone must constantly grow in technical and interpersonal skills.
      1. This perspective teaches that everyone has the potential for contribution, even if they currently lack certain skills.
      2. It instills humility.  Everyone must be aware that there are still things to learn as individuals, that the institution does not have everything right, and that it may have to adapt to change.
      3. It promotes collaboration, because the best way to learn is often from the practice and expertise of others.
  2. Begin with the End in Mind: The organization must begin with its final result in mind, which might be the products delivered to its customers or learning among its students.  It also must focus on its “end” in the sense of its purpose, which could be to “organize the world’s information” or “innovate in engineering education”.  Within Amazon, “being the Earth’s most customer-centric company” translates this into the principle of “Customer Obsession”.
    1. Work Backwards: Teams accomplish this by starting from values, such as safety, customer satisfaction, frugality, or environmental impact.  Then, by deriving evaluation criteria from them and means of experimentation or assessment for different solutions.  Finally, by generating ideas for solving these problems and working together to develop them.  This ordering of values, criteria, and designs directs everyone’s attention to the common purpose first, and avoids having “solutions (such as certain technologies) in search of a problem”.  Solutions are chosen based on their fulfillment of the institutional goal and not the standing of their proposers.  See the “Generative-Evaluative Design Meeting Style” for how to do this in detail.
    2. Align Individual Success with Team Success:
      1. In a commercial setting, everyone’s goal is to deliver value for the customer.  Create incentive structures that reward achievement of this common goal along with success in individual contribution.  Contributors who did more should be recognized, rewarded, eventually given more responsibility, and should mentor others; but success of the individual is not possible without success of the whole.
      2. In an educational setting, individual learning goals should be aligned with completion of the project objectives.
    3. Minimize Assignment of Blame: Suppose a bug in the code gets to production.  The focus should not be on assigning blame to the author, but assessing the processes that allowed it to happen.  Was there a code review done?  Was there enough time to do the code review?  Was it perfunctory, or were honest efforts applied to it?  Were there unit tests for the code in question?  Were there integration tests?  Were those tests insufficiently representative of real cases?  Was the task clear?  Was there sufficient detail in the design document?  In general, the goal is to find a process or “mechanism” that could have prevented the problem, and if there were already mechanisms in place but they were not followed, then the appropriate action (depending on the severity of the issue) is to reemphasize training or documentation for this process.  Engineering organizations are learning organizations, and it is expected that sometimes things go wrong.  The assignment of blame to a specific person is typically unproductive and does not advance the common goal.
  3. Professional Development: The organization must proactively create opportunities for professional development of all of its members, and align that development with the objectives of the whole.
    1. Regularly state and review development goals:
      1. In school, at the beginning of each project, and at least every semester for extended projects, have everyone explicitly state their technical and professional development goals.  For example, on a robotics project, one person might want to learn about network communication protocols and scrum management.  Another might want to learn about end effectors and improve in technical writing.  When reviewing the distribution of work among the team, make sure that some of each person’s goals are fulfilled.  It might not be possible to accommodate all development goals within the confines of the project, but given that learning is the major objective within school, it should be prioritized.  Further, team members will contribute more effectively when they are motivated and their work is aligned with their learning goals.  Balance drawing from the prior strengths of each team member with presenting opportunities for developing new skills.  Periodically check in as a group for progress towards these learning goals and allow for updates to them.
      2. In industry, there is necessarily a stronger emphasis on utilizing each member’s current expertise to meet the needs of the business.  There are fewer clean delineations between project cycles than in a school schedule.  For these reasons, the team and especially the manager must intentionally review opportunities for professional growth that align with delivering results for the customer and with each member’s development goals.  Emergent business demands will naturally interrupt the intended flow, so the manager and member should regularly work together to adapt the development plan.  All members should be proactive in documenting their goals, and in identifying opportunities for growth.  However, the team should not artificially create tasks just because someone wants to learn something new.  For example, suppose someone wants to use machine learning in his work.  It would invert the pattern of “working backwards” to try to use machine learning in a project just for its own sake, and would lead to a suboptimal design if machine learning wasn’t actually needed.  Instead, that member should review problems that the business faces to find ones where machine learning would provide value.  His manager should help him connect with other teams to learn more about potential opportunities.  In general, team members should have at least quarterly reviews of overall career trajectory and growth plans.
      3. A regular and formal review process ensures that everyone’s development goals are heard, which promotes common feeling within the organization.  Publicly stating these goals also allows others to give advice on how to pursue them.
    2. Growth Mindset Over Scarcity Mindset:  Everyone (or nearly everyone) wants to learn in school and grow in his or her career.  The team will cohere and succeed when everyone believes that opportunities for development will increase with the success of the organization, and that opportunities are not scarce.  Following from the previous example, as long as there are new ways to serve the customer or learn new skills (which there always are), opportunities naturally abound.  If instead the solution space is constrained and organizational processes cannot be revised, opportunities will seem limited, which can lead to a zero-sum perspective.  Focusing on the end goal and not the internal current state will help to instill the collective growth mindset.
    3. Distribute work among the team by type, urgency, and tediousness: There will always be tedious tasks that are too heterogeneous to automate, or fires that need to be quenched but get little recognition, and these tasks should be spread among the team intermixed with the main technical challenges so that all members are given opportunity for growth.  Not distributing urgent work leads to disproportionately fatiguing certain members.  Failing to spread the tedious work leads to boredom.  Without conscientious review of work assignments, these negative effects can compound.  In a professional setting, the same people may work together for years, so patterns of interaction can settle with certain people taking on specific kinds of work.  Team members must be intentional about not developing silos of expertise so that many different perspectives can be applied to the same problem space and the organization can be robust to turnover.
  4. Clarity of Commitments
    1. State commitments at the beginning of each project, periodically during the project, and as new events arise.
      1. Use a project management and work tracking system to make it clear what everyone is working on, and who owns each task.  Regularly update the status of tasks.  A regular scrum meeting fulfills this.
      2. Promptly communicate external demands such as family responsibilities so that the team can be mindful of what each person is experiencing and redistribute the work as needed.
      3. In school, at the beginning of a team project, everyone should state their course load, expected extracurricular commitments, and family responsibilities.  All members should also state if they want to make the project their focus for the semester.  Is their aim a job well-done, or do they want to go above and beyond what is required?  Explicit acknowledgement of what everyone is signing up for at the beginning facilitates common understanding and helps to manage the workload through the project.  For example, if someone expects that she might have external commitments arise through the course of the project, she and the team might plan to complete the work early to allow for more flexibility in the schedule.
      4. Management of commitments in industry follows a similar pattern with the addition of long-running ownership of products and services.  The creator of some product naturally becomes the subject-matter expert, and if he does not document it and train others on how to support it, he may have an unstated and irregular commitment to teaching others about it.  Answering questions might not rise to the level of an explicitly acknowledged task within the work tracking system, which can undermine the clarity of commitments.   To maintain this clarity, avoid repetition, and propagate knowledge, the team should have common documentation of responses to support questions.  Depending on the intensity of the inquiry burden, it may be appropriate to have a rotation among the engineers for answering questions each week and improving the documentation.
    2. Unity of ownership:
      1. Every task has one owner.  It must be clear who is responsible for each task.  If that owner needs help, it is his responsibility to ask for it, subdivide the work appropriately, seek guidance, or have someone more suited to the task take it.  Such tasks range from the smallest code submission to the delivery of a contract to a customer.
      2. Single-threaded owners: For each solution-level deliverable, such as a product that enters a new market, there should be a “single-threaded owner” who works exclusively on it and is ultimately responsible for the product’s performance.  Not every task or program will be large enough to require a single-threaded owner.
      3. Responsibility of routing: Every problem requires an owner, and it is the team’s shared responsibility to route each problem to its appropriate owner.  Becoming aware of a problem does not make someone its owner, but he or she must notify the responsible team or party and communicate all known relevant details.  Complete communication makes the handoff of responsibility from the person who noticed the problem to its owner clear.
  5. A Well-Ordered Sense of Duty
    1. How do you view your work?  Is it merely a burden?  Is it a source of identity or validation?  What portion of your life does it take up?  From the perspective of managers and executives, how do you view your employees and reports?  Are they numbers on a spreadsheet, resources to be deployed, or partners working towards some goal?  What do you expect from them in terms of commitment?  Is there common understanding of that commitment level?
    2. Some say that they want their employees to be like “missionaries not mercenaries”, but unless their work truly has a higher purpose, this is inappropriate.  Most people will have necessity as their primary motivation for work, not mission, and it is unreasonable to expect the kind of commitment associated with a higher calling to, for example, deliver food or facilitate online payments.  Doing your job well is dignified and honorable, whatever it may be, but for most people it does not deserve the zeal of a missionary.  Also see Rerum Novarum.
    3. It is a matter of duty to society and ultimately to God to use one’s talents well (Matthew 25:14–30).  This duty must be “well-ordered” in that it should neither be shirked nor become all-consuming and the sole source of one’s identity.  It must be understood as a contribution to the common good and participation in the goodness of Creation.
    4. This is admittedly dissimilar to the rest of the “simple solutions” in that it cannot be accomplished with better communication or meeting styles, and for some people it entails an entirely different worldview, but once accepted it is simple to apply.  This sense of duty points the work of the team towards its proper end.