Process
After Session 6, we further developed the first physical MVP of our website wireframing playground based on the feedback we received. Because we already had a working proof of concept at that point, we were slightly ahead of schedule and could use this assignment to refine the idea rather than only getting the basic concept running. The main challenge we were addressing was how to make website prototyping more hands-on and accessible by letting users build a webpage physically first and only then translate it into a digital design. Our first iteration therefore combined a physical setup with a digital system: an A4 base sheet with ArUco markers in the corners and a set of separate UI cards that could be arranged on top of it and scanned with a phone.
The physical MVP was improved in several ways. We laminated the paper UI pieces so they became more durable and reusable, which made the kit better suited for repeated handling and rearranging (see Figure 1). We also bought whiteboard markers so users could write on blank cards themselves. This was important because it meant the prototype was not limited to a fixed collection of prepared elements. Users could still start with the existing cards, but they could also add their own text or interface ideas while working. That made the prototype more open-ended and more supportive of experimentation. Instead of only showing a concept, we were building a material setup that invited people to try, move, adapt, and remake.
Tycho and I mainly worked on the programming side of the system and went through several iterations to improve both usability and reliability. We enabled full camera resolution, added a dropdown menu for camera selection (see Figure 2), implemented a flashlight toggle for dark environments (see Figures 3 and 4), and made it possible to upload a photo from the file system or photo library (see Figure 5). We also improved the mobile interface so it better fit a phone layout in terms of colours, spacing, and overall appearance. In addition, I tested different libraries to make the application work well across browsers and on both macOS and Windows. These changes were not just cosmetic; they were necessary to make the prototype stable enough to function as a real first iteration rather than a fragile demo.
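To illustrate the camera-selection step, the sketch below shows how the dropdown could be populated from the browser's `navigator.mediaDevices.enumerateDevices()` result and how a rear-facing camera might be preferred for scanning the sheet. The helper name `pickRearCamera` and the label heuristic are our own illustration, not code from the actual prototype; real device labels vary by browser and platform.

```javascript
// Hypothetical helper: given the array returned by
// navigator.mediaDevices.enumerateDevices(), keep only the video inputs
// and prefer a rear-facing camera, which is the natural choice for
// photographing the A4 sheet lying on a table.
function pickRearCamera(devices) {
  const cams = devices.filter((d) => d.kind === "videoinput");
  if (cams.length === 0) return null;
  // Rear cameras commonly carry "back", "rear", or "environment" in their
  // label on mobile browsers (labels are only exposed after permission is granted).
  const rear = cams.find((d) => /back|rear|environment/i.test(d.label));
  return rear ?? cams[0]; // fall back to the first available camera
}
```

In the real application the chosen `deviceId` would then be passed to `getUserMedia` as a video constraint; the remaining cameras populate the dropdown so the user can still override the default.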
In this iteration, the mobile view became the main interface for individual users. This is the view in which a user takes a photo of the physical wireframe and then adjusts the detected UI cards by changing colours, images, text, and formatting, such as bold or italic (see Figure 6). The PC view had a different role. It was meant for a general screen or beamer, so that the resulting designs could be shown to the whole group and discussed with others. This distinction made the system more useful in practice because the phone supported individual creation and editing, while the larger screen supported collective viewing and reflection.
The most important programming improvement was the ArUco-based translation from the physical wireframe to the digital webpage. In the earlier version, the system could already detect markers, but in this iteration we improved the visualisation so that the UI cards were not only recognised as separate elements, but were also placed in the correct position on the digital page. We also improved the detection of card rotation by giving each of the four corners its own distinct ArUco marker (see Figure 1), which made it possible to track orientation more accurately. To keep the result stable, we locked the rotation to 90-degree angles whenever the detected rotation was close enough (within ±10 degrees). This meant that the photographed paper composition could be translated much more directly into a usable web design. We also added a second page in the mobile application to support this editing workflow (see Figure 6). Finally, we added a backlog of previous photos, allowing users to revisit, select, and delete earlier iterations (see Figure 7). Together, these functions made the prototype more useful for iterative design, because users could both create and refine their wireframes instead of starting over each time.
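The rotation lock described above can be sketched as a small pure function. The name `snapRotation` and the exact normalisation are our own illustration of the idea, not the prototype's actual code: a detected card angle is snapped to the nearest multiple of 90 degrees when it falls within the ±10-degree tolerance, and otherwise kept as detected.

```javascript
// Snap a detected card rotation (in degrees) to the nearest multiple of 90
// when it is within `toleranceDeg` of that multiple; otherwise return the
// detected angle unchanged. This absorbs small skews from hand-placed cards
// and imperfect photos without forcing genuinely rotated cards upright.
function snapRotation(angleDeg, toleranceDeg = 10) {
  const normalized = ((angleDeg % 360) + 360) % 360;        // map into [0, 360)
  const candidate = Math.round(normalized / 90) * 90;       // nearest 0/90/180/270/360
  const diff = Math.abs(normalized - candidate);
  return diff <= toleranceDeg ? candidate % 360 : normalized;
}
```

For example, a card photographed at 87° would render at a clean 90°, while a card deliberately placed at 45° would keep its rotation.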
At this stage, the scaffolding was still fairly minimal, but it was already present in the design of the materials and interaction itself. The A4 sheet defined the workspace and orientation, the predefined UI cards lowered the threshold for getting started, and the blank laminated cards still left room for users to introduce their own ideas. The editing page in the mobile application also acted as scaffolding, because it allowed users to correct or elaborate on their physical composition after scanning, rather than having to get everything right immediately. In that sense, this first iteration did more than present a goal and a collection of materials. It included a tangible technical building block and an initial form of guidance that helped users move from physical experimentation to digital output.
Figure 1. The laminated paper web interface cards.
Figure 2. Phone view of the home page (1/2).
Figure 3. Phone view of the home page (2/2).
Figure 4. Phone view of the home page with flashlight enabled.
Figure 5. Phone view of the home page with file upload.
Figure 6. Phone view of the edit page (no picture taken yet).
Figure 7. The PC view of the website.