My Changing Role
Throughout the project, my role expanded beyond UX/UI design. I first designed the product’s user experience by translating the researcher’s reflection framework into wireframes and interface flows. I then became the AI product builder, using structured prompts and iterative refinement to guide an AI platform generator in developing a functioning application. Finally, I acted as a video director, using AI video tools to produce the moral dilemma simulation that initiates the reflection process.

Watch the presentation here:
Passcode: lopes
In my previous projects with websites and applications, I would design high-fidelity screens in Figma and either build the site in a low/no-code builder (such as Webflow or Framer) or hand the designs off to a developer to code.
Instead of playing the role of the design executor, I learned to become the conductor.
Instead of building the platform myself, I needed to:
Learn proper prompt engineering for Lovable.
Guide the AI's decisions and actions.
Revise designs and test the platform.
These steps helped me prevent errors and misinterpretations by the AI, resulting in a functional product.

Chat mode (now known as plan mode) is a tool I used to align my vision with the AI's and ensure it understood what I wanted, saving me hours of corrections afterwards.
1. Understanding the Research Sequence
Met with the researcher to understand the goal of the character education simulation.
Reviewed documentation outlining the reflection framework used to guide students through ethical decision-making.
Identified the core objective: build a proof-of-concept platform that simulates moral dilemmas and records user reflection data.
2. Designing the Interface in Figma
Translated the reflection framework into a structured user flow.
Created wireframes in Figma for each step of the simulation process.
Defined the layout, navigation, and reflection prompts users would interact with.
Used these wireframes as the reference for evaluating AI-generated interfaces.
3. Selecting an AI Platform
Researched AI application builders capable of generating full-stack prototypes.
Tested multiple platforms to evaluate reliability and development capabilities.
Compared tools based on stability, flexibility, and database functionality.
Selected Lovable as the primary platform for building the prototype.
4. Prompt Engineering Research
Studied Lovable documentation and tutorials to understand how the AI interprets prompts.
Used Perplexity AI to research prompt engineering strategies.
Experimented with different prompt structures to identify common causes of errors.
Learned that precise, detailed instructions reduce AI assumptions and development issues.
5. Building the Platform in Lovable
Created a comprehensive foundational prompt outlining the application structure.
Defined pages, user roles, data storage requirements, and design rules.
Compared each page with my Figma wireframes to maintain design accuracy.
Produced a functional prototype demonstrating the simulation and reflection process.
UNDERSTANDING THE RESEARCH SEQUENCE
Before designing anything, I studied the reflection framework provided by the researcher.
The framework described how users should move through a structured reflection process after viewing a moral dilemma simulation.
This document became the foundation of the product’s user flow.
DESIGNING THE INTERFACE IN FIGMA

From data and details
From the user-flow sequence the researcher provided, I categorized the data into web terminology, which helped me visualize what each element would look like.

To visual designs
From the documentation and elements I categorized from the user-flow, I designed each page in the design software Figma.
SELECTING AN AI PLATFORM
My client wanted me to use an AI platform builder to create the Christian character education platform. We were deciding between Base44 and Lovable, so I tested both and compared and contrasted them.
After testing and research, here's what I found:
Base44
1. Strong UI generation
2. Inability to fully migrate the platform off of its software
3. Flat credit usage

Lovable
1. Plain initial UI
2. Complete code-exporting ability
3. Varying credit usage
Lovable's ability to fully migrate platforms off of its software was the final determining factor. I communicated that Lovable was the better choice for this project, and the researcher approved.
LEARNING PROMPT ENGINEERING
While testing Base44 and Lovable, I realized that prompting AI wasn't as straightforward as I expected, so I decided that further research on prompting was essential.
To learn how to communicate effectively with the AI system, I practiced these three strategies:
Research
Because AI platform builders are still emerging tools, documentation was limited. I visited Lovable's website and found articles covering best practices, along with videos explaining how their AI interprets prompts.
I also used Perplexity AI to research prompt engineering strategies and best practices.
Lovable had a workshop video that was especially helpful; here's a photo from my research and my note set-up.

Trial and Error
When I began experimenting with prompts, I discovered an important lesson: if a prompt is missing details, the AI fills those gaps with its own assumptions.
Because I expected the AI to understand my vague prompts, I ran into problems such as:
missing features
incorrect page structures
database errors
hours of troubleshooting

Conversation Before Execution
One practice that helped me encounter fewer errors was using chat mode (now known as plan mode) in Lovable, where you can convey your idea without the AI immediately executing it. It gives you time to ensure that you and Lovable are on the same page.
Image from Lovable

BUILDING THE PLATFORM
Now that an AI platform was chosen and prompt research was done, it was time for the fun part.
Using Perplexity AI and the notes I had taken during my research, I devised a firm initial prompt for the simulation platform.
If you're curious what this initial prompt was, here it is:

Role
You are Lovable operating in Chat Mode to plan and scaffold a responsive research app with Supabase, asking clarifying questions and proposing a step-by-step plan before any code or schema changes. Do not modify code or apply migrations until the plan and schema are approved.
Mission
Deliver a production-ready v0 scaffold that is easy to extend: clean folders, typed components, accessible UI, explicit data models, and safe iteration steps that minimize rework.
Context (Product + Audience)
- Product: A simulation-driven app where students work through moral dilemmas and researchers analyze their written responses and behavioral data.
- Primary users: Students completing simulations.
- Secondary users: Researchers analyzing data and exporting results.
- Key behavior: Students progress through guided steps, entering text responses; researchers filter, view, and export results.
Core Features (Scope for v0)
- Students: Watch a short simulation video/animation and respond via sequential steps with “Next” navigation and a persistent progress bar.
- Auto-logging per step: text input, word count, time on page, and step number; autosave on blur, interval, and step navigation.
- Login/sign-up; resume progress if interrupted (per-user session progress).
- Researchers: Dashboard with table view of participant data, filtering by simulation/participant, and CSV export.
- Two roles: student (complete simulations, view own progress) and researcher (view/export participant data).
Information Architecture (Current pages)
- Home: brief description, login/sign-up, start/resume button.
- Step 0 series: 0A Welcome, 0B Simulation explanation, 0C Elements in reflection, 0D Reflection characteristics, 0E Watch simulation.
- Main steps: 1 Identify problem, 2 Working ideas, 3 Evaluate ideas (3.1 Personal experience, 3.2 Scholarly evidence, 3.3 AI reasoning), 4 Decisions, 5 Consider new info, 6 Reflect on process, 7 Post-survey, 8 Finish & thank you.
- Researcher dashboard: participant table, filters, export.
Design System (vibe + tokens)
- Tone: calm, open, faintly playful; engaging without biasing answers.
- Base: white backgrounds with accent color #522398; sans-serif fonts; consistent spacing/typography.
- Accessibility: WCAG AA contrast, visible focus states, keyboard navigation, reduced motion option.
- Persistent progress bar across all simulation steps.
Tech/Platform
- Supabase for Postgres, Auth, Storage, and optional Realtime/Edge Functions; wire the app to Supabase for sign-up/login, session, and data CRUD.
- Responsive app for desktop and mobile; no push notifications.
- Store data securely within Supabase; do not rely on external APIs for v0 (stub AI-only areas).
…
Implementation Notes
- Autosave strategy: onChange debounce + onBlur + on step navigation; include word count and running timer per step.
- Time tracking: start timer on step mount, stop on unmount or navigation, accumulate into time_spent_ms for that step.
- CSV export: server route or Supabase Edge Function to stream CSV from filtered query.
- AI reasoning (3.3): stub the UI now; wiring to an Edge Function with an external model can be added later when keys and policy are approved.
On hold until approval
- Do not apply migrations, modify schema, or change code until the plan and SQL are explicitly approved in this chat.
- After approval, implement in small, reviewable steps, confirming each milestone.
Because the prompt is long, I cut out the middle to shorten it here.
With the amount of detail in this first prompt, the platform developed a firm foundation to build on.
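The auto-logging behavior the prompt describes (word count, time on page, and a debounced autosave that also fires on blur and step navigation) can be sketched in TypeScript. This is a minimal illustration, not Lovable's generated code: `StepLogger` and the `save` callback are hypothetical stand-ins for the real Supabase upsert.

```typescript
// Hypothetical per-step logger sketching the prompt's auto-logging notes:
// word count, accumulated time on page, and a debounced autosave.

type StepRecord = {
  step: number;
  text: string;
  wordCount: number;
  timeSpentMs: number;
};

class StepLogger {
  private text = "";
  private startedAt: number;
  private accumulatedMs = 0;
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private step: number,
    private save: (record: StepRecord) => void, // stand-in for a Supabase upsert
    private debounceMs = 800,
    private now: () => number = Date.now // injectable clock for testing
  ) {
    this.startedAt = this.now(); // timer starts when the step mounts
  }

  // Called on every keystroke; schedules a debounced autosave.
  onChange(text: string): void {
    this.text = text;
    if (this.timer) clearTimeout(this.timer);
    this.timer = setTimeout(() => this.flush(), this.debounceMs);
  }

  // Called on blur or step navigation; saves immediately.
  flush(): void {
    if (this.timer) {
      clearTimeout(this.timer);
      this.timer = null;
    }
    this.accumulatedMs += this.now() - this.startedAt;
    this.startedAt = this.now();
    const trimmed = this.text.trim();
    this.save({
      step: this.step,
      text: this.text,
      wordCount: trimmed === "" ? 0 : trimmed.split(/\s+/).length,
      timeSpentMs: this.accumulatedMs,
    });
  }
}
```

The injectable clock and save callback keep the sketch testable without a database; in the generated app, `flush` would be wired to the blur and navigation events the prompt lists.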
Back-end to Front-end
Before improving the UI of the platform, I tested and improved the user-flow first, making sure buttons led to the right pages and correcting navigation errors.
After that, I iterated on each page, refining it to match the initial wireframes I had created. Here's the first version compared to the tenth version of one of the platform's pages.

Version 1

Version 10
Next, I integrated the established UI properties into the wireframes.
Wireframes aren’t born with good design; that’s why it’s important to continually iterate on every screen until it evolves into a more functional, attractive design.
This is an example of the Dashboard screen design iteration process.
V. 1
V. 2
V. 3
Olivia's user-flow
Commentary:
Olivia opens the app and is greeted and welcomed by Luma. She heads over to the Connections tab, where she can view any devices she has paired with Luma. She frequently visits the Real-time Sleep Adjustment page, where she can control what Luma does if she starts waking up in the middle of the night.
There's an auto-calm response option where Luma automatically responds to Olivia's needs using her connected devices. But if she wants to set anything manually, she can create conditions for her connected devices on the right.
Lauren's user-flow
Commentary:
After an exhausting day, Lauren logs on, opens the Relaxation tab, and starts a breathing exercise. Luma asks her to choose a specific exercise, and once she does, Luma immediately starts the activity. Lauren also likes talking with Glow because it helps her declutter her stressed mind and calm her nervous system before sleep.
Andrew's user-flow
Commentary:
Andrew visits the Sleeping Trends page to review his sleep levels from the night before and checks the Insights tab, which surfaces personal sleep facts. The Trends section shows patterns that Luma has identified, and the AI explains why those patterns exist.
If Andrew wants, he can view any suggestions the AI has for him regarding those trends. The Waking Up page shows the alarms he has set, along with a clock displaying the current time. He can adjust the settings for each of his alarms.
Some slides may be wordless. Keep in mind this presentation is just a visual aid that accompanied my presentation of Luma as a product.
THANK YOU
I appreciate the time you took to check this out. :)
Back to top







