Artifact

Village Builders

Alyx Jam Session

People
Simone Torrey
Adam Rose
Iván Lopez
James Redenbaugh
Summary

Meeting Summary: Alex Chat Interface Design Discussion

High-Level Discussion & Agreements

The team discussed the UI/UX design for the Alex chat interface, focusing on:

  1. Chat Interface Design: Ivan presented a Figma design inspired by Claude's chat interface, incorporating a friendly, conversational feel with messages from the user on the right and responses from Alex on the left.
  2. Agent System Approach: Adam clarified that while the backend uses different "agents" (reflection agent vs. suggestion agent), to the user it should feel like one consistent persona (Alex). The goal is to have Alex first listen and reflect before offering suggestions, rather than jumping straight to advice.
  3. Suggestion Request Mechanism: The team agreed on implementing a suggestion request feature similar to Rosebud's approach—a dropdown or button that appears when appropriate, offering preset prompts like "suggest some ideas" or "offer a different perspective."
  4. Authentication & Session Management: The team discussed implementing user authentication and handling conversation continuity, with a focus on creating a clear flow: login screen → conversation selection (resume or start new) → chat interface.
  5. Future Visualization Features: The team was excited about James's vision for a relationship mapping feature that would visualize relationships and their health, serving as both a guide and a tool for users.

Specific Takeaways

  1. The chat interface will follow the familiar messaging-app pattern, with user messages on the right and Alex's responses on the left
  2. Instead of visually differentiating "agents," the interface will present as one consistent persona (Alex)
  3. The team will implement a suggestion request system with preset prompts that users can select
  4. Authentication will be added with a clear user flow between screens
  5. Buttons in responses (interactive elements) were discussed as a future feature
  6. The team will explore adding a breathing exercise or other grounding activity as part of the onboarding/welcome experience
  7. Ivan will develop three main screens: login, conversation selection, and chat interface

Action Items

  • Ivan: Continue UI development with a focus on the three main screens (login, conversation selection, chat)
  • Adam: Work on backend support for the suggestion system, sharing relevant libraries with Ivan
  • Simone: Share screenshots from various reflection/therapy apps as inspiration
  • James: Schedule a 30-minute check-in for the end of next week
  • Simone: Reach out to Kirill about potential backend AI development support
  • Simone: Share Shasta Nelson's relationship frameworks as inspiration for the future relationship mapping feature

Next Steps

The team will maintain their current workflow rather than establishing regular meetings, with a 30-minute check-in scheduled for the end of next week. They'll continue working on the basic functionality before exploring more advanced features like the relationship mapping visualization.

Initiatives

AI Prototype Integration

Integrating the Alyx prototype into their Webflow website using APIs and custom code.

Meeting Transcript

00:00:00

James Redenbaugh: It was amazing. It was really great.

00:00:03

Adam Rose: This meeting is being recorded.

00:00:05

James Redenbaugh: Other than getting pretty sick at the end and still being pretty sick, it was. It was wonderful.

00:00:13

Adam Rose: Cool. Well, sorry you got sick. Was it just a standard. We used to call it Bali Belly.

00:00:19

James Redenbaugh: Yeah.

00:00:21

Adam Rose: Yeah.

00:00:21

James Redenbaugh: That's the cost of going to Asia, you know.

00:00:25

Adam Rose: Totally worth it though.

00:00:26

James Redenbaugh: Yeah.

00:00:27

Adam Rose: Yeah. Cool.

00:00:30

James Redenbaugh: Yeah. Is Simone. Simone joining us or just. We just got you today.

00:00:36

Adam Rose: Hold on. Are you hearing that?

00:00:40

Iván Lopez: No.

00:00:43

Adam Rose: Hamilton is playing through my headphones. I'm guessing it.

00:00:46

James Redenbaugh: Hamilton.

00:00:48

Adam Rose: Hold on. I hate when this happens. You don't know where it's coming from, you know. Wait. There you go. Spotify decided to start playing in the background. Sorry. Hear anything? Cool. Ivan, are we meeting for the first time? I think so.

00:01:17

Iván Lopez: Have we met?

00:01:19

James Redenbaugh: Good question. Have you guys met?

00:01:21

Adam Rose: I don't. I don't think so.

00:01:23

Iván Lopez: Yeah, well, sorry to bring the bad news. We have.

00:01:27

Adam Rose: No, I thought. Okay, that's what. I was confused because I wasn't recognizing you. But I remember you working on our. On our website.

00:01:35

Iván Lopez: Yeah, the website was like from the first meetings.

00:01:38

Adam Rose: Yeah.

00:01:39

Iván Lopez: When we were settling the final things, then. I think you. You worked mainly with James and Moenjamindi.

00:01:46

Adam Rose: Yeah. Okay. You didn't. Maybe it's the palm trees in the background was throwing me off.

00:01:52

Iván Lopez: I just didn't recognize. The Caribbean vibe is throwing you off?

00:01:58

Adam Rose: There you go. And it's also that the image is this small in the corner of my laptop, so I wasn't sure if it was the same. Ivan.

00:02:10

Iván Lopez: Yeah, nice to see you again too.

00:02:12

Adam Rose: Yeah. Cool. Well, where do you guys want to start? Oh, it's. By the way, Simone is in transit. She had a dentist appointment, but she'll join probably in 10 minutes or so.

00:02:24

James Redenbaugh: Cool. Yeah. And I'm not sure if work was planning on joining or not, but we're. We're good without him. So. Yeah, I think we should start. Iván's already sharing his screen here and he can walk us through the UI that he's working on in Figma. Curious to get your. Your take on that. We can talk about possibilities and take it from there.

00:02:58

Adam Rose: Sure.

00:03:00

Iván Lopez: Okay. So I'll take the lead. Thank you, James. So, as we talked about previously in the messages, I think this is mainly to get things started off. I liked a lot the interface that Claude uses for their chat, so there are some elements that I borrowed from that. I like that Alex has this intention to be more friendly, to be very relatable with the user. So I had these things in mind, like from other messaging apps where you have a different direction for the messages. So I think it would be a good idea to implement something like that. This is something that people are already used to, so it builds on this feeling of friendliness, something they have used already.

00:04:02

Iván Lopez: That's why the different direction here of the messages: the user on the right, and the answers we'll get from Alex on the left. Something that I was very curious about. I don't know how we'll be working with that, the implementation of the agents, if that will be like different chats or will be implemented in the same conversation itself.

00:04:28

Adam Rose: So, yeah, do you want me to speak to that?

00:04:33

Iván Lopez: If you want. Maybe just let me finish this point and then we can go on. I think it would be a good idea, if the agents will be in the same chat, that there are some visual cues we can show, maybe linking some of the brand colors to different types of agents, and we make those changes visually so the user gets an easier or clearer idea of the type of agent they're talking to in the moment, and can still understand, okay, which type of response or agent am I talking to right now. So those were the main things that I had in mind: the UI from Claude, these interfaces from other messaging apps, and the implementation of brand colors for different agents. Now I'm finished, so you can start.

00:05:33

Adam Rose: So go on. Yeah, I'll just comment on the agents thing because we're still experimenting a bunch. In fact, I'm literally just trying to get two agents running at the same time so they can coordinate with each other. I think we're not thinking of it that way. At least to the user, they're talking to Alex. Right. We weren't thinking that this would even necessarily get surfaced. What we're trying to do, though, is make it very clear. When you're talking to the reflection agent, it's really more of a backend paradigm to sort of organize things. Like, it's prompted differently, but they share the same message history and memory and everything. Right. So you don't want it to feel like you're talking to two different people. Right.

00:06:22

Adam Rose: It really needs to feel, but it's just prompted very differently. Not, not from a personality standpoint, but from a. The reflection agent is literally just supposed to listen, echo back what it thinks it's hearing, ask questions. Right. But what we're trying to do is to get it not to go into fix-it mode, which is what, you know, all of these things tend to do. They immediately come back with like 50 suggestions. And, and so we're trying to really make it. It's more from a user experience point of view so that they say, look, okay, I've reflected enough, I've talked enough, I've kind of fleshed it out. Now I want some suggestions. Right? And so that's really where we're thinking. Like, it's like a very. It's. It's almost like a consent button. Like, yes, okay, I'm.

00:07:08

Adam Rose: I'm now ready to hear some feedback on what I could do. Because nobody likes that when you go to talk to somebody and they immediately just give you advice, right? You're like, no, I actually want someone to just listen. Like, I'm just trying to process. Right. I'm not looking for you to fix it. I just want to share. And so that's what we're trying to experiment with the two buttons. But we definitely aren't trying to make the user feel like it's two different. Like, you know, you could imagine like a team and like, oh, I'm going to talk to the advice person now versus the. And that wasn't. At least. We've contemplated that, but I don't think that's what we want to do for now. It's hard enough to figure out one personality; we don't want to figure out three.

00:07:53

Adam Rose: So from a design standpoint, we still need a way to indicate, you know, and a button was maybe the starting point. It could also obviously happen conversationally, like, are you ready to hear some advice or suggestions? But I think we're thinking a button of some kind. It just makes it a very clear signal. There's no nuance. We don't have to sort of interpret from the text or anything. So I don't know if that. In my little mock page, I just put in, you know, you have a send button, you know, right here. It's like the little triangle. I literally had a suggestions button just next to it. But that was just because it was quick and easy. So I don't know. I haven't thought about, you know, what's a good visual way to do that.

00:08:51

Adam Rose: But it's possible we might end up here, like where you're going, if we do want to treat it like a team. But I think for now, we're not thinking that's the approach. It's sort of one, you know, you're talking to Alex. So we just need. But that makes it simpler, at least from a design standpoint. You don't need preferences and all that. We might still want preferences for just the colors that they're seeing. For sure.

00:09:21

Iván Lopez: Yeah. Along those lines, I was. Okay. Because I was curious how that will work on the back end, those different, like, types of answers. Maybe, as you say, it's not needed or necessary to, like, point out, like, different agents.

00:09:40

Adam Rose: Right.

00:09:40

Iván Lopez: So to speak. But I think it could be interesting, at least later in development, to add some visual cues on the type of responses you're getting. For example, if we get a larger bunch of text and, in the same message, part is a response from the reflection and part is advice, just simple cues: like, okay, this part of the message is where Alex is specifically giving you advice on something, and this other part is reflection. But again, maybe it's better with visual cues instead of, as you said, having it be like different people or a team. Yeah, I'm with you on that one.

00:10:33

Adam Rose: Well, so one of the. One of the things I'm trying to play with right now is just even for internal debugging, right? Like, if I'm sending requests in and I'm trying to figure out, like, who's actually responding. Because there's different. There's different ways to coordinate multiple agents on the back end. Like, you can. You can have like a team. And then it sort of coordinates and says, oh, I think you're asking for advice. I'll send this part over to the advice guy. And then, you know, so you can do that. But then you kind of want to understand, like, who responded with what and how does that all come back? You know, how do you even understand it? So maybe for now I can. At least. This is probably a clean way to do it, is in the response.

00:11:14

Adam Rose: We can either add a parameter or something, because it's JSON in the API response. And we could add. We could put an explicit indicator: this response came from, you know, the reflection agent; this response came from the advice agent. I think I'm calling it the suggestion agent now, because advice feels a little heavy or something. But then whether we want to key off that, right, and do different colors or something, it's an option. Then it's okay. So, yeah, I'm still not sure. I don't know if I have an idea in my head of a good way to say, no, give me some advice now.
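
As a sketch of how the front end might consume that indicator, assuming a hypothetical `agent` field in the JSON response (neither the field name nor its values were settled on the call, and the colors are placeholders):

```typescript
// Hypothetical shape of the response Adam describes: the backend tags
// each reply with the agent that produced it.
type AgentKind = "reflection" | "suggestion";

interface AlexResponse {
  message: string;  // the text shown in the chat bubble
  agent: AgentKind; // assumed indicator field; name not final
}

// Key brand colors off the indicator, as Ivan proposed.
const agentColor: Record<AgentKind, string> = {
  reflection: "#6b8cde", // placeholder values, not actual brand colors
  suggestion: "#de9b6b",
};

function bubbleAccent(res: AlexResponse): string {
  return agentColor[res.agent];
}
```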

00:11:58

Iván Lopez: No, it's really okay. Again, this is something that we are still working on. So maybe this is something that we can plan on later for future development, as you said. Right now maybe we should focus on the things that we can really get, like, okay, getting things to work, then polishing from there, and, okay, trying to see how we can implement these ideas of making it easier for the user.

00:12:31

Adam Rose: Yeah.

00:12:32

Iván Lopez: So I'm gonna take it back to design. These are some of the visual cues that I had as reference. Is there something else where maybe you're a little bit more sure about what you would like to see from the interface, or what feedback can you give us on the thing that you are looking at right now on your screen?

00:12:54

Adam Rose: No, I mean, I like the general, you know, idea of it feeling like a dialogue. It's a familiar paradigm. Right. At least I assume that Google does it the same way, where you're on the right and the other person's on the left. That's how Apple does it, anyway. And yeah, I mean, I think it's a pretty straightforward, you know, sort of paradigm. I'm wondering, I guess again, the big thing for now is like, how do we want to add a button or something to say, okay, I'm ready, please give me some advice now. Or some suggestions. Whether that is another button next to the send button or whether we do it as a. You know, I don't know. We need to think about that, like, in a way that's like.

00:13:49

Iván Lopez: Okay, so just to get it very clear, you would like to see visually an option where, for example, me as a user, I'll start to talk. And then another option where, okay, I am ready to get feedback from Alex. That's something you will like to see very clearly on the interface.

00:14:11

Adam Rose: Yeah, so that's rather than, like, you have a. Looks like a dropdown to choose which agent you're talking to. Right. With the message you're going to send. I assume that's a. Yeah, like, you know, right there. Right on top of the box. Yeah. Okay. So some. Rather than picking. I think it's a good question. I guess it's. Is it sort of a, you know, a different send button? Normally you're sending it to the reflection agent. That's kind of your default, who you're talking to. But then it's like, okay, give me some actual feedback, suggestions. And it would just be like three bullet points, you know. You know, it's really more of saying, yeah, I'm actually ready to hear some, like, advice, you know, and so whether, you know, it could be a picker.

00:15:05

Adam Rose: But I think, you know, it's more likely that we want the default to be: you're just talking to the reflection agent, but then you have some explicit. And maybe it's over to the right side so it's not confused with the normal flow, you know. Like, you've got this box, this text box where you're typing. Maybe it could be outside of the box or something to say, you know. I don't know, we could play around with just layouts or something. But.

00:15:34

Iván Lopez: Sorry, I don't mean, like, to make it more complicated on the backend. I think it could be like flags or checkpoints, or, I don't know, if the user sends, let's say, three messages, they get a response from Alex like, okay, this is something that I've been hearing from you. Would you like to receive some feedback right now, or would you like to keep sharing with me the things that are going through your head right now? Again, I understand that this is outside of the visual side, right? This might be more on your end. That's why I said I don't want to make things complicated for you. It's just an idea that crossed my mind.

00:16:19

Adam Rose: Yeah, no, I totally get. Totally get that. And I think that's going to be. That's going to happen naturally. So I guess the question is, like, part of our prompting has to sort of figure out, well, okay, if the conversation gets to that point, right? Or they just say in the text box, like, please give me some suggestions, right? Like, that's another way that they're going to indicate, you know, even if they didn't know they should hit this button or something. So I think we need to play around with it a little bit right now. Like, if you were to hit that right now, I put a new endpoint that's sort of like the suggestion endpoint. It's, you know, ask for suggestions. But what it's doing under the covers, it's using the same.

00:17:07

Adam Rose: It's using the conversation history and what it knows about the user to infer what it thinks it wants suggestions about. We need to be. What the hell? Now? More music. What is happening? Sorry, guys. This is where all of a sudden music is blaring through my headphones.

00:17:34

James Redenbaugh: Simone's here.

00:17:36

Adam Rose: Oh, Simone's here. You guys can't hear this, can you? It's not in the. Coming from the zoom.

00:17:43

Iván Lopez: No.

00:17:45

Adam Rose: Nope. Okay, let me figure out where this is coming from. Hi, Simone. Hold on. There you go. Found it.

00:18:05

Iván Lopez: Hi, Simone.

00:18:15

Adam Rose: You might be muted. So. No, there you are. Thought you were muted.

00:18:23

Simone: Is there a delay or something?

00:18:31

James Redenbaugh: Hi, Simone.

00:18:33

Simone: Hi.

00:18:38

Adam Rose: We were just talking through some of the interactions around the reflect versus ask-for-suggestions or get-advice things, and clarifying that the way Ivan had thought about this was that they're almost like two different agents you're talking to. And I was sort of clarifying, like, we're not. We contemplated that as a possible thing, but for now, we're very much thinking you're talking to Alex. It's one person. It's just a way of separating. We want clear. Like, the reflection agent is just there to listen and ask questions and get you to, you know, just open up, and not fix it. And then you want to give explicit consent when you say, hey, okay, I'm ready for some advice now. Can you actually give me some suggestions? And then Ivan's pointing out as well that, you know, that could also happen naturally.

00:19:38

Adam Rose: Like you could have a button, but someone may also say in the chat, like, hey, can you. What can you suggest? Right, yeah. So we do need to think about. We would want that to work, obviously. So we want to figure out, you know, how does that flow through? Like, you know, I don't think the front end would necessarily determine that you just asked for suggestions, but I think the reflection agent would have to get that and go, oh, they just asked for suggestions. I'll hand it over to the suggestion agent to get a few clear suggestions. So it's sort of a different path.

00:20:17

Adam Rose: One is they hit the button and what I'm doing right now is just sort of looking at the recent history and inferring like, okay, this is what you clearly are asking for suggestions about, but they could also maybe type in the thing and hit suggest. So there's sort of a few different ways that the input could hit us, and we probably just need to get clear on each of those. But for now, it's a separate API endpoint. But you could also pass. So I'm not really thinking it's just an alternate button. Right. And it's like you either send it to this endpoint or that endpoint and you pass in the message. And I think then the back end will figure out how to provide the response.

00:20:59

Iván Lopez: Very quick. If I understood you correctly, isn't it possible from the front end, from the actions of the user, to tell Alex which kind of message you are sending? Because, as we were talking about, like, maybe having two different buttons. I just went real quick into Google to see an interface where we have two buttons. In this case, these are different actions. But it doesn't seem to me so crazy that we can have two different buttons. We need to work on the icons or the direction in the design to simplify: okay, this is me sharing with you, and this other one is expecting a suggestion. And real quick about that, wouldn't it be possible.

00:21:46

Iván Lopez: To send on the API which type of response this is, again, like a message or sharing, being able to define that from the button the user is clicking. Like, okay, I'm sending this string literally, but right now the thing I'm expecting from you is just, like, hear me out. And then the other button would send the instruction of, okay, now I'm ready to receive the feedback. I don't know if I made sense.

00:22:16

Adam Rose: Oh, it made sense. That's exactly what I've been suggesting. So it's one. The default button would just be to send to the reflection agent. If they type and they hit enter or something, they didn't click on that, that should be the default. But there could be a second button, and we could do it a couple of ways. Right now it's just a totally separate URL endpoint. Right. But it could also be a query parameter or something like that. But you're basically saying, here's a modifier, like, this is the type of response I want.
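
A rough sketch of the two wirings Adam contrasts here, with hypothetical endpoint paths and parameter names (the real routes weren't named on the call):

```typescript
// Option A: separate endpoints, one per agent. Typing and hitting enter
// goes to the default (reflection); the second button goes to the other.
async function send(text: string, wantSuggestions: boolean): Promise<unknown> {
  const endpoint = wantSuggestions ? "/api/suggestions" : "/api/chat"; // hypothetical paths
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: text }),
  });
  return res.json();
}

// Option B: one endpoint, with the desired response type passed as a
// query parameter instead.
async function sendWithMode(text: string, mode: "reflect" | "suggest"): Promise<unknown> {
  const res = await fetch(`/api/chat?mode=${mode}`, { // hypothetical parameter
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: text }),
  });
  return res.json();
}
```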

00:22:51

Simone: Can I show another option which is from my. From my Rosebud Reflection app? I don't know if you guys can see this.

00:23:00

Adam Rose: Probably not.

00:23:01

Simone: Probably not.

00:23:04

James Redenbaugh: If you stop sharing your screen, we can see it bigger.

00:23:07

Simone: Oh, but you can also just make your. You could pull your thing over. Okay, I'm just gonna show myself. Okay, so. Okay, so you see here, there's the. The suggest here, and then there's, like, options that pop up. And usually this button is. If I type, the button transforms into, you see, go deeper. Which is basically reflect. But if I don't type, if the last word was from the AI, then I have an option to get suggestions. I don't know how that works in the back, but basically when I say. So here's, like, a few ways to suggest. One of them is, like, suggest some ideas. When I click on that, it's just a prompt, basically.

00:24:07

Adam Rose: Right. It's just sending that in as text back into the same place. But that only shows up, Simone said, if you have not typed anything, like.

00:24:19

Simone: So the button. The button switches. If I haven't typed anything yet, then I have the suggest options.

00:24:28

Adam Rose: Got it. And they're doing that rather than explicitly sending it to a different thing. I think they're just sending that in as text back to the same. And whether they could be keying off that text and doing something totally different, but it's sort of just sent in the chat, which it would totally work too.

00:24:47

Simone: Like, you know, I think it's basically what I. What I imagine it is, that somewhere in their prompt, in their system prompt, they have: if somebody sends in these, respond in the following way.

00:25:03

Adam Rose: Right. Yeah. In fact. And what I think we're trying to play with is it's almost like it's sending you to a totally different prompt. It's like I want you to forget all the normal prompt or some of it might be relevant, but we might build a different. It's really. Right now, the agents are. The only thing that's different about the two is that they're different prompts. Like, they share the same memory, they share the same conversation history. So it's really just about changing the prompt based on what they're looking. You know, you're. You're just listening or you're suggesting. So. Yeah, well, we could do it that way. I mean, if that feels. Because that is one of the things I'm realizing is if you just send in to a separate endpoint that says, give me suggestions, what do I put in the text?

00:25:45

Adam Rose: Like, there needs to be something that the user sent. Right now, if they didn't send anything, the conversation looks kind of funny. It's like either a blank entry for the user or I have to fill something in because Claude complains if you send in a query with no prompt, like no user query. So, like, we have a totally different system prompt, but you don't if we didn't send anything in. So I have to say something like, please give me some suggestions.

00:26:13

Simone: Yeah. So here the options are. Suggest some ideas, help me think through this, offer a different perspective, suggest next steps and analyze further.

00:26:23

Iván Lopez: Yeah. For example, Adam, very quick. I'm thinking, if these are like preset options, we can figure out where we can set, like, a specific prompt to get from the button. Isn't it that we can send that, so that the prompt we're sending Claude is not empty, but the user is not actually, like, giving an input?

00:26:49

Adam Rose: Yeah, the button typing it for them.

00:26:51

Simone: Yeah, that's how it works here. That like the suggest opens these different prompts that then get inputted into the chat window.

00:27:02

Iván Lopez: Yeah. Like, we are still giving it a prompt, but we are getting that from the button they are clicking, not from text they are entering.

00:27:12

Adam Rose: Yeah.

00:27:13

Iván Lopez: Did that make sense?

00:27:14

Simone: Yes.

00:27:15

Adam Rose: Yeah. So maybe a way to think of it is the button is just a convenience way, but if they were to type the exact same text. Right. They should get. It should have the same behavior.

00:27:25

Simone: And it does. Like I've tried it out. I said make some suggestions and it understands that.

00:27:31

Adam Rose: Yeah, I like that, because it solves for the sort of different cases we said. Like, well, if they hit a button, then you get an empty prompt, what do we do, versus just saying, well, no, the button might fill in text, but then we could do different options over time. So it kind of solves it, and it just keeps it all in the same conversation. And I can figure out on the back end how we handle that. Okay, I like that. I don't know what you guys think. I mean, I don't want to completely copy Rosebud, but.

00:28:07

Simone: No, but it helps when, you know, for example, when somebody feels a little stuck and maybe the question didn't quite land so well, then it's like instead of leaving, you have the button you can just push and it's, you know, you can get unstuck that way.

00:28:25

Adam Rose: Yeah, it's very much like the, you know, when you start a brand new chat often there's buttons that are just some, you know, common questions so that you get them unstuck. Like, yeah, you know, it's like blank page syndrome. Like, I don't know, where do I start? Well, here you go. Just click this button. It'll start the conversation.
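
A minimal sketch of the Rosebud-style pattern the team converged on here: the button only types a preset phrase into the normal chat path, so clicking it and typing the same words behave identically. The prompt texts are the ones Simone read out; the send helper and endpoint are assumptions:

```typescript
// Preset prompts surfaced when the input box is empty, as in Rosebud.
const SUGGESTION_PROMPTS: string[] = [
  "Suggest some ideas",
  "Help me think through this",
  "Offer a different perspective",
  "Suggest next steps",
  "Analyze further",
];

// Assumed send helper; everything goes through the one conversation.
async function sendToAlex(text: string): Promise<unknown> {
  const res = await fetch("/api/chat", { // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: text }),
  });
  return res.json();
}

// Clicking a preset just sends its text as a normal user message; the
// backend can key off these predefined phrases to route to the
// suggestion agent, as Adam describes.
function onPresetClick(prompt: string): void {
  void sendToAlex(prompt);
}
```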

00:28:45

James Redenbaugh: Can you. Do you think that we could also achieve buttons in responses from the agent?

00:28:57

Adam Rose: We could. We would have to define some way, like some proprietary way to encode variables or something in the response. But it's just a response. So. Yeah, definitely conceivable. I think that's where we'd want to go. Like, in other words, longer term, if it said suggestions and here's three bullets, you'd want to highlight it in an app and say click on this and turn it into a task. Like, okay, great, awesome suggestion. I'm going to go do that. Yeah, so we'll need to take it and put structure around it, and whether that's a button or whether that's some way that the actual front end can grab onto it.

00:29:36

James Redenbaugh: Yeah. Or it could even, you know, it could ask yes or no questions or give different options, which are ways to support engagement and take it beyond the traditional text interface, make it feel more special. And if the responses could include some kind of variable or if it knew to put buttons into brackets or something like that on the front end, we could take that, turn that into a button and say, you know, if this is clicked, then it sends this response back.

00:30:16

Adam Rose: Yeah, it's a great way of. I think what you're getting at is, like, interactivity. Like, if Alex wants to ask a question, a clarifying question, or say pick from these three options. Right. Rather than having them have to even type anything, if they just click on number one or, you know, that sort of thing, you can give them interactive elements. And then the front end just has to take the text from number one and put it back. Like, the backend doesn't actually have structure; it's text on the back end.

00:30:47

Iván Lopez: So I think it will depend on, like, how complex the JSON can get from the backend. Right. I mean, like, right now we are getting a JSON. It's just a response, but if I understand correctly, it's like an object. So we could maybe already integrate some preset responses from the message the user sent. I don't know. I'm not so used to working with JSON. That's the way I started thinking, because at the end it's an object where we could implement these fields: this is for the message, and these are some of the options. And in that case I see it possible. Okay, we receive the JSON, we identify this field that is responses, and we get three strings or something like that, and then convert those into buttons on the front end.

00:31:43

Adam Rose: Yeah, I think we need to take very specific examples that we'd want to solve for. We don't want to over engineer something, but it's more of like, great. Are there ways that we could simplify the interaction? I think to your point, James, it's like just make it easier to engage. Right. They don't have to type so many words or they don't have to. It's just, you know, we're probably very quickly also going to try to go to more of an actual audio verbal interface. So some of these things, you know, like, good, great idea. We may or may not end up going that way where we need that kind of interaction, but totally get it. But I think we would come up with some really simple things. Like you can just say like option, like an option list, 1, 2, 3.

00:32:29

Adam Rose: And it's just an easier way to Put some structure so that you want to put buttons next to it or something. Cool, cool. All right. But let's start with. I really think that example. Yeah. Maybe it's. Then if it's a click a button, it just drops down a few different text options for suggestions. Right. And we can start with just one or whatever, but couple. But it's just going to send text back into. And I'll. Then I will ignore that. I'm creating a new endpoint and I think we just. I'll key off specific text. Like, we'll have some predefined phrases that are like, oh, they actually want some suggestions that's going to get routed over the suggestion agent.

00:33:23

Simone: And I. Can I show you guys something that's maybe for, I don't know, second or third version? Just some inspiration. This is a different app, which is visually a lot better. Hold on, let me show. Do you see what it's doing? It's doing like in. Yeah. Breathe with me before you start the session. And. And it just keeps going. But, like, you know, it's. The idea is three breaths. And then I. I like that as a, you know, as an inspiration. And then these. These guys look totally different. So they have their, you know, whatever you can do there in, like, cards. And then when you click on it. Oh, yeah, that's when you click on it. Then there's the inhale.

00:34:16

Adam Rose: Yeah.

00:34:18

Simone: And then it's pretty much a normal just chat conversation. But should I be. Should I be sending. I've, like downloaded a bunch of those apps. Should I be sending you guys some screenshots just as inspiration?

00:34:32

Iván Lopez: Sure thing. I was thinking all references or inspiration that you guys have, it's very useful. And then we make it polished, like the flow we want the user to follow. For example, that screen you shared with us where they can breathe in or breathe out or select from a card. Maybe this is something we can work on later, the flow that we want the user to follow, and from there start designing those screens.

00:35:08

Simone: Yeah.

00:35:11

Adam Rose: And I love the idea of like, some. I'll use the word trademark, but like some thing that comes up right when you start. And whether that's an invitation to breathe or, you know, an inspirational piece of art, like something that, you know, that's kind of consistent as well, because we're really trying to, you know, behavior change and build habits and stuff. So I think that could be cool. Well, there's the Alex breath.

00:35:39

Simone: Yeah.

00:35:43

Adam Rose: Cool. Great. So, Ivan, you actually managed to get a response from the API?

00:35:54

Iván Lopez: Yeah, I finally did it.

00:35:56

Adam Rose: Awesome. I'm glad. Do you need anything in terms of. I looked a very little bit into how to integrate Webflow with APIs. Because there's all kinds of ways to generate client-side JavaScript libraries based on your OpenAPI spec, just, like, the schema for the actual API. You need to do a little digging. I mean, I'm happy to help work on that or provide some pointers. I think it's. There's a bunch of libraries out there and I just didn't want to go too far, because I don't know enough about Webflow to know what.

00:36:32

Iván Lopez: Yeah, well, thank you for taking the initiative on that. Thanks. I was talking to James. I think right now, as we are still, like, making tests and seeing how it works, it will be fine if we just get it to work locally, internally, just making the API calls. And then we are thinking of using an integration. It's called Wized. Sorry James, if you can jump in, because I don't know if I'm pronouncing it right.

00:37:08

James Redenbaugh: I don't even know if it's wized or wiced.

00:37:11

Iván Lopez: Okay, well, it's an interface that allows us to work with external APIs. Because the main thing is, like, you know, the security and all that stuff of working with APIs. This interface allows us to work with different APIs in a secure manner.

00:37:28

Adam Rose: Okay.

00:37:28

Iván Lopez: We need to make those integrations very specific. So again, right now that we're just getting started, I think we can work it from Webflow itself to not make it that complicated, and then figure out what the best integration will be to work with the API. Well, that's the approach I had in mind. But again, if you have found something that you think could be very useful, like libraries, please feel free to share it with us and I'll give it a look.

00:38:02

Adam Rose: Yeah, I can send you a pointer. How do you spell this?

00:38:09

Iván Lopez: Wized or wise. It's W, I, Z.

00:38:16

James Redenbaugh: E, D. Yeah.

00:38:20

Adam Rose: Wait. W, I, Z, E, D? Z like zebra.

00:38:25

Iván Lopez: Okay.

00:38:27

Adam Rose: All right. I'll just take a look. I'm just curious about it. Cool. All right. All right. We can also circle back when you guys, you know, if you're just sort of playing around at this point. But I'll send you pointers. There's just a whole bunch of open source libraries that will generate JavaScript code or a React library, like whatever client library you might want.
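
For reference, one widely used option in the family Adam describes is OpenAPI Generator; the sketch below assumes the backend publishes its spec as openapi.json, and the output path and imported names are made up for illustration:

```typescript
// One-time generation step, run in a shell:
//
//   npx @openapitools/openapi-generator-cli generate \
//     -i openapi.json -g typescript-fetch -o ./alex-client
//
// Afterwards the generated client is imported like any other module.
// The names below are illustrative; the real ones come from the spec:
//
//   import { DefaultApi } from "./alex-client";
//   const api = new DefaultApi();
```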

00:38:54

Iván Lopez: Yeah, that's very useful. Right now, as you said, we'll try to keep it simple. From the response that we got from Alex, for example, in the scenario I was thinking, maybe the different agents can intervene for us from there. We're saying that for the moment that's not the case, so we can take simpler steps to make it work, and those little details, okay, just keep them in the back of the mind, to think how we will realize those in the short future, hopefully.

00:39:33

Adam Rose: Cool, cool, sounds good.

00:39:41

Iván Lopez: I think I'm curious about this, in this case, the design or, like, the user flow. Even if we're just beginning, would you like us to have just a page where they can put their response, or would you like to have, like, a previous page, maybe some introduction, and the input where, okay, we just get the input from the user and then make those little transformations where it starts to convert into a conversation?

00:40:19

Simone: You mean like onboarding, Like a few onboarding questions or.

00:40:24

Iván Lopez: Yeah, I'm trying to look for an example because I know my English is not the best.

00:40:33

Simone: When you started.

00:40:37

Iván Lopez: Here in Claude, you get, like, this first screen: good morning, Iván. And then I can add an input, and whatever I input, it leads me now to a screen where it's not all, like, all conversation. That's what I was trying to see or to talk with you guys about, if you would like, for the moment, to at least try to design this onboarding screen, or even just informational.

00:41:07

Adam Rose: It's like, hi, I'm Alex. This is what I'm here for. You know, like that starts a conversation. So what I was going to say: in relatively short order, we're going to want to add authentication. Right. And that's where we would have some notion of, like, you might say, oh, welcome back, Ivan, good to see you, if you just re-logged in or something like that. There isn't right now; it's all one big session on the back end, and we're going to need to figure out, do separate logins get delineated? How do you actually. You don't want it to just be like one long stream of everything. And so there could be, like, time based, like break it into weeks. But there might also then be, okay, well, what were the different conversations during that week?

00:41:55

Adam Rose: And maybe login might be the way to do it, because otherwise it's just sort of time based, like, okay, we haven't talked for over half an hour, I'm going to assume it's a new conversation, but that may not really be true. So again, that's how we'll figure out, like, how do we want to break up the conversations and kind of organize them over time? Because this is going to be long running. Ideally, if it's working well, you know, it's almost a long-running friendship or something. Right.
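
A tiny sketch of that time-based fallback, assuming stored messages carry timestamps (all names here are hypothetical):

```typescript
interface StoredMessage {
  text: string;
  sentAt: number; // epoch milliseconds
}

// The half-hour gap Adam mentions as a possible conversation boundary.
const SESSION_GAP_MS = 30 * 60 * 1000;

// Split one long message stream into conversations wherever the gap
// between consecutive messages exceeds the threshold.
function splitIntoConversations(messages: StoredMessage[]): StoredMessage[][] {
  const conversations: StoredMessage[][] = [];
  let current: StoredMessage[] = [];
  let lastTime = Number.NEGATIVE_INFINITY;
  for (const msg of messages) {
    if (msg.sentAt - lastTime > SESSION_GAP_MS) {
      current = []; // gap exceeded: start a new conversation
      conversations.push(current);
    }
    current.push(msg);
    lastTime = msg.sentAt;
  }
  return conversations;
}
```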

00:42:22

Simone: I think what would help is, like, if your creative brains can, you know, collaborate with us. We're still trying to find, like, what is its personality or, like, what is its identity. Right. So, like, this thing I showed you is a journal. So, like, every time I open it up, it's like I open my journal page, but instead of a blank page I get a greeting, but it's like a new page. And then I can go back into my old entries, my history, and it flows, but sort of like we turn a page. Right. So with this one, what would it be? Yeah. Is it a new conversation? Is it a new chapter? Is.

00:43:06

James Redenbaugh: Yeah, yeah. The feature that most excites me in the pipeline is the relationship mapping. And when that's a reality, I feel like Alex becomes not only a guide, but a kind of.

00:43:37

Adam Rose: Like.

00:43:37

James Redenbaugh: A guide with a crystal ball that can show you visually things that.

00:43:45

Adam Rose: Were.

00:43:46

James Redenbaugh: Previously hard for you to see. And it's a very unique new kind of thing. You know, it's not like a digital journal or a, you know, a smart reminder system. It's. It's an entity with these abilities to understand you and reflect things back to you. So I'm kind of thinking it of it in that way as both a being and a space or a tool or a portal or something.

00:44:27

Adam Rose: Totally.

00:44:28

Simone: I like, I like that relation to the map. And then the guide makes so much sense, right. Then suddenly it becomes.

00:44:35

Adam Rose: Right.

00:44:36

Simone: That makes sense. Yeah.

00:44:38

Adam Rose: And imagine too is like if you were clicking on, oh, here's a visualization of my friends or something, right? I want to click on this one that is undernourished right now, this relationship. And it brings with it the context of that friendship, right? So all of a sudden you're teleported into this. Oh, I'm talking about this friend right now. And it knows all your history that you've shared about that friend. So it's like you could literally like dive into different areas to go work on that. It's not just this open ended, like, hey, what happened today? Reflection. It's like you can get proactive with.

00:45:09

Simone: Your relationships. And that's something Rosebud, my journaling app, is still terrible at. It will sometimes, like, reference the wrong people with the wrong thing, and that's. Still terrible, right? Like, oh, isn't this just like when you talked about Adam? Blah, blah. That was my husband.

00:45:28

Adam Rose: You're confusing my husband with my work husband. Yeah.

00:45:35

Simone: It'S not good.

00:45:38

Adam Rose: Yeah, that's super exciting and I don't think I thought of it quite that way, James, so thanks for that. Like, that's a really cool way to think of it. Like, oh, and you could highlight like, oh, this is orange. Now this relationship is being underfed. Like it, you know, it needs some care. Right?

00:45:55

James Redenbaugh: Yeah. Yeah. And eventually with the voice interface, that'll be really amazing to just talk and then, and see this model evolve over time. And there's not even like I can prompt Claude and get it to create graphics for me in a canvas. But very few people have had a, like a text or voice to persistent model or visual interface experience at all yet. Is very new. But it's very, it's already possible. It's very within reach right now, but really exciting to imagine because there's all these image generators where you put in a prompt and an image comes out. But it has nothing to do with any image that I've made before. Maybe it's like it starts to understand the kind of style. But to have this model, this visual that's persistent, it's the same.

00:47:18

James Redenbaugh: You know, it's going to be there when I open up the app again. But the model will help me update it. I mean, that alone is amazing no matter what we're talking about doing. And it's the perfect use case for something like that. Because if you're trying to do that on your own, like, oh, you know, I can make. I'll draw these connections and stuff like that. Unless you're a super visual and creative person with a deep understanding of psychology, you know, you're not going to get a lot out of that exercise. But having a guide there to help you with that is huge.

00:47:59

Adam Rose: Cool. Where do I buy it? Like, sign me up.

00:48:04

James Redenbaugh: You should talk to these guys.

00:48:13

Adam Rose: Subscribe. Smash that. Subscribe.

00:48:19

Simone: This is really exciting. Thanks for making this connection, James. That's awesome.

00:48:23

Adam Rose: Yeah, sure. Yeah. And it's not a generative artifact. That's the thing is like, we're just doing analysis of the conversation, pulling out tags or keywords or whatever and then putting it into a structured model of the relationship. So like an app can just very directly read it like you would any other. There's actual. Just a traditional web app. It doesn't have to be in that necessarily. It's really the reflection part. That's the part where it's conversational.

00:48:56

Simone: But it will feel like magic.

00:48:58

Adam Rose: It'll feel like magic. Yeah. Yeah.

00:49:03

James Redenbaugh: And that's another thing that makes it better than, you know, I could just go talk to GPT or Claude or PI about my feelings, but there's no concrete model behind it. It's just been trained on a bunch of stuff. And this. Yeah. This feels like a different kind of thing.

00:49:22

Simone: Yeah. I don't know if we shared with you guys, but we managed to get as an advisor an author who's written multiple books on relationships. She's consulting with big companies on, you know, relationships, and she's writing another book right now focusing specifically on people who moved in the last five years, and working together with something called the Chamber of Connection. And they're. They're kind of wanting to be a center to welcome people when they move to a new city, and they're, like, trying to start in multiple cities at the same time. So. And she is giving us basically a book and a video course and, you know, a lot of IP, so these suggestions can be specialized and not just pulled from the web or generic advice.

00:50:23

James Redenbaugh: Cool.

00:50:24

Iván Lopez: Sounds great.

00:50:26

Adam Rose: Awesome. Yeah. Yeah. James, this thing you bring up, like the relationship map, it's almost like a separate thread. Like, you know, cycles permitting, and in terms of our engagement as well, hours permitting or creative units permitting, it would be cool to just start mocking it up, even from a fundraising point of view. Because to me that's one of those light-bulb features. People are going to be like, oh yeah, I see the value here, like, if I had this, my relationships in that picture. Right. With a nice overlay and indicators of where I actually need to do some work. Right. Like that could be a real, you know, even if it's a mock. Right.

00:51:11

Adam Rose: Like a picture that, like I would love to spend some time to just get a visualization of that or start to, you know.

00:51:19

James Redenbaugh: So yeah, you know.

00:51:21

Adam Rose: Well, let's make it a separate thread.

00:51:23

James Redenbaugh: Yeah, yeah.

00:51:24

Adam Rose: Because I don't think we'll be ready to build like to make that work for a while. But to have a visualization of it I think could be really cool and I'd love to throw it in.

00:51:32

James Redenbaugh: Yeah. And that'll help you dream it into being too. You know, just being able to see it. Yeah. Well, let's. Let's keep talking about that and also talking about what. What medium makes the most sense for that, because we could do it as a video. You know, you can showcase it to somebody.

00:51:51

Simone: Yeah.

00:51:52

James Redenbaugh: Or make it more interactive. What were you going to say?

00:51:57

Simone: Well, the. The cool thing about this woman, like, our advisor's work. Shasta Nelson is her name. She has a bunch of frameworks for how you can basically, like, measure, or she asks people to, you know, measure certain dynamics and dimensions of your friendships. And so that already, to me, has, like, map-like qualities. You know, she just has it in different frameworks. But I will send pictures of those to you, because maybe that can start, like, kick off some thoughts and conversations. If we put some of these together, maybe there's suddenly, like, a two or three dimensional map emerging.

00:52:47

James Redenbaugh: Yeah.

00:52:48

Adam Rose: Cool. Yeah. Like, I think a lot of those are, like, the qualities with which you want to assess, you know, a relationship. And that might be what surfaces up, almost like, hey, there's not enough consistency to this relationship, or there's not enough positivity. Right. Like, depending on which of her frameworks we're tapping into, I could totally see that.

00:53:14

Simone: Yeah. She has like different friendship circles and then she has like, what Adam just mentioned is called the triangle, the friendship triangle of positivity, consistency and vulnerability. So that's how you kind of can calibrate your friendships, that triangle.

00:53:34

Adam Rose: Cool.

00:53:34

Simone: And. Yeah, various other things.

00:53:37

Iván Lopez: Yeah. I mean, sorry, guys. This sounds very cool. I would like to take it.

00:53:42

Adam Rose: Back to the reality right now.

00:53:46

Iván Lopez: I was talking about the design.

00:53:48

Adam Rose: Yeah.

00:53:51

Iván Lopez: For example, now that you said it. And I think it's needed. We already talked about the authentication for the user, so I think we'll see that as a screen where they need to log in. I'm thinking about that flow: okay, the first screen is login. Then, whether there will be a screen in the middle, or it will take you directly to, like, a conversation where all the messages are just being pulled up. Because I saw from the API that you can get all messages. So I don't think it will be, at least for the moment, so easy or so comfortable to, okay, I just log in and then it takes me directly to the longest conversation that I had. So I'm thinking maybe a screen will be needed there.

00:54:40

Iván Lopez: I don't know, maybe something like we talked about previously a little bit, like maybe an introduction and onboarding, something like that. Or maybe, like, how are you, user? What would you like to talk about? Et cetera.

00:54:55

Adam Rose: Yeah. Well, the first time they log in when they sign up, there absolutely will eventually be some kind of onboarding flow. Right. Answer a few basic questions. Right. But then maybe it's like now is sort of do you want to resume where you left off or do you want to start a brand new conversation or something? Right? So it's, you populate anything or do you know. And then we obviously want to come up with a way to have those conversations accessible if you want to go back and look at them. But you may not, you know, you. That's another way to solve it. We don't necessarily have to magically figure out, oh, this is. They logged in again. So it's a brand new conversation. Maybe they actually just had to go do something, but they really wanted to.

00:55:33

Adam Rose: You know, we're going to just need to think about some of that. But maybe that's an easy way for now to just say, do I bring back the history of what they were talking about or do I just start with a brand new thing? And a brand new thing could maybe have a little introduction, like, hey, I'm Alex. Here are some sample questions to get started, you know.

00:55:50

Iván Lopez: Yeah, yeah, like that. In that case, two extra screens will be needed. Of course, I'll tweak this one, the conversation, the chat screen. But yeah, I like it. I think it's a little bit clearer, the steps and what each screen does: the login, then the one where you, like, start a new conversation or resume from what we talked about, and then it leads you to the third screen, which is the conversation, the chat. Okay. I like it also because I think until now I didn't have those steps defined. One, two, three, which screens. Okay. I think we can work with that specific flow. Again, if there are any ideas or something you would like to add to that flow now or later, every idea is welcome.

00:56:43

Adam Rose: Right.

00:56:45

James Redenbaugh: Cool.

00:56:46

Adam Rose: Cool. Great. Well, we're off to an exciting start. I don't know if we would want to. Maybe we keep it just as needed at this point. Like if we want to actually get on a call or look at any visuals, we can just, you know, versus like having a standing actual call. I'd be fine either way. Whatever you guys think works best for you. But.

00:57:11

James Redenbaugh: Up to you guys.

00:57:13

Adam Rose: Yeah.

00:57:13

James Redenbaugh: What were you going to say?

00:57:14

Simone: Like having a regular touch point, it just gives everybody a focus point, right. To like have things done and check.

00:57:24

Adam Rose: In maybe, like, every two weeks or something maybe feels like a. Or I'm worried, you know, I think you're going to pretty quickly be in front of where I'm at in terms of, you know, back end stuff. But.

00:57:44

James Redenbaugh: Maybe we could put a like a 30 minute check in at the end of next week.

00:57:52

Adam Rose: Sure.

00:57:55

Simone: Do you, do you guys know any backend AI developers by any chance that you could connect us to have a chat?

00:58:07

James Redenbaugh: Have you talked to Kirill?

00:58:11

Simone: Well, last time I checked, he was full time employed. But.

00:58:18

Iván Lopez: He'S kind of.

00:58:19

James Redenbaugh: I mean, you should talk to him because.

00:58:22

Adam Rose: Okay, he.

00:58:25

Simone: This guy's a genius, Adam. He's a friend of mine.

00:58:29

James Redenbaugh: Yeah, Kirill's amazing. And he knows a ton of people, too. I mean, he's okay, probably more than you need right now, unless he wants to take on a fun project. But last I talked to him, the company he was CTO of has been bought. And he's still working for the company that bought that. But he. He's not nearly as busy as he used to be.

00:59:03

Simone: Okay.

00:59:06

Adam Rose: Okay.

00:59:08

Iván Lopez: There's a chance.

00:59:10

Simone: Yeah, might be. Might be the right timing. All right.

00:59:14

Adam Rose: Yeah.

00:59:19

James Redenbaugh: Yeah, he's who I would. Who I would think of right now.

00:59:23

Simone: Cool. Yeah, I'll connect.

00:59:28

Adam Rose: Cool.

00:59:28

James Redenbaugh: Great. Well, we can follow up on Slack about a time.

00:59:33

Adam Rose: Sure.

00:59:34

James Redenbaugh: Next week.

00:59:35

Adam Rose: Cool.

00:59:36

James Redenbaugh: Okay, guys, great. Good to see you all. We'll be in touch and talk to you later.

00:59:42

Adam Rose: All right, take care.

00:59:44

Iván Lopez: You have a good rest of the week.

00:59:47

Simone: You too.

00:59:48

Adam Rose: Yeah.

00:59:49

James Redenbaugh: See you guys.