Video Summary and Transcription
Failed generative UI talk explanation, exploring beyond text display with React. Vercel's generative UI tool calling. Demo challenges due to Cloudflare, GPT-5 differences. Interactive artifact generation, AI News newsletter, and a personal ChatGPT concept. Unique superpower of developers: modifying UI for needs, tool generation challenges, pair programming concept to understand codebase better, importance of human involvement in coding, and control in developer experience. Negative impact of AI autonomy, illusion of productivity in vibe coding apps. Combining human and AI attention, interactive planning for coding agents, parallelizable collaborative planning. Non-interactive limitations, good interactive planning, the async valley of death analytics. Increased flow disruption, the semi-async valley of death, focus on codebase understanding. Hierarchical code maps for better code understanding and prompt engineering. Interplay between human and AI in code control, enhancing coding ability, and investing in coding skills. Illustrating the constant human-AI interaction and the cost of syncing with AI for different modalities. Discussing the cost of multi-modality sync with AI, encouraging a balanced approach to leveraging AI for productivity. Discussing the importance of user-friendly interfaces and the potential future of generative UI. Discussing the future of user-controlled dashboard creation and the shift of power from designers to users in generative UI development. Discussing the evolution of frameworks from developer-centric to user-centric for creating customizable UIs and the concept of malleable software coined by Geoffrey Litt. Pushing for industry adoption of the Codemaps concept to enhance code quality and readability, exploring its usage with Windsurf and potential new dev tools enabled by a shared language for interacting with code bases. Discussing the impact of project size on AI effectiveness, emphasizing the importance of breaking down problems and utilizing code maps for better code management and context understanding. Discussing the importance of code map guidance and keeping code maps up to date in fast-moving projects through regular refreshing and regeneration based on git commits.
1. Challenges in Generative UI Talk
Failed generative UI talk explanation, exploring beyond text display with React. Vercel's generative UI tool calling. Demo challenges due to Cloudflare, GPT-5 differences. Interactive artifact generation, AI News newsletter, and a personal ChatGPT concept.
So hi everyone, you're here expecting a talk on generative UI. This is not that talk, because I failed. I'll give you a little bit of a reason why I failed. One of the things that really bothers me about generative UI, and the potential that all you React developers have, is that you can actually start having more than just text. We're all here to display more than just text on a website.
One of the things that I think was really interesting that Vercel pioneered is tool calling as generative UI. So it's like, how is the weather in New York? Partially, I think my demo failed because Cloudflare is running the 3.5 model that I was going to show you. But I can still run this on GPT-5, it's just going to be less interesting. And I'll show you a little bit of the differences.
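For readers who want the concrete shape of that pattern, here is a minimal sketch in the style of Vercel's AI SDK RSC API (`streamUI`): when the model decides to call the weather tool, the response is a streamed React component rather than text. `fetchWeather` and `WeatherCard` are hypothetical stand-ins, and the model choice is illustrative.

```tsx
import { streamUI } from 'ai/rsc';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Plain text renders as text; weather questions render as a component.
export async function answer(prompt: string) {
  const result = await streamUI({
    model: openai('gpt-4o'),   // illustrative model choice
    prompt,                    // e.g. "How is the weather in New York?"
    text: ({ content }) => <p>{content}</p>,
    tools: {
      getWeather: {
        description: 'Look up the current weather for a city',
        parameters: z.object({ city: z.string() }),
        generate: async function* ({ city }) {
          yield <p>Checking the weather in {city}…</p>;
          const data = await fetchWeather(city);          // hypothetical helper
          return <WeatherCard city={city} data={data} />; // hypothetical component
        },
      },
    },
  });
  return result.value;
}
```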
Here you have an interactive and visually interesting thing. ChatGPT, Claude, and now Gemini also have various forms of this artifact generation and live coding things. I also have, for the last year or so, been running this little newsletter called AI News. It basically just goes through all the AI discords, Reddit, Twitters, and summarizes all the news every single day. So it's basically just a glorified newsletter. I was wondering, what if I have my own little ChatGPT thing?
2. Empowering Developers with Pair Programming
Unique superpower of developers: modifying UI for needs, tool generation challenges, pair programming concept to understand codebase better, importance of human involvement in coding, control in developer experience.
So each issue now has its own little interface where I can dig into, let's say, what's going on in Modular land. Or if I really care about Cursor, I can jump into Cursor and see what's going on, and also ask questions about it — what is Composer like? — and it can just kind of go straight there. And I think being able to modify your user interface to fit your needs is such a unique superpower that only developers have.
I did... Basically what you end up needing to do is create a tool call that generates other tools and then dynamically calls them. And so this is basically what I got to. But right now, it's failing for a number of reasons that I cannot really debug. The issue, I think, really comes from the stack of abstractions.
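To make the "tool that generates tools" idea concrete, here is a purely hypothetical sketch of the general shape — not the speaker's actual implementation. One built-in tool registers new tool specs into a registry; a dispatcher then routes later model calls to the generated tools by name.

```ts
type ToolSpec = {
  name: string;
  description: string;
  parameters: Record<string, unknown>; // JSON schema, as tool-calling APIs expect
  source: string;                      // implementation source, to be sandboxed
};

const registry = new Map<string, ToolSpec>();

const defineTool = {
  name: 'defineTool',
  description: 'Create a new tool the assistant can call later',
  run(spec: ToolSpec) {
    registry.set(spec.name, spec);
    return `Registered tool: ${spec.name}`;
  },
};

// Every model tool call goes either to defineTool or to a generated tool.
async function dispatch(name: string, args: unknown) {
  if (name === 'defineTool') return defineTool.run(args as ToolSpec);
  const spec = registry.get(name);
  if (!spec) throw new Error(`Unknown tool: ${name}`);
  const impl = new Function('args', spec.source); // sandbox this in practice
  return impl(args);
}
```

Each layer here — code generation, evaluation, schema validation — is a place for the stack of abstractions to fail, which is consistent with why the live demo was hard to debug.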
And so I'm going to bring you to the core of this talk, which is actually what I've been calling pair programming, but not in the way that you think. And so the mission that I've been fighting for recently is just trying to get people to understand their code base more. A lot of people think of pair programming with AI as bringing AI into your workflow, like tabbing and sort of assisting you as a co-pilot, or sort of like a Claude Code terminal agent. But really, I think this is actually relatively obvious, and I think the AI coding industry — a few people out there — are going to actually just solve this in a while.
What we really want is actually to put humans back in the loop. And I think this is the same issue that people have with frameworks and over-abstraction, pre-AI just as post-AI. You have this issue with self-driving, right? I think a lot of the self-driving people want to take away the steering wheel and just want you to sit in a bus that just takes you everywhere. That's good when you're in the happy path. But a lot of the developer experience that I try to preach is also being able to take control when you're in the unhappy path. And a lot of the tools that we make do not allow this kind of taking control. And that's really sad.
3. Challenges of AI Autonomy in Vibe Coding
Importance of control in developer experience, negative impact of AI autonomy, illusion of productivity in vibe coding apps.
So I think one of the champion ways in which people talk about this is the time horizon of autonomy. This is the chart from METR, a non-profit that studies coding agents and the increasing autonomy of AI agents. Over time, people are increasingly bragging about and measuring the long, autonomous runs of these coding agents. But this is really just trying to cut the human out of the workflow.
So Anthropic and Replit are very excited about this. But I think the reality is quite different. I'm just going to select some of these examples, because you might not have come across them, but the volume of evidence is really overwhelming. You can't really explain to your users when a vibe coding service just deletes your production database. You can't really explain to your users when you just have low-quality work. The joke is that two engineers can now create the tech debt of 50 engineers. This has gotten so bad that there are actually entire services around it. Here's a start-up that specializes in fixing your vibe-coded apps. It's called Vibe Code Fixers. It exists because there's so much slop being created.
I can keep going, but really I think there's this illusion of productivity that I really want to emphasize. A lot of people, when they use vibe coding apps — myself included, when I thought I could finish this feature for this conference — feel like there's an extra 20% of productivity. But when METR actually did this study in a randomized controlled trial, they saw that the observed productivity was lower by 20%, not higher. I think that's a really interesting comparison. How is it that you feel so productive, and people report and tweet about their increased productivity, but when you actually hold them to account and watch them in an RCT, they don't actually become more productive? It's because of the spikiness of AI. This is a fantastic chart that I got from Forrest Brazeal. You have all these huge wins from just simple prompting, but then it leaves a lot of gaps, and your understanding of the code that's generated falls behind because you didn't write any of that code. Before AI, you would have this steady progress, but with AI you have all these leaps and bounds forward. Every now and then, depending on where you are and where you checkpoint, you actually have decreases in the sort of productivity that you have. So I think that's the reality, and the difference between all the reports and all the hype versus the reality.
4. Collaborative Interactive Planning in Coding
Combining human and AI attention, interactive planning for coding agents, parallelizable collaborative planning.
The realization I've been working on with coding agents is that human attention plus AI attention is better than either of them alone. It's a simple observation — a truism, attention is all you need — but I think you want to combine the human and the AI together. I want to present three ideas in this talk and then we'll do some Q&A and talk about everything that we've been working on.
The first idea is that you want to have interactive planning. You want to always read the plan that the coding agent is executing and then be able to modify or discuss the plan either before it's working or while it's working. Before it's working is basically what Kiro and the others are pioneering, which is spec-driven development. While it's working, as well — I think you should always be keeping an eye on the plan and monitoring it, just like you monitor an employee. You've seen a lot of non-interactive planning, right? I think it's very common for these coding agents to just kind of lock your input and not let you answer until it's done with its first stage and it hands over control to you.
5. Exploring Interactive Agent Collaboration
Non-interactive limitations, good interactive planning, and the async valley of death analytics.
So here's non-interactive, right? When you type into your coding agent, it blocks. You're like, oh no, that's not exactly what I meant when I typed it in — but you're not allowed to say anything. At best, it can queue your messages before it completes its response, and then you can actually start responding again. So that's non-interactive. Bad interactivity is when you have to keep monitoring it in order for it to keep any action going.
Good interactive planning is a much tighter sequence of back and forth between you and the AI, but I think the ideal interactive agent is you and them working collaboratively on a plan that is agreed upon and also parallelizable. You can see my little attempt at visualizing how you can spin up multiple agents to take on independent little tasks and sub-agents. That's something that we explored in something that we're going to bring up next. So that's idea one, interactive planning.
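One way to picture the difference between a blocking agent and an interactive one is an agent loop that drains a user-message queue between steps, so the plan can be revised mid-flight. This is a minimal hypothetical sketch of the shape, not any particular product's implementation; `executeStep` and `revisePlan` are assumed callbacks.

```ts
type Step = { description: string };

async function runPlan(
  plan: Step[],
  inbox: string[], // messages the user typed while the agent was working
  executeStep: (s: Step) => Promise<void>,
  revisePlan: (plan: Step[], feedback: string) => Promise<Step[]>,
) {
  let remaining = [...plan];
  while (remaining.length > 0) {
    // Drain any feedback before each step — the user is never locked out.
    while (inbox.length > 0) {
      remaining = await revisePlan(remaining, inbox.shift()!);
    }
    await executeStep(remaining.shift()!);
  }
}
```

A non-interactive agent is the same loop with the inner `while` deleted: your input only matters once the whole run is finished.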
Idea two is something that we call the async valley of death. This is actually an abstraction of some of the analytics queries that I ran from inside of Windsurf. You can see that there's a sort of arbitrary cutoff at the five-second flow window, which is something that I'm holding the team to. Basically, when the model starts to run more and more autonomously, the distance between the end of session — or the first engagement and the second engagement — gets a lot longer. And actually that starts to break flow. So we actually started to measure this. And this ties to actual research being done in human-computer interaction, where basically your ability to wait on things is related to the complexity of the task. So here, for example, for complex tasks, people can wait up to 8 to 12 seconds because they deem the task to be complex. But if it's just simple typing, they expect 50 milliseconds.
6. Enhancing Flow and Code Understanding
Increased flow disruption, the semi-async valley of death, focus on codebase understanding.
So our estimate is that your probability of breaking flow increases geometrically, about 10% every second that passes while you wait for an agent response — meaning you have close to a 100% probability of breaking flow if you wait for more than 10 seconds. So this is something that we started to actively design for. The profile of Sonnet and the other longer-running models started being hard to use. We noticed that productivity declined, and we were wondering why that is. These things are more agentic, and they're still relatively fast — why is it that we need to shift that profile?
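Taking the stated model literally — an independent ~10% chance of breaking flow each second — the compounded probability is easy to write down. Note that this curve reaches about 65% at 10 seconds, so "close to 100%" reads as a round-up; the point is the fast early growth.

```ts
// P(flow broken by t seconds) under a geometric 10%-per-second model.
const pBreakBy = (seconds: number) => 1 - Math.pow(0.9, seconds);

console.log(pBreakBy(1));  // ≈ 0.10
console.log(pBreakBy(5));  // ≈ 0.41
console.log(pBreakBy(10)); // ≈ 0.65
console.log(pBreakBy(22)); // ≈ 0.90
```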
We coined this the semi-async valley of death. Basically, as you get to the middle part of the chart, it's not enough to delegate to a background agent, but it's also not fun to wait, so you start context switching a lot. And I think a lot of the model labs are focused on the right side of the chart, where it's extremely parallel, extremely background, and longer and longer autonomy — up to 10 hours and more. Something that the agent labs are trying to focus on now is syncing the human and the AI, like mind-melding with the machine to focus on deep work, focus on the hardest problems. So the general idea is that if it's commoditized — if you know it's been done before and you know the LLM is going to be able to achieve it — obviously shoot it off. But if you're not confident, or if you don't know what you want, you probably need to sync back and forth between you and the AI. And there you want as fast latency as possible.
The most interesting one is the general idea of improving codebase understanding. This is the most recent thing that I shipped. It's called Codemaps. I think the tagline is my favorite part of it: basically the principle that you should try to understand code before you vibe it. I don't really care if you use a coding agent or you don't — you probably want to understand the thing that you're trying to do. Because if it starts to get away from you, then you start to have a code base that you don't control. So what a code map roughly looks like is this: a visual representation of the modules and the flows of your code. And you can switch between that and a hierarchical, annotated view of your code base, and you can click around and sync with that. I could probably show you. I'm not sure if I can, actually. Yeah, I could probably show you in here.
7. Improving Code Maps for Efficient Engineering
Hierarchical code maps for better code understanding and prompt engineering.
This is an example of one of the hierarchical code maps. I can also represent it visually here. But the idea is that you don't have to read the exact text; you can understand the flow of code from, let's say, front end to back end, or from one service into a database, or anything else. And it's nicely annotated as well.
I think the general idea, or the learning that I had, was that you end up prompting better as well. So whether or not I'm going in there and I'm like, okay, I just needed to edit this part of the code base, I can just jump straight to that and manually edit it. Or I can just refer to it and say update this to the latest, and just chuck that into the AI chat. And it basically becomes much better automatic prompt engineering, but you know exactly every single thing that went into that code base, because you can navigate it as a file system on the left.
I think I really like that pattern and I try to recommend it. The general idea, though, is that the further you are from code, the more sloppy you end up shipping. This is my attempt to illustrate it. I just love abstract illustrations of this. Before AI coding, you were basically in red. You're in charge of the intent of the code, reading of the code base, thinking about what to write, and writing that code. A lot of coding agents start moving you towards just, all right, we'll take it from here. Just give us a prompt, we'll take it from here all the way.
8. Optimizing Code Control and Skill Investment
Interplay between human and AI in code control, enhancing coding ability, and investing in coding skills.
I think the nice mind meld of human and AI that lets you fully stay on top of your coding agent and control it to do exactly what you're intending to do is this nice interplay, alternation between red, blue, red, blue, red, blue, of you should take a turn and I should take a turn, but we're in this conversation, constant conversation about the code. Even if you don't use a coding agent, you can still use a coding agent to read and not write. Every single line of code can be human authored, but it can be accelerated a lot.
I think my favorite chart of this — I forgot to bring it up here — is this understanding or scaling of your coding ability. If Cloudflare will just cooperate... I can't bring this up. Okay, Cloudflare is not cooperating, which is really screwing my demos today. Just try to understand: the more sync you have with your code and coding agents, the better.
I think my last tip is something that is relatively controversial, because a lot of people are trying to optimize their $20 a month plans for coding. You should probably... You're a decently well paid software engineer, and you care about your craft, and you care about being part of the future by coming to conferences like this. You should probably be spending a lot more than that at this point in time. This is not from a point of view of me trying to sell you anything. You choose whatever you want, but it is actually super cheap.
9. Human-AI Interaction and Sync Cost
Illustrating the constant human-AI interaction and the cost of syncing with AI for different modalities.
Let me just do a little mental exercise for you to illustrate. These numbers will not change as long as humans are recognizably human. I can reasonably be sure, as long as humans are roughly the same a thousand years from now, these will probably still be the same. Our typing word rate is 40 words per minute. Our reading is 200-ish. Speaking is 150, 180 if you're like me. Listening 300. These basically do not change. This is the input and output of humans to AI.
If you do the math — if you just left this on, transcribed everything, and piped yourself into the machine 24 hours a day without sleeping — the cost to sync you 24 hours a day into GPT-5, for text, is 150 times 60 times 24: roughly 200,000 tokens. Anything that you type, anything that you say, fits into the context window of one API call. And that costs 9 cents to ingest. If it's voice, it's a little bit more multidimensional; that costs $4 to $8. If it's vision, that can get really expensive — up to $200 a day.
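Here is the back-of-envelope version of that arithmetic. The words-per-token ratio and the per-million-token price are assumptions (prices shift; ~0.75 words per token is a common rule of thumb), so treat the output as order-of-magnitude only.

```ts
const WORDS_PER_TOKEN = 0.75; // assumed rule of thumb
const SPEAKING_WPM = 150;

const wordsPerDay = SPEAKING_WPM * 60 * 24;         // 216,000 words
const tokensPerDay = wordsPerDay / WORDS_PER_TOKEN; // ~288,000 tokens

const PRICE_PER_M = 0.25; // assumed mini-tier input price, $ per 1M tokens
const textCost = (tokensPerDay / 1_000_000) * PRICE_PER_M;

console.log(tokensPerDay); // ~288,000 — fits in one long-context API call
console.log(textCost);     // ~$0.07/day, the same ballpark as the 9¢ quoted
```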
10. Balanced Approach to AI Productivity
Discussing the cost of multi-modality sync with AI, encouraging a balanced approach to leveraging AI for productivity.
But just on a very basic level, if you add up the basic tier, the mini tier of 9 cents plus $4 plus $8, you get roughly $12 a day of full multi-modality sync if you just left it on at all times. And translating your human output and your intent into the code that you write, I think it's a really interesting idea.
The tools for this don't exist today; I expect them to exist this time next year, and I hope that you've previewed that a little bit. I'll leave it there — I have more stuff to show, but I'm going to run out of time very soon. I think the main idea I hope you take away is that a lot of people are going to tell you that you should be "pair programming" by just giving stuff to AI.
I want you to pull it back. Pull AI back from the brink of taking away everybody's jobs, into augmenting your own and making you much more productive. And that's something that I'm very pro doing, and I think that we can do it if we just let ourselves spend tokens like it's water. That's it. Woo! Thanks so much for the talk, thanks for being here with us. Yeah. It's a pleasure. It was really unfortunate that even my website wasn't loading. I noticed that Cloudflare is a sponsor outside, so I should go hit them up. Oh, it's a fun day when we realize how much of the internet goes through so few companies.
11. User-Centric Generative UI Future
Discussing the importance of user-friendly interfaces and the potential future of generative UI.
When I have a demo that I didn't know relied on Cloudflare, I'm like, okay, everything relies on Cloudflare. We're doing this stuff live. It's always an adventure. Can we give it up for all of the speakers, and especially Swyx, who've been dealing with live demos for us this whole time? Woo! Yes! Thank you. Live demos. It's like inviting chaos into your house. It's great.
Okay, we got some questions flowing in. And I guess we can start with the big ones, like what do you think is the ultimate future of generative UI? Yeah, I think that people should be creating the interfaces that they are familiar with. And I think a lot of us are familiar with Excel or Google Sheets and I hear a lot of these stories that are really impressive. I can't remember who exactly said this on a podcast that I was listening to recently, but they're like, we spent so long creating this custom, beautiful dashboard of Tailwind and Radix and all the beautiful, gorgeous stuff, only for them to export a CSV, dump it into Excel and then make a spreadsheet of their own.
Because it's their own. Because they made it. They understand every part of it. And I think that's something where we, as an industry, basically haven't killed the spreadsheet — nor maybe should we. We should put that freedom back into the hands of users. I definitely don't see enough of the React community doing this. When I go into, I don't know, an unnamed React infrastructure company's dashboard, and I see all the lines of options here and all the lines of options there, and I don't know which to click — I wish I could Command-K, but even the Command-K doesn't really give me exactly what I want.
12. Empowering Users in Generative UI
Discussing the future of user-controlled dashboard creation and the shift of power from designers to users in generative UI development.
I wish I could just make my own dashboard inside the dashboard that you already gave me. And I think that probably does exist in our future and we, collectively in this room, could probably make that happen. We just have to decide that it's worth working on. Mmm. Mmm. Something for us all to think about. And I love this idea also that we can all be kind of, we don't have to fit into the boxes of other people's tools, we can kind of create the tools that we need. Yeah. Yeah.
I think, like, to me the hot take is that designers are too powerful. We should take their power away and give it to our users. Because designers will be like, oh, let me decide what info density there should be on this page. Let me decide how wide this thing is. Let me decide the colors, or even just the arrangement of the information. Well, at this point, no. It's actually very, very cheap to just generate everything for each user, store it, and just re-display it every single time. People have this strawman argument that generative UI is unreliable and people need reliable dashboards. Well, yeah. Have you heard of caching? It's not that hard.
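The caching point fits in a few lines: the generative step runs once per user, the result is persisted, and every subsequent visit re-renders the stored spec deterministically. A minimal sketch, assuming a hypothetical `generateDashboardSpec` LLM call; the `Map` stands in for any KV store.

```ts
type DashboardSpec = { layout: string; widgets: unknown[] };

const cache = new Map<string, DashboardSpec>();

async function getDashboard(
  userId: string,
  generateDashboardSpec: (userId: string) => Promise<DashboardSpec>, // hypothetical LLM call
) {
  const hit = cache.get(userId);
  if (hit) return hit; // reliable: the same spec renders on every visit
  const spec = await generateDashboardSpec(userId); // generative step runs once
  cache.set(userId, spec);
  return spec;
}
```

Users get a stable dashboard; regeneration only happens when they explicitly ask to change it.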
I love this. I think there's going to be plenty to continue this discussion in the Q&A spot after this session. But meanwhile, we got a lot of upvotes here for a question around the UI demo that you were hoping to show, and that folks can presumably follow you to find out more about. It's basically a tool to create tools. A lot of our process right now is: we have an app, and it has a set of tools that people can call, and that's great. And when people have new feature requests, well, it has to go through us to make updates to the code base for the users to use. But why don't we just cut the developer out of the question altogether? There's also some water in the podium. I'm sorry. No, you're good. You're good. I'm, like, not the healthiest right now.
13. Evolution of User-Centric Frameworks
Discussing the evolution of frameworks from developer-centric to user-centric for creating customizable UIs and the concept of malleable software coined by Geoffrey Litt.
Oh, no. When is AI going to be able to fix that? Just kidding. If you listen to Mark Zuckerberg, within the century. I have a podcast with him that I am still really impressed by. Okay. So, yeah, it's a tool to create tools. The general idea is that the concept of a framework evolves. A framework today lets developers bake their best practices into a nice little cozy rail so that they don't go off and forget things, and also so they can build an ecosystem. But I think the framework for end users looks very different.
And I think we as developers should stop being so precious about our jobs and let end users create the UIs that they want. If people want to learn more about this concept — I didn't invent it. Geoffrey Litt coined it as malleable software: software that you can mold to fit you. And I think that's something that people have always wanted to do. The best that we as an industry have come up with is light mode/dark mode toggles. And even then we don't manage to do that very well. So, yeah, it's been going too slowly, and we need to accelerate it way more — and we have the tools now.
Awesome. Awesome. You heard it here first, folks. Stop being so precious about your jobs. No. No, but it's true. I think we also have this kind of prerogative to make sure we're thinking about who this tool is for, and not just what the life of someone maintaining this tool is going to look like, which I think we all feel deeply in our bones. Great. OK, we have some more specific questions around some of the tooling that we just saw. Yeah. The Codemaps feature you were using — can you tell us more about that? So, Codemaps is part of Windsurf. It's the editor that we work on. It's part of Cognition.
14. Industry Adoption of the Codemaps Concept
Pushing for industry adoption of the Codemaps concept to enhance code quality and readability, exploring its usage with Windsurf, and potential new dev tools enabled by a shared language for interacting with code bases.
I think it's a generalizable idea. Part of my role is just to extract ideas that are generally good for the entire industry. I think Cursor and Claude should absolutely adopt it. I've actually been pushing for us to open source the whole format so that it becomes like the MCP of code, because this is basically automatic context engineering for any coding agent. And I think that is worthwhile, and it improves a lot of the code quality that is generated — but it's also human readable, which is an underappreciated part of engineering this whole thing.
So, to explore the concept of Codemaps, does one need to use Windsurf, or are there other ways? Right now, you should use Windsurf. You can also just kind of browse this stuff: Codemaps are publishable on the website, so even if you don't download Windsurf, you can just browse the website. Stay tuned — I haven't got the sign-off to open source the whole thing yet. Gotcha. Well, perhaps with the enthusiasm of the rest of us behind you. Yeah. It's a proof of concept of, like: well, okay, if you had a .codemap file today that you could generate and use, what would you do with it? Right? Because we kind of already did the mapping. I think the interesting thing is new dev tools that don't exist today that would be enabled by a common shared language for how to interact with a code base.
And so, basically, Cognition has started to show a little bit of how it thinks internally and expose that in a human-readable way. And I'm not really sure, yeah, what kind of new dev tools that offers. I'm very interested in AI SRE. I think that if you tied a code map to any outage incident, you would probably just have a resolution playbook ready for you when you open up your PagerDuty thing. And I think that's probably a good thing for outages like the one today. I guess there will be some interesting follow-up chats on that. Fabulous. I think the challenge for us is that we've done this for a single repo, and, like, small N — let's say N smaller than five. We've had Codemaps requests for 10,000 — like, one mapping for an entire organization, enterprises, all the repos. And I don't know what that looks like.
15. Code Maps and Project Complexity
Discussing the impact of project size on AI effectiveness, emphasizing the importance of breaking down problems and utilizing code maps for better code management and context understanding.
Gotcha. Gotcha.
Well, I guess that brings us to a next question, which is, you know: first of all, do you agree that, without fail, AI becomes less and less effective as the size of a project grows — or, I guess, maybe the surface area of a project, or the size of the code base?
So the classic answer that every coding agent company will give you is to break stuff down into smaller problems, which is so obvious it's barely even worth mentioning — but yes, it should be done if you haven't thought about it. Actually, the question is very interesting, because I feel like code maps are kind of the solution: just have recursive, hierarchical mappings of code bases for the LLM to traverse.
And I think then you start managing your code and your context a lot better, and you stop stuffing junk in there. Right now the state of the art is: concatenate everything, chuck it in there, and hope it works. You can do recursive summarization along the conversation thread, but that's not as good as producing a wiki — a deep wiki, is what we call it — of the code base, and understanding symbolically what connects to what. We productionized that as Codemaps, but you can do that same approach for any of your coding as well.
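Windsurf's actual Codemaps format has not been open-sourced, so here is only a guess at what a recursive, hierarchical code map might look like. Each node summarizes a region of the code base and points at its children, so an LLM (or a human) can drill down a branch instead of ingesting everything at once.

```ts
interface CodeMapNode {
  name: string;            // e.g. "auth service"
  summary: string;         // annotation: what this region does
  files: string[];         // concrete files the node covers
  edges: string[];         // names of nodes this one calls into
  children: CodeMapNode[];
}

// Context assembly: walk top-down, include summaries, and only expand
// the branches relevant to the task — the opposite of "concatenate
// everything and hope it works."
function contextFor(node: CodeMapNode, topic: string, depth = 0): string[] {
  const line = `${'  '.repeat(depth)}${node.name}: ${node.summary}`;
  const relevant = node.children.filter(
    (c) => c.name.includes(topic) || c.summary.includes(topic),
  );
  return [line, ...relevant.flatMap((c) => contextFor(c, topic, depth + 1))];
}
```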
16. Code Map Guidance and Maintenance
Discussing the importance of code map guidance and keeping code maps up to date in fast-moving projects through regular refreshing and regeneration based on git commits.
Gotcha. Gotcha. So, like, code maps — charting it all out, as it were, not to use the technical term of code maps — and breaking that context down into those smaller chunks. Would that be your guidance? Yeah. I think it's what you would do if you took the time to study the code base — but your pace of reading is limited by the sort of human constants that I identified there, and the pace of AI reading is much faster. So you can actually just throw money at the problem. It's fantastic.
Great. All right. I think we have time for just about one more question here. But don't forget there will be a speaker Q&A spot in the glass globe room outside of the planetarium here. Yes. And so, one last question — I guess we will let the people decide. Yeah. So how about in large and fast-moving monorepos? How do you keep code maps up to date and in sync with what's happening as the project moves? Right now, code maps are snapshots, so you can basically think of them as tied to your git commit. And so every single time you update your code, they should probably just be invalidated and regenerated anew.
So right now you click the refresh button. That's how you keep them up to date. We haven't really come up with a better approach than that, because it's the simplest one. It's kind of like the philosophy of React: just regenerate the whole UI and it's consistent, right? Like, well, yeah, it would be more efficient to do incremental updates, but consistency really matters when you're trying to map something. And if a concept changes due to some minor change that we don't know about ahead of time, we should probably just re-execute everything. Gotcha. Gotcha. All right. Well, thank you so much for being here with us and for this very interesting session and discussion. If folks have more questions — or want to get all of the questions we didn't have time for answered — you can find Swyx at the speaker Q&A spot after this talk.