1. Introduction to Pictorial Programs
This talk is about making pictures of programs and how visuals can make programming concepts less abstract, easier to understand, and more accessible. Visual representations bring invisible, abstract programming concepts down into the embodied world, where we live and interact with physical objects. Programming concepts are abstract ideas that exist in a liminal space, separate from our highly visual, spatial, and physical world, and that separation is part of what makes programming difficult.
Hi! I can't really see you all because of the blinding light, which is really useful, but this is going to go well. Good, so, yes, we're up. This talk is called A Picture Worth a Thousand Programs, and it's going to be about making pictures, and specifically about making pictures of programs, and hopefully about making pictures worth a thousand programs, as the traditional saying goes.
So, before we dive in, I want to quickly introduce myself and let you know why I'm actually up here on stage talking to you about pictorial programs. So, my name is Maggie. I'm a designer, art director, illustrator, metaphor nerd, and tangentially I also build things with React. I spent the last five years working as the art director at Egghead.io, which is an education platform for web developers, but as of next week, I will be moving on to a new role, leading design at Hash.ai. I've spent my whole career so far creating visual representations of programs. During my time at Egghead, I created hundreds of cover illustrations for the courses we taught. Each of these would start with an abstract programming concept, and I had to find a way to visually represent it in a single image. I learned to rely heavily on visual metaphors for these: styling CSS became painting a house, and organizing types became suits in a deck of cards. I have made a lot of illustrated diagrams to explain how JavaScript prototype inheritance works or what happens when you flatten an array. One that became popular was a visual essay on what APIs are and how they work, which was told through small robotic waiters that bring you the data you ask for. The goal was to make complex technical topics relatable and easier to understand. I also recently collaborated on a project called Just JavaScript with Dan Abramov. It's a JavaScript course that teaches the core mental models of the language through visual diagrams and animations. We worked together to develop a whole visual language that's a bit more formal than some of the other illustrations I showed. Every piece of syntax is correlated to a specific visual shape, and we used the system to explain concepts like assigning properties or object mutation. I'm not running through all this work to show off; I just wanted to give you context around the kind of programming visuals I've made in the past, so you have a sense of what I mean when I say pictures of programs.
So in this process of transforming programming concepts into visual images hundreds of times, I've been forced to think a lot about the way we represent and communicate complex programming ideas, and I've come to believe that visual representations have a lot to offer us here in the land of code. In this talk I'm going to show you how visuals can make programming concepts less abstract, easier to understand, and more accessible to more people. I'm also going to show you why visuals are so special, and the reason turns out to be relatively simple: visuals bring invisible, abstract programming concepts down into the embodied world. The embodied world is where you and I live. Everyone watching this talk has a body, and you use it to interact with physical objects around you and move through space and experience events over time. Everything you know about the world is mediated through your body, and this fact is so fundamental that we sometimes completely forget it. And programming concepts are abstract ideas that do not live here in the embodied world with us. They are imaginary objects and functions that exist in a liminal space that feels like it doesn't obey the same laws of physics as we do. And we can only interact with them through this disembodied experience of typing linear text characters into a code editor. This is precisely what makes programming so difficult. We are trying to reason about and work with things we cannot see or touch, when we are creatures who are evolutionarily adapted to function in a highly visual, spatial, and physical world.
2. The Limitations of Text in Programming
Visuals can help bridge the gap between our embodied human world and the disembodied machine world we're trying to program in. We'll explore this through three questions, starting with: what's wrong with text? All of our current programming languages, tools, and documentation are overwhelmingly text-centric. The history of programming makes it fairly clear how we ended up in a text-heavy world, and there are plenty of logical reasons why we rely so heavily on text. But the abstract nature of text is also what removes it from our embodied experiences in space and time.
And I think that visuals are a big part of bridging that gap between our embodied human world and the disembodied machine world that we're trying to program in. So here's the plan. We're going to explore this topic through three questions.
We're first going to ask what's wrong with text, then we'll explore what can visuals do that text can't, and finally we'll go on a very brief history tour to find out haven't we already tried this? After all, there are no new ideas under the sun and many, many people in the past have explored ways to make programming more visual. We're going to look at what's already been tried and what opportunities still lie ahead.
So first, what's wrong with text? This is an important question to ask because everything we do in programming is expressed in text. Every single app you've ever worked on looks like this, right? It's text arranged in lines going from left to right and top to bottom. Here's every documentation website you've ever used. Here's every blog post you've ever read. All of our current programming languages, tools, and documentation are overwhelmingly text-centric. You sometimes get diagrams here and there, but it's really slim pickings. If I had to guess about the balance of text to visuals in our industry, I bet we're at 98% text and 2% visuals. This is not based on an official survey, and I couldn't find anyone who has done an official survey, so this is just based off my personal experience in the web development community. But if you take a minute and think over all the code and documentation that you interact with on a daily basis, I'm betting you're gonna land on a similar estimate.
If we look at the history of programming, it's fairly clear how we ended up in a text-heavy world. This is a computer, circa 1970. You'll notice the lack of a screen. You had a keyboard and a stack of punch cards, and the only thing you could do was type linear text to create programs. This design constraint meant that all our early programming languages were text-based, and once you establish text as the primary paradigm of a field, it becomes really hard to break away from. Especially in an industry where we rely so heavily on lower-level abstractions created by all the programmers who came before us. There are also plenty of logical reasons why we rely so heavily on text in programming. Written words and syntax are an ideal medium for expressing abstract logic. It's quick to create, it's flexible, and it's easy to move between applications through copy and paste without worrying about compatibility. You can pack a dense amount of information into a very small space, and you can be very specific about what you mean, which obviously matters when we're talking to computers, which have no sense of nuance.
So far, text has been working great for us in programming. But some of text's greatest strengths are also its greatest weaknesses. The abstract nature of text is what removes it from our embodied experiences in space and time. When we code, we're writing a set of hypothetical instructions to run on someone else's machine at some point in the future in a time and place that we'll never know about. This level of abstraction removes the physical, spatial, and embodied qualities that humans rely upon to understand the world around us. This can be good in some ways, right? If we want to write a function like fetch user data, we don't have to define the size, shape, or color of it.
3. Visualizing Programming Concepts
Visuals like boxes and arrows can help explain programming concepts, like the fetch user data function, to beginners. By using familiar physical qualities, such as size, shape, color, and spatial relationships, we can make programming more comprehensible and accessible. These visuals leverage our pre-existing embodied knowledge to demonstrate how programs work, which linear text alone cannot achieve.
It's just a simple function floating in machine land. If you already know what this function is and how it works and where it's located, this level of brevity is great. But imagine someone new to programming who has never written a function to fetch user data and has no pre-existing mental model of how it might work. One of the easiest and most effective ways to make it comprehensible to them is to explain it in familiar terms by using physical qualities they already understand like size, shape, color, and spatial relationships. Which might give us something like this to help explain what a fetch user data function does. The visual doesn't have to be crazy complex or beautiful. Boxes and arrows work great. We are certainly allowed to use text labels to make the imagery clear. Visuals like this allow us to use our pre-existing embodied knowledge to show how programs work in a way that linear text can't.
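To ground this in something concrete, here is a minimal sketch of the kind of function being described. The name fetchUserData, the /api/users endpoint, and the response shape are all illustrative assumptions rather than anything from the talk:

```js
// A hypothetical "fetch user data" function (endpoint and shape assumed).
async function fetchUserData(userId) {
  // Send a request out to a server somewhere in machine land...
  const response = await fetch(`/api/users/${userId}`);
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  // ...and hand back the parsed user object once it arrives.
  return response.json();
}

// Usage: const user = await fetchUserData(42);
```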
4. Visuals and Cognitive Metaphors in Programming
Visuals reveal three aspects of programming that we can't see in linear text: fundamental metaphors, spatial mappings, and program behavior over time. Metaphors in programming are cognitive metaphors based on our embodied experiences of the world. We use metaphors to understand and communicate abstract concepts. The field of cognitive metaphor and embodied cognition, developed in the 1980s by George Lakoff and Mark Johnson, explores this topic extensively.
So I have just started to hint at the question we are going to look at in part two, which is: what can visuals offer us that we can't get from linear text? I think visuals reveal three aspects of programming that we are unable to see in linear text. They reveal fundamental metaphors embedded in our programming languages. They reveal spatial mappings that we use to reason about how our programs are structured and how data moves through them. And they reveal how our programs and data behave over time. These things are all implicit in the programs we write, but they are not shown explicitly in the medium of linear text.
So let's start with metaphors. Just to make sure we are all on the same page, let's establish that metaphors are thinking tools that allow us to understand one thing in terms of another. So let's say we have thing A here, which we understand, and thing B, which we don't. We map the qualities of thing A onto it. If we say corruption is a disease, we understand that corruption spreads, is difficult to overcome, and if left unchecked, can kill. Similarly, we say life is a journey. There are many paths our life might take, and they vary in length, and they all have a beginning and a final destination. When I talk about metaphor in the context of programming, I don't mean the creative, fanciful metaphors you find in poetry, like taking the road less traveled or wandering lonely as a cloud. Those are called figurative or poetic metaphors, and they're the sort that we're often warned not to use in technical tutorials, since elaborate, poorly chosen metaphors can be more confusing than helpful. I'm talking about a much more fundamental type of metaphor that lies at the heart of all abstract thinking, including programming. These are called cognitive metaphors, since they enable cognition on a much deeper level. These cognitive metaphors are based on our embodied experiences of the world. We have all of these non-physical things we need to communicate to each other, like emotions and thoughts and ideas and programming concepts. In order to understand them and talk about them, we use our experience of the physical world around us as a metaphor. If we look at the way we talk about abstract things, this becomes obvious. We talk about ideas in terms of light when we say that's a really bright idea or that really illuminated the problem. We talk about emotions like they are objects. We'll say he hid his jealousy or she doesn't handle anger very well. We can also use force and motion metaphors to describe experiences. We can say I found your talk moving or your talk really touched me. We even talk about programming in terms of temperature: we have hot reloading in React, or the JavaScript landscape is heating up. This isn't a theory I've just come up with. These principles come from the field of cognitive metaphor and embodied cognition. They were first developed in the 1980s by George Lakoff and Mark Johnson, who have written numerous books on this topic, and it's become a major area of research in cognitive science. These two books, Philosophy in the Flesh and Metaphors We Live By, are two of the major works in the field, but I can't go too deep into it here.
5. Physical Embodied Metaphors in Programming
Programming relies heavily on physical embodied metaphors to bridge the gap between the abstract machine world and our tangible human world. We use metaphors and abstractions to simplify the complex process of programming and make it more accessible. These metaphors serve as a bridge between the microchip's logic gates and the higher-level languages we use.
You could do a whole Ph.D. on it. And just like every other abstract topic that we can't see or touch, programming relies heavily on these physical embodied metaphors, and we have to, because programming itself is a game of abstractions, right? We write a JavaScript file, and what we're really doing is telling a microchip to flip a bunch of logic gates using tiny electrical pulses. Doing that directly is tedious and difficult for humans, so we've developed a stack of elaborate metaphors that make it faster and easier. Some people might prefer to call these abstractions, but for the purposes of this talk, assume metaphors and abstractions are roughly the same thing, and we can debate the differences on Twitter later.
6. Visual Metaphors in React
We simplify binary code into machine code, into higher-level languages like JavaScript, into GUIs. Components in React are containers that hold sets of UI elements, and they are structured in a hierarchical tree, similar to a family tree; these borrowed human concepts let us interact with machines in a way that feels natural to us. Visual mediums make it easier to represent and explain React by leveraging our pre-existing knowledge of the physical world. Spatial concepts, like up, down, left, right, in, out, big, small, are used as metaphors to understand programs and their structure.
We simplify our binary code into machine code which we simplify into higher level languages like JavaScript which we simplify into GUIs. And at every step of this process, we're trying to make the abstract machine world resemble our tangible human world. Because the closer we move towards intuitive embodied knowledge in the upper right-hand side of the scale, the easier it becomes for us to understand what's happening in a system.
Components in React are a great example of these embodied metaphors in action. So components are essentially containers, you know. They hold sets of UI elements for us. So a card component might have an image, a button, and a paragraph inside it. The CPU that's going to render this component on the screen knows nothing about containers. It only knows machine code and how to make the right pixels light up. The container is a metaphor we humans need in order to manage and organize the code we write. The only reason you know what a container is, is because as a child, you dumped sand into a bucket and dumped it back out again. And through that physical, embodied experience of doing and learning, you know that containers hold things. They have insides and outsides. They have boundaries. All of these concepts are essential and necessary for you to understand how components work in React. So let's take another one. In React, we structure our components in a hierarchical tree where everything is connected back to a single root component. You know what a tree is from seeing thousands of trees, and you get that they have many branches that spread out from a single root, an understanding that allows you to work with component trees in React. The tree metaphor is also kind of a double metaphor, since it's based on the idea of a family tree. We have parent and child components that inherit props in the way children inherit qualities from their parents, and these concepts that we're borrowing from the human world allow us to interact with machines in a way that is easy and natural for us.
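As a rough sketch of those two metaphors in code (the component names and props here are invented for illustration): a card component is a container with things inside it, and components nest into a tree that hangs off a single root:

```jsx
// The container metaphor: a Card holds an image, a paragraph, and a button.
function Card({ imageSrc, text }) {
  return (
    <div className="card">
      <img src={imageSrc} alt="" />
      <p>{text}</p>
      <button>Read more</button>
    </div>
  );
}

// The tree metaphor: parent and child components branching from one root.
function App() {
  return (
    <main>
      <Card imageSrc="/coffee.png" text="A card about coffee" />
      <Card imageSrc="/tea.png" text="A card about tea" />
    </main>
  );
}
```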
Since your understanding of React is based on your pre-existing knowledge of the physical world around you, it would be helpful if we could make that explicit in the way we represent and explain React. And visual mediums allow us to do this. You can certainly read about React component structures and passing down props, and then form a mental image in your mind based on your understanding of physical trees and family inheritance, and then apply that knowledge to writing linear code, all without ever seeing it explicitly visualised. But that's called making it harder than it has to be. So let's move on to the next visual quality, space. As humans with bodies, we inherently understand a large array of spatial concepts like up, down, left, right, in, out, big, small, and we use these as metaphors, just like we use physical concepts, to understand programs. We also use spatial concepts to talk about the structure and behaviour of programs. If you think about the way we talk about the internet, there are very clear physical directions to it. We upload data to the cloud above us and download files to our desktops.
7. Spatial Principles and Change Over Time
We use spatial principles from our embodied experience of the world to talk about and understand React. Visuals also come in handy for change over time, because they allow us to see multiple points in time within a single frame. The syntax of the useEffect hook is a good example: four different versions of the hook will each behave differently when they execute in the browser.
We look through a browser window. We browse web pages moving from left to right, so pages you visited in the past are to the left, and pages you're going to in the future are to the right. This format is based on our Western cultural mapping of time to space. We think the past is on the left and the future is on the right, but not every culture does that.
Specifically in React, we use our understanding of vertical directions to think about how data moves. React has a hierarchical order to components: parent components are above children, and they pass data downwards, which means we have gravity in React land. The idea of prop drilling is another one. It suggests drilling downwards, where you have to pass data deep into your tree, so it also has depth. When we talk about side effects, we are using our understanding that there is a center and a periphery, where things happen on the side, so when we run a function like a setTimeout inside a useEffect hook, we understand it is accessing something outside our central component. I could go on. Our apps have front and back ends, we shallow merge objects; you get the point here. We use spatial principles from our embodied experience of the world to talk about and understand React. When we create visuals that show these spatial principles explicitly, it clarifies what we are already doing in our heads.
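To make the gravity and drilling metaphors concrete, here is a small illustrative sketch (all names invented) of a prop being passed downwards through layers that don't use it themselves:

```jsx
// Gravity in React land: data falls downwards from parent to child.
function App() {
  const user = { name: "Ada" };
  return <Page user={user} />;
}

function Page({ user }) {
  return <Sidebar user={user} />; // prop drilling: just passing it along
}

function Sidebar({ user }) {
  return <Avatar user={user} />; // still drilling deeper into the tree
}

function Avatar({ user }) {
  return <span>{user.name}</span>; // finally used, three levels down
}
```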
Our final element here is change over time. When we are working in a linear text editor, time is essentially invisible. We are looking at a static representation that describes a whole array of future events that may or may not happen, depending on what button a user clicks or whether our data request resolves. We are forced to use our imaginations to predict what is going to happen in all those potential futures, rather than being able to see it in some form. Our current best technique for trying to see how things change over time is console logging data along the way. Console logging feels like trying to get a program to send signals up to the surface of a dark ocean where everything is being executed out of sight. We can't see anything happening down there, and we have to keep asking for clues about how the data is changing, which doesn't feel like the best developer experience. This is again where visuals might come in handy, because they allow us to see multiple points in time within a single frame. They let us compare things side by side in a way that we can't with linear text. They essentially allow us to play spot the difference. Let's look at the syntax of the useEffect hook as an example of this. Here are four different versions of the useEffect hook, which are each going to behave differently when they execute in the browser. The first has no dependency array. The second has an empty one. The third has the value count as a dependency, and the fourth also has count as a dependency and it runs a cleanup function. By just looking at these four versions of syntax, do you have any way of knowing how this function is going to run once it loads into the browser? Specifically, do you know how often it is going to be called and when? Given the demographics of this audience, I bet you do.
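Since the slide itself isn't visible here, the four versions look roughly like this in code. The surrounding Counter component and the console.log labels are illustrative additions, not from the talk:

```jsx
import { useEffect, useState } from "react";

function Counter() {
  const [count, setCount] = useState(0);

  // Version 1: no dependency array.
  useEffect(() => {
    console.log("one:", count);
  });

  // Version 2: an empty dependency array.
  useEffect(() => {
    console.log("two:", count);
  }, []);

  // Version 3: count as a dependency.
  useEffect(() => {
    console.log("three:", count);
  }, [count]);

  // Version 4: count as a dependency, plus a cleanup function.
  useEffect(() => {
    console.log("four:", count);
    return () => console.log("cleaning up four:", count);
  }, [count]);

  return <button onClick={() => setCount(count + 1)}>{count}</button>;
}
```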
8. Visualizing the Behavior of the useEffect Hook
To better understand how the useEffect hook behaves over time, I've created visual diagrams that demonstrate the differences in behavior across the four versions. By visually representing the changes over time, we can gain a deeper understanding of how the useEffect hook works.
It's because you've memorized the syntactical meaning here, rather than knowing because the answer is visible in the syntax. What we need to do here is compare four things that behave differently over time, and the best way to do that is to visually show what changes over time. I've made a simple set of diagrams that try to show the difference between these four on a timeline. Here's our first version with no dependency array. The useEffect function is called on every render, regardless of whether our count variable updates or not. Here's our second version, where the dependency array is empty, so the useEffect function is only called on the initial render, and we still don't care what the count variable is doing. In our third version, with count in the dependency array, useEffect is called only when count updates, which triggers a re-render. In our final version, useEffect is called on count updates and runs a cleanup function after. Even if we remove the labels on these, this simple visual representation of how useEffect behaves over time gives us a much better understanding of how it works than the linear syntax is able to.
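To connect those diagrams back to something observable, this is the console output you would expect from the hypothetical Counter sketch above, on mount and after one click. It assumes standard React effect ordering, without development Strict Mode's double invocation:

```
After the initial render, all four effects run in order:
  one: 0
  two: 0
  three: 0
  four: 0

After clicking the button (count goes from 0 to 1):
  cleaning up four: 0   <- the cleanup runs before version 4 re-runs
  one: 1                <- version 1 runs on every render
  three: 1              <- version 3 runs because count changed
  four: 1
  (version 2 never logs again: an empty array means initial render only)
```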
9. Exploring Visual Programming
Visual programming has been explored in the past by incorporating graphical user interfaces into IDEs. However, these attempts have faced hard design challenges and skepticism. Visual programming is still relatively niche, but the no-code and low-code movement shows promise. Rather than building full visual programming languages, the focus has shifted to developing visual interfaces for specific use cases in programming. While building a true visual programming medium is challenging, there are easier ways to advance this effort in the short term.
So, part three: haven't we already tried this? Obviously, I'm not the first person to realise that visual mediums enable us to understand and reason in ways that are worth exploring in programming. The primary way people have tried to incorporate visuals into programming in the past is by sticking graphical user interfaces onto IDEs. These efforts fall under the umbrella of what gets called visual programming.
There have been many, many past attempts at this with varying degrees of success. I'm going to quickly whip through a few examples so you get a sense of what's already been tried. The very first visual programming language was Ivan Sutherland's Sketchpad in 1963. This is GRAIL from 1968, where we put text into boxes for the first time. This is Pygmalion from 1975, which built on that box-and-arrow model. This is LabVIEW, which came out in 1986 and is used for systems engineering, and we got a lot more intense about boxes at this stage. Here's a more modern example: this is Blueprint in Unreal Engine, which is used for 3D game development. This is Max/MSP, which is widely used for building audiovisual experiences. Here's TouchDesigner, also for interactive multimedia. This is Origami Studio, a prototyping tool built by Facebook. You'll notice the nodes-and-wires design pattern is very popular in a lot of these.
So, there's lots of promising stuff in these examples, but visual programming is still relatively niche. We've also discovered a bunch of really wicked design challenges that are hard to solve. These systems don't scale well, and they sometimes use ambiguous symbols and unfamiliar interface patterns. They try to turn everything into a box, which takes up way too much space on the screen. It can lead to literal spaghetti code. These issues have led to a lot of scepticism over the feasibility of visual programming. Being an advocate for it often feels like being Gretchen in Mean Girls. Visual programming certainly isn't dead, though. Funnily enough, the new no-code and low-code movement looks like visual programming under a new name. Many of the interface patterns that visual programming helped develop are visible in tools like Integromat and Zapier and Webflow. But rather than trying to build visual programming languages that can reach industrial scales, we have tactically moved on to developing visual interfaces for specific use cases in programming, with sensible constraints. I'm a huge fan of the visual programming agenda and finding ways to add more visual affordances into current developer tools, and there are lots of smart, impressive people working on the problem, but in many ways it is taking the hard route. Building a true visual programming medium will require overcoming a ton of design and culture and engineering challenges, and frankly it is going to take a while. But there are easier ways to advance this effort in the short term.
10. Advancing Visual Programming Efforts
We can sprinkle visuals into our existing textual world by adding diagrams, illustrations, and plugins. More visuals are needed to reveal metaphors, spatial meaning, and change over time in programming. Developers should learn from the history of visual programming, build visual affordances into their tools, and use visual explanations and documentation. Check out George Lakoff, Mark Johnson, Barbara Tversky, Bret Victor, futureofcoding.org, and the book Visual Explanations by Edward Tufte for more information.
We can simply sprinkle some visuals into our existing textual world, which means adding more diagrams and illustrations into blog posts, documentation, and learning materials, and, if we're feeling brave, building plugins for our editors and developer tools that visualise very small, scoped elements of our programs. This is essentially the low-tech paper prototype version of building a fully fledged visual programming interface.
So to wrap this up, here's the thing I want you to take away from this talk. We need more visuals that reveal metaphors, spatial meaning, and change over time to make programming easier for everyone. It will make it easier for you, since you are an embodied human who needs to learn complex abstract programming concepts in order to do your job well. It will make it easier for all the people who don't currently know how to program but are trying to learn. And it will make it easier for people who aren't developers but need to understand what we do, like product managers and designers who can't read all the jargon in our text-heavy documentation.
So what's a developer to do? I'm sure you are a humble but skilled React developer and want to help advance this goal. First, I highly suggest looking into the history of visual programming and some of the past attempts in this field. There is a lot of good prior art to learn from. If you are a current or future creator of tools for other developers, you should consider ways to build visual affordances into your libraries, plugins, apps, or frameworks. And finally, you should use and advocate for visual explanations in documentation and tutorials. That might mean making them for your own blog posts or collaborating with designers if you work on larger projects with a lot of documentation. So we can set the bar low for now. We are trying to budge this fake statistic up to, like, ten per cent. So that was a lot of information packed into one talk. This was definitely a taster session more than a complete meal, so I've put together a list of some things you can Google to learn more about this topic. These will be up on my website. George Lakoff and Mark Johnson are the ones who worked on cognitive metaphors. Barbara Tversky's work on embodied cognition is great. Literally everything Bret Victor has ever made is worth your time. And there are some really great resources on futureofcoding.org; it's sort of a visual programming enthusiast community. And the book Visual Explanations by Edward Tufte is really great for advice. So thank you very much for listening. I will post these on my website, maggieappleton.com, where I have more writing on metaphors and visual programming and that kind of stuff. And you can tweet me at mappletons. Amazing. Thank you, Maggie. Please come and join me in the office. That was truly so, so incredible.
11. Arriving at Visual Metaphors
When arriving at a visual metaphor, I verbally reason through it and rely on linguistics to analyze the language used around the concept. I pay attention to the physical words used in the documentation, such as moving, passing, and spatial terms, to design the metaphor.
And folks have tons of questions. So let's get right into it. L, the letter, asks how do you arrive at a visual metaphor? Is it something that forms unconsciously as you engage with the concept? Or do you verbally reason through it? I definitely verbally reason through it and research. So I rely a lot on linguistics and the kind of techniques linguists use to analyze the language we use around things. So when I'm trying to design a metaphor for a certain technology, I read the docs and I really pay attention to the words they're using. And they're almost always using physical words. They're moving, passing, you know, spatial things. They're talking about how things are arranged in space even though they're just writing in linear text. But I pay a lot of attention to language.
12. Stickers, Collaboration, Tooling, and Metaphors
There's interest in stickers and merch, but I don't have a store. Collaboration with the React docs team is in progress. For further exploration, visit my website. I use an iPad with Procreate for illustrations. Metaphors can be interpreted differently by different cultures, but in the web development community, we have a shared understanding.
There's more than one question asking if you have any stickers or merch of your illustrations. The people need more Maggie! I don't sell anything personally. Egghead has a swag site where they sell some of the course illustration stickers. But we tend to also give them out for free when we show up at conferences. But I don't have an organized swag store. Sorry.
And similarly, again, the people really are into it. Any chance of a collaboration with the new beta React docs team? I think I'm allowed to say. I might have some visuals coming up in the new React docs. Amazing. They're in progress.
Uh, yeah. I know, right? What resources would you recommend for someone that wants to explore these concepts further? I know you mentioned a couple. But is there anything else you would like to add? I think it's best to maybe go to my website where I have that list on the programming pictures slug. I'll add more on there. Because I have a long list. But it would take a while to read them all out.
A couple of folks are interested in tooling. What do you use to draw, to add your illustrations to the deck, et cetera? All the illustrations I draw nowadays are on an iPad with Procreate. I used to draw more in Photoshop on a Cintiq. To be honest, they're almost interchangeable, and the iPad is just better. We've just had much better advances in hardware and software over the last five years, so the iPad is kind of state of the art.
This is an interesting one. Do you think that... I'm sure there is. Is there potentially an issue of metaphors being interpreted differently by different countries and cultures, and how do we think about that? There definitely is, in that metaphors are incredibly cultural. Almost all the metaphors we use are scoped to the country we're in and the language we're speaking. But given that React and the web development community are a fairly unified place, and we all have a fairly shared understanding of metaphors and symbols, I find it's not that difficult to find metaphors that relate for everyone. Although I'm always aware that if someone in another country who doesn't have a lot of the same associations that we do sees it, there's a good chance they could miss nuances of it.
13. Leveraging Existing Metaphors in Programming
Should we leverage existing metaphors in programming languages and frameworks? It's usually best to assume the language designers chose their metaphors, consciously or unconsciously, for good reasons, and to build on them. Only when there is no clear existing metaphor does inventing one become necessary.
That is something you need to consider on a case-by-case basis. We'll need Maggies in all countries and all time zones, just in case. Should we, do you think, be leveraging existing metaphors that are already in our programming languages and frameworks? Or trying to optimize for ones that perhaps communicate something well? That's a good question. It almost gets into a programming language design question. I like to assume that the people who designed the programming language thought enough about abstractions and the API language that the metaphors they have chosen, either consciously or unconsciously, are the correct ones. I don't try to invent brand new ones on top of it if I'm trying to draw something that is genuinely meant to explain the point. But sometimes, if it doesn't seem like there's a clear metaphor, that's when you get into inventing one that fits the needs.