
Pybites Podcast
The Pybites Podcast is a podcast about Python Development, Career and Mindset skills.
Hosted by the Co-Founders, Bob Belderbos and Julian Sequeira, this podcast is for anyone interested in Python and looking for tips, tricks and concepts related to Career + Mindset.
For more information on Pybites, visit us at https://pybit.es and connect with us on LinkedIn:
Julian: https://www.linkedin.com/in/juliansequeira/
Bob: https://www.linkedin.com/in/bbelderbos/
#160 - Unpacking Pydantic's Growth and the Launch of Logfire with Samuel Colvin
Join our Pybites Community for free here
We coach people with their Python, developer and mindset skills, more info here.
---
This week we have an exciting interview with Pydantic's creator Samuel Colvin.
---
NOTE that it's best to watch this episode on YouTube, because Samuel demos Pydantic's new Logfire product as well as a bit of FastUI.
---
Delving into the origins, Samuel shares with us how Pydantic was conceived to streamline data validation, drawing inspiration from similar tools but quickly exceeding expectations in popularity and adoption.
Samuel touches on the monumental speed improvements in version 2, achieved by incorporating Rust, and shares insights into the transition of Pydantic into a company and its future vision. We also touch upon front-end development, for which he developed another library called FastUI.
And last but not least, Samuel demos Pydantic's new exciting Logfire product that was just released.
Chapters:
00:00 Intro Samuel
01:57 Win of the week
02:23 Pydantic framework and company backstory
05:10 FastAPI's part in Pydantic's growth
06:20 Adapting to framework dependencies
07:50 Making Pydantic faster with Rust
12:15 Learning Rust or not as a Python dev
14:16 Pydantic as a company
15:46 Open source ideas vs business requirements
17:20 Introducing Pydantic Logfire
19:15 Live demo (YouTube)
25:12 Resource / energy measuring with Logfire
26:41 Pydantic's vision for the next years
29:46 Doing front-end with FastUI (short demo)
31:13 Using FastUI at Pydantic and the team
33:56 Pybites ad
34:12 FastUI vs Streamlit for fast prototyping
36:46 Key skills for Python devs / open source in 2024
38:10 Work life balance / build things as a customer
41:00 Advice for entrepreneurial minded developers
42:48 Hobbies / interests outside of work
43:26 Podcast recommendation
43:50 Wrap up and outro
Links:
- Check out Pydantic Logfire here
- Reach out to Samuel on X
- Pybites Ad segment: The PDM Program
---
Connect with us on LinkedIn:
- Julian
- Bob
And to get our weekly developer / mindset emails, sign up here.
Pip install logfire, and you can get going in, like, one line of code. And effectively, the idea is it's like what I always hoped Python logging would be, but the standard library logging interface can't move forward like we can. And there's a whole bunch of innovation that's happened since that was invented in the, I guess, nineties. And so the idea is you can log anything from, like, sure, a string, but then you can log a Pydantic model or a datetime or a dataclass or even a DataFrame or something, and you can then go see it in the interface.

Hello, and welcome to the Pybites podcast, where we talk about Python, career and mindset. We're your hosts. I'm Julian Sequeira. And I am Bob Belderbos. If you're looking to improve your Python, your career, and learn the mindset for success, this is the podcast for you. Let's get started.

Hello and welcome back, everybody, to the Pybites podcast. We have an exciting episode this week. With me is Robin Beer and Samuel Colvin. Hey, guys, welcome to the show.

Hi there. Thanks so much for having me. Really exciting.

Likewise. Yeah, we are excited because of all the work you do on Pydantic, and we have a whole bunch of questions lined up. But yeah, we always start with a little introduction to the audience, who you are and what you do, and a win of the week.

So I'm Samuel. What do I do? I maintain. Well, I don't do all of the maintaining of Pydantic. We have an amazing community, both inside the company and outside, who do lots of the work on Pydantic. Up until the beginning of last year, I was maintaining Pydantic with some help, but mostly on my own as a side project. But then at the beginning of last year, I raised money from Sequoia and started a company around Pydantic. And, we'll come onto it later in the episode, but I've got some exciting news about our first product that we're launching a week today, so on the 30th of April.
So I do, yeah, I run Pydantic, the company, I work on Pydantic, the open source, and I do a bunch of other related things within the company.

That's awesome. And does the new product, which indeed we will get to, count as the win of the week, or do you want to share something else as well?

That's a good question. I don't know yet. We're not that far into the week. I've been having, on the commercial stuff, some big successes with DataFusion in Rust, so I don't know whether that counts, but yeah, having some exciting times working on Rust and databases, which I'm really excited by, so I'd call that my win of the week.

Really nice. Yeah, sounds good. And you already said that there's Pydantic, the Pydantic company and so on. So can you tell us something about how Pydantic came about, on the one side for the framework, but also the company, and what was your initial aim? Any influences from existing projects? And did you ever expect it to become such an important and widely adopted library? Like, was this the plan all along, with the idea to make a company on top of that? Or did it just grow naturally?

Pydantic started back in 2017, and I might have thought more about the name if I had known this was going to become a big part of my life. It was just like another side project. I didn't think much would come of it. I was trying to parse HTTP headers and I was frustrated that I could set up these type hints. Type hints were just coming to fruition. They existed, but they didn't do anything. They just sat there and they told you what the data should be. And that was great when you were within your Python world where you're calling a Python function and, you know, everything's typed. But at the boundaries of that function, whether you're reading, like, user input or a CSV file or HTTP headers or whatever you want, API requests, they mean nothing because they're basically informational. But there's no guarantee.
And so it started off as this experiment to see whether I could use type hints at runtime to enforce the type and coerce the data. Kind of luckily, almost by mistake though, when they were developing type hints, they had left this dunder annotations attribute hanging around, which no one was using. You know, we weren't supposed to be using it for runtime type hinting back then; it was kind of disapproved of. Now I think it's more mainstream. But yeah, it started off as an experiment, could I use them? And that experiment kind of worked, and I put it up on PyPI and Hacker News, and it got a bit of attention back then, and then I just, you know, carried on maintaining it. So it carried on being, like, initially a few minutes a week to reply to one question, then an hour a week, and then it was like an hour a day. And yeah, it built up over time.

Nice. An hour, right?

Yeah, yeah, I think you could work full time on it. Sydney, who works with us and does lots of the replying to issues on Pydantic, I think it could eat your entire life. I think we have, like, yeah, 30 to 40 issues a day, or comments a day, on the repo. So yeah, something weird happened around the beginning of 2021. I don't know what, but basically there was like an inflection point in the rate of downloads. So we were at like 5 million downloads of Pydantic a month before that, which is like a healthy number for an open source project. And then, yeah, I don't know what happened, but the gradient changed and now we're at 170, nearly 180 million downloads a month. So it's been a bit wild.

What happened? What happened then?

I don't know, but yeah, there's been this strange inflection point and it's been, yeah, growing fast ever since.

That's massive. And I think what also has to do with it is that frameworks like FastAPI adopted it pretty quickly, but that might not explain the spike, right? Because that already happened before. Exactly.
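The runtime trick Samuel describes, reading a class's dunder annotations to both check and coerce incoming data, can be illustrated with a stdlib-only toy. To be clear, this is a sketch of the idea only, not how Pydantic is actually implemented (its validation core is now written in Rust), and all names here are made up for illustration:

```python
from typing import get_type_hints

def coerce(cls, data: dict):
    """Build an instance of `cls`, coercing each field to its annotated type."""
    hints = get_type_hints(cls)  # reads the class's __annotations__
    obj = cls()
    for name, typ in hints.items():
        if name not in data:
            raise ValueError(f'missing field: {name}')
        value = data[name]
        # coerce rather than just check, e.g. the string "123" becomes int 123
        setattr(obj, name, value if isinstance(value, typ) else typ(value))
    return obj

class User:
    id: int
    name: str

user = coerce(User, {'id': '123', 'name': 'Samuel'})
print(user.id, type(user.id).__name__)  # 123 int
```

The key point is the one Samuel makes: annotations sit on the class at runtime, so a library can turn purely informational hints into enforced, coerced data at the boundary.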
So FastAPI had happened. FastAPI represents about 20% of Pydantic's downloads. So it's definitely. My impression is that Sebastian does an incredible job of promoting FastAPI and making FastAPI really easy to use. And it's obviously super attractive. It has way more stars than not just Pydantic, but almost any other library. But then what I think people do is, through FastAPI, they discover Pydantic, and then they use it not just in their API but in all of their other bits of code, which is why it's so widely used.

Yeah. So it gets pulled in a lot like a fourth-party dependency as well, right?

Yeah, I think so. It's hard to know exactly without adding them all up, but I think if you count, like, the big libraries that use Pydantic, obviously FastAPI is probably the biggest one, but then you have, like, the OpenAI SDK, Anthropic's SDK, a bunch of ORMs that use Pydantic, you probably get to somewhere around 30% to 40% of Pydantic's downloads being a dependency of something else, and then the rest is people installing it manually to go and do some task.

Yeah. And on this topic, one question I had was, so first it was working as a standalone package and then it became part of all these major important frameworks. Did that influence things, was it just working as is, or did you have some technical challenges? Did you have to make some major changes to make it more compatible with all these frameworks and different use cases?

So a few things changed. For example, I was not a big fan of JSON Schema. I didn't quite gather its value before. And then Sebastian was the one who came along and implemented JSON Schema on Pydantic v1. That was something he obviously wanted. And I think it's a great idea to stick to standards as much as possible. Type hints are kind of a standard. Obviously, JSON Schema is a standard.
And actually, I think what's funny is that these standards, although they're not particularly sexy, have become more and more relevant because LLMs and AI are so good at generating them and understanding them. So JSON Schema, SQL, Python, like, being around for ten years and having lots of stuff written about these things on the Internet is incredibly valuable for the sexiest thing of all, generative AI, as well as for people who are, like, geeks about standards. So, yeah, not many changes. As in, yeah, I talk to Sebastian quite a lot, and other people who maintain libraries, and we've done a few things to make their lives easier, but mostly there have been no particular major changes specifically to support those things.

Gotcha. Yeah, it's nice to hear. And also I think that allows you to make these improvements to Pydantic internally that then seamlessly propagate through the tool chain afterwards, leveraging it. For example, v2 became 17 times faster, which is amazing, thanks to using Rust at its core. And the question being there is, what got you into Rust on the one side, or in general into improving the performance? Was that a pain? Is that something that users described, so that you said, I need to integrate that with Pydantic?

I'm not sure everyone thinks it's been seamless. We've definitely had some friction around Pydantic v2, actually, ironically, not particularly related to the rewrite in Rust, but related to fixing a bunch of APIs that we knew were clearly wrong in v1. Probably if I had done it again, I would have fixed them in a v0 release and then released v1. But hey, that's easy to say now. Back then I was working on it in my spare time, and it's different. But actually a lot of the friction from the v2 migration was fixing stuff that needed to be fixed. Where did the Rust thing come from?
I'd say it was a number of things. Obviously, a bit like people who buy fast cars even though they spend their time driving around town, there's a bunch of people who love performance, and I definitely enjoy trying to make Pydantic faster. And you think about how widely used Pydantic is, downloaded 170 million times a month. I don't think it's a secret that it's used within, like, most big banks. It's used throughout OpenAI, for example. Making Pydantic a bit faster, one, everyone wants it to be faster, whereas other features can be a debate. So everyone wins to some degree, or at least for some people it's neutral. Most people win. Secondly, I do think the environmental impact of all the validation that Pydantic does is non-trivial, and so we can help the world a little bit by making Pydantic faster. And sure, there are use cases where that performance is really critical, as well as lots of scenarios where people care a lot about performance and actually it doesn't matter. They want their validation layer to become, like, 0.1 milliseconds instead of one millisecond, and their whole request is 300 milliseconds because of the database query, where validation doesn't matter.

But the other reason for Rust is you can be just that much more deliberate about what you want to do with each error and how exactly you want the behavior to be. And in particular, you can add configuration with little or no overhead because it's implemented in Rust. Whereas if you have, like, an if clause in Python where you're calling this other function, you've got another stack frame and you've got the if logic that, like, starts to build up. Whereas if you do that in Rust, either you can literally compile that out, as in you can use a generic and compile that piece of code, say, two different ways when you're compiling it, to be faster, or even if you have an if switch in the hot path, it's still much cheaper than doing it in Python.
So it's that capacity to add more and more configuration safely and without impairing performance that I think is so important.

Yeah. And how was the decision making for Rust? I mean, it could have also been Cython theoretically, or maybe even combining it with some C binaries or so in some way. Like, was this a clear, easy decision? Or in that case, did Rust make sense because of some circumstances?

I don't think anyone is writing greenfield projects. Obviously some people out there are writing greenfield projects in C, but I would have said Rust is a far better choice. I mean, even if you ignore the performance and you ignore the type safety and all that, just the packaging ecosystem of Cargo is, like, world beating. Even if all the code you ended up writing was in C, I'd still prefer to have Cargo around. I am not good enough to think that I could write this in C and it'd be credibly memory safe. And I think that, like, you can see the move of, like, you know, Python's just been approved as a memory safe language by the US government. Sure, you can be cynical about that, but, like, the fact is it is important. And, you know, much as people like to think that they're good enough that they can write C code that is memory safe, most of us can't. And that's the history of the last, like, you know, 30 years of our industry, that that's not the case. And that's why Rust is so important.

Yeah, yeah. Even the White House was talking about it, right?

Right. But once the White House is talking about tech, it must be reasonably mainstream, I think.

So maybe one question we like to ask these days: for Python developers in general that are interested in learning Rust, do you have any resources you would point towards, or any suggestions on how to dig into it? And maybe also whether to do it, or whether to just keep the Rust part for core libraries or so that leverage it under the hood?

I don't. I think that the whole idea should be.
I remember, I've talked about this a few times in talks. There is a world where we basically have Python as a control layer on lots of things written in performant languages. So you think about the lifecycle of an HTTP request: it comes in and the TLS termination is done in C or in Rust or whatever, then you do the routing, that should be in Rust, then you do the validation, that's Pydantic, so that's happening in Rust. Then you go off and you make a query to the database. All of the data, even the database client, might be written in some more performant language, but definitely all of the processing that goes on in the database, that's going to be in a more performant language. You get it back, you serialize with Pydantic or whatever else, that's happening in the fast language. And you have this situation where all of the application logic is written in Python, but it represents a tiny fraction of the amount of logic that needs to be executed. And therefore you can have something really fast that's easy to build in Python.

Should people go and learn Rust? Yeah, sure. It makes you a better developer. It's really enjoyable. It allows you to go and do stuff that, if you tried to do it in Python, would just grind to a halt. So it's great fun, but I don't think that most applications need it, like web applications.

I would say, yeah, so do it for the curiosity, to learn concepts and so on. And maybe also to see how a proper package manager would look like, or could look like. But then maybe you will not need it at work or so for quite a while.

I mean, obviously if you want to execute a line of code on every row in a Parquet file, you kind of want to do that in something like Rust, not in Python. But for most people that's not necessary. That's why we have abstractions, high level languages like Python and SQL, to basically decide what Rust code to run.

Yeah, that's clear. Cool.
So pivoting a bit to Pydantic as a company, congrats for making that happen. Do you want to talk a bit about how that happened? Was it an organic thing? I mean, I guess the widespread use has to do with that. How did that happen?

So yeah, I had had this plan to finish Pydantic v2. Originally I was ambitious, I thought it was going to take three months. I was like six months in, or eight months in, and I was like halfway done, and I was like, oh God, this is going to take much longer than I thought. And then I got this amazing email out of the blue from Bogomil at Sequoia, basically saying, I've heard about you from Sebastian. Again, Sebastian is my kind of guardian angel, got me the introduction. Bogomil had spoken to him and he had said, you should go and speak to Samuel. And yeah, I spoke to Bogomil a bunch and then spoke to a number of other VCs, and had the big meeting with Sequoia where all of the partners are there, kind of scary thing. But yeah, basically, I mean, I think it's fair to say they backed me to go and build a company, rather than particularly the ideas I was pitching then: that we could leverage both the install base of Pydantic, the mind share of Pydantic, and the kind of proof that I was good at building things that developers like using. And yeah, so basically from then on I emailed all the people who had contributed significantly to Pydantic and said, do you want to come and work with me? And I was lucky enough that some of the best maintainers, best contributors, yeah, agreed to come and join. And so that was how we started the company.

That's awesome. And how do you match the open source ideals with business needs? Is there any tension there, or is that just going pretty well?

I mean, as I say, we haven't released a product yet. We haven't made any money. So I can't say it's going too well until we have lots of revenue.
But I never had any intention of trying to commercialize Pydantic, the open source library, in the sense of having, like, you know, a free tier, or having, like, an insiders version where features come out first. It's too infrastructural in terms of its position as a library to want to do that. And we've all seen what's happened with companies like Redis. I mean, only time will tell where we get to in the end. But, you know, not to get into that debate too far, but, like, I think David Cramer of Sentry said no one's built a billion dollar business off open source alone, and they've gone and changed their license. My belief, my, like, pitch at trying to do it, is: leave Pydantic completely MIT licensed, completely available for anyone to use under a permissive license, and then use effectively the marketing value or the PR value, or, like, basically the trust people have in us as maintainers, as a way to launch commercial things which are, like, you know, openly charged for. They're not free, we're not claiming that they're free, and, like, we avoid the ambiguity that way. I have respect for everyone's other routes to do that, but, like, that's the way we're trying to do it: to basically go and build open source as a way to basically encourage people to use commercial things.

Yeah. So a couple of weeks ago, we had Marcelo on the podcast and he already said that there may be something released soon. Is there something you have today for us that you could showcase?

Yes. So I think by the time this goes out, we will have announced Pydantic Logfire. So it's an observability platform built on top of OpenTelemetry, like, kind of Python-first, though you can use it with any language because it's built on OpenTelemetry. But, like, we have a very opinionated SDK: pip install logfire, and you can get going in, like, one line of code.
And effectively, the idea is it's what I always hoped Python logging would be, but the standard library logging interface can't move forward like we can, and there's a whole bunch of innovation that's happened since that was invented in the, I guess, nineties. And so the idea is you can log anything from, like, sure, a string, but then you can log a Pydantic model or a datetime or a dataclass or even a DataFrame or something, and you can then go and see it in the interface. You have logfire.info and logfire.warn in the same way that you would on the logging module, but you also have logfire.span, which is a context manager, and that allows you to basically nest, put logging within a scope. I can show that on screen for those people who are watching on YouTube. But the power of that then is that you don't only have logging, it is actually then a trace. So then you can start measuring the performance of different bits of your application. You can imagine, in the context of an HTTP request, you wrap the whole HTTP request in a span, and now all of the logs related to a particular request are nested there. And you can therefore record as much information as you want without it basically becoming overwhelming to try and read it. And because of OpenTelemetry, we get lots of really nice things for free. So they have integrations with most of the popular ORMs or database connectors, so that you get information about every request you're making, every database query you're making, and how long that's taking, with no extra work.

Yeah, it sounds exciting, I would say. Let's have a look at it, right, Bob? Yeah, yeah, totally, show it.

So this is just a demo project that I'm loading now. Let me share my screen. So you see requests coming into this app now. This is just a relatively traditional web server with some LLM endpoints that, you see, take quite a long time.
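For readers rather than viewers, the "logging within a scope" idea Samuel describes can be sketched with nothing but the standard library. This toy is not the Logfire SDK (whose real calls include `logfire.info` and `logfire.span`); every name below is made up to illustrate how nesting logs under spans turns a flat log into a trace:

```python
import time
from contextlib import contextmanager

_stack = []   # names of the spans currently open
records = []  # captured (depth, message) pairs

def info(msg: str) -> None:
    """Log a message attached to whatever span is currently open."""
    records.append((len(_stack), msg))

@contextmanager
def span(name: str):
    """Open a named span; logs inside it are nested one level deeper."""
    info(name)
    _stack.append(name)
    start = time.perf_counter()
    try:
        yield
    finally:
        _stack.pop()
        info(f'{name} took {time.perf_counter() - start:.4f}s')

# an HTTP request wrapped in a span, with a nested database-query span
with span('GET /cities'):
    info('validating request')
    with span('db query'):
        info('SELECT * FROM cities')

for depth, msg in records:
    print('  ' * depth + msg)
```

The indented output shows why this scales: every log line carries its request context and timing for free, instead of landing in one undifferentiated stream.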
So some of these are taking like 13 seconds, 5 seconds, and then some more traditional, like, normal database queries that are taking like a few milliseconds. But the idea is, if we look here within this one, which is a simple, like, getting a list of items from a table view, we see the initial HTTP request, we look in details, we get information about all of the headers. But then we have what's called auto-tracing. So basically we can automatically put a span around any slow function, so that you get some idea of where within your application your given stuff is happening. And then you can see here that we have a query. So this is select star from cities, order by and limit, and we can see exactly how long that's taking and where that's happening within the journey of an HTTP request.

So what I need is the old fashioned view of an incredibly complex log file of everything happening in one enormous deluge to compare to. Because for me this looks like the obvious way to look at it, but I have spent a lot of my career trying to debug issues like: why is this particular query taking a long time? Or why is this endpoint slow? Which query is it? Which you get without even thinking by looking at Logfire.

Yeah, exactly. Nowadays you can maybe get this kind of information from databases like Snowflake or BigQuery or so, but you need to dig down into it and you need to make the link between the Python code and the respective queries. So that's not straightforward. And this seems like a very intuitive way to directly see, okay, this is the part where it takes long. Is there the possibility to link this with cloud databases or so, for additional information?

So in general you don't need to do anything more than what you would do in the Python code. So you'll literally do import logfire, logfire.configure, and then you'll do the logfire instrument calls, like instrument asyncpg, instrument FastAPI, instrument OpenAI, whatever else you want to instrument.
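Based on what Samuel describes, a minimal setup might look like the sketch below. Treat the helper names as an approximation of the SDK as of the time of recording; check the Logfire documentation for the current API, and note that running this for real requires a Logfire project and credentials:

```python
import logfire
from fastapi import FastAPI

logfire.configure()  # one-time setup; picks up your project credentials

app = FastAPI()
logfire.instrument_fastapi(app)  # a span per incoming HTTP request
logfire.instrument_asyncpg()     # a span per database query
logfire.instrument_openai()      # a span per LLM call
```

Because the instrumentation hooks into OpenTelemetry integrations, the spans for requests, queries and LLM calls nest automatically with no further changes to application code.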
And you also enable capturing of standard library logging, if you've already got that in your app, or Loguru or structlog, and you'll already start to. In general, that's enough to basically get enormous amounts of information. So you'll see in this request here we have, no, in this one here, you'll see that we have a mixture of, where am I looking, in here? Some of these, this comes specifically from the asyncpg instrumentation, you can see it here, but this one here is just from a standard Logfire call. So we're just basically doing cost equals, in this case the cost was zero, the cost of having run OpenAI on this particular day.

There are some more advanced features. So for example, there are some flags you can set within Postgres to do auto_explain. So you basically get the explain output for every single query you run, effectively for free, that you basically have to turn some switches and knobs in Postgres to get. I think that's not switched on for this demo project. But yeah, so there are a few more advanced things that you can do like that.

And because it's OpenTelemetry, we also get distributed tracing. So I think if we look at, is there an example here? There isn't an example on this particular screen, but these spans can go across different services. If you were instrumenting other services that were also using OpenTelemetry, you could effectively see it all within this one screen. Or if you had multiple services that are calling each other, instead of having to go to a different log view and basically try and count the milliseconds of where this request should have come from, it should all just be here on the one screen.

Nice. That's awesome. Yeah. And also, I mean, in that sense, when you came to develop Pydantic, you said there was some pain around, yeah, just making sure that the data is validated and so on. And now this is like a next step.
Let's say that you just make anything that uses Pydantic, or in that sense Logfire, more loggable, and solve that pain, let's say, with the setup.

Yeah. And I don't know if I have a particularly good example on the screen, I don't know that I do, but one of the things that Logfire, I guess you could predict this, gives you is, I think this is actually, where's an example? I know where it is, it's on table here. What you will see is we can also do, no, is it this one here? Oh yeah, here we are: we have Pydantic validation. So this endpoint of tables is returning cities, and here you see a view of the particular cities and their information. This is coming from the instrumentation of Pydantic. So as you can imagine, Logfire does a good job of instrumenting Pydantic. So if you have a bunch of validations going on, you can see the input data, the output data, and if there's an error, the validation error, shown on screen here. Because, obviously, errors are great. Validation that fails is great. Validation that fails and you can see why is even better.

Yeah, that's interesting.

And the last thing, I won't show it here actually, because I don't want to alienate those who are listening rather than watching, but the way we've built Logfire, we basically expose the SQL interface. Instead of having to learn a whole new DSL to go and query the data, you can go and write SQL to investigate your data. The idea is that Logfire is more than just observability, it's also effectively an analytics platform. Because, what's the simplest place to record data that you then want to go and analyze? It's in your code. And instead of having to put the data into S3 or build a Parquet file in S3, you just do logfire.info and then you've got data that you can then go and query with SQL.

Nice. Do you already have ideas, because it showed time just now, about maybe also linking the compute usage with that?
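The "query your logs with SQL" idea can be made concrete with a standard library toy. This is a stand-in for what the platform does at scale; the table and column names are invented for illustration and are not Logfire's real schema:

```python
import json
import sqlite3

# toy stand-in for a log/span store you can query with plain SQL
con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE records (message TEXT, attributes TEXT)')

def log(message: str, **attributes) -> None:
    """Record a message plus arbitrary structured attributes as JSON."""
    con.execute('INSERT INTO records VALUES (?, ?)',
                (message, json.dumps(attributes)))

log('openai call', cost=0.0, model='gpt-4')
log('openai call', cost=0.02, model='gpt-4')

# analytics straight from your logs: total cost per model
row = con.execute(
    "SELECT json_extract(attributes, '$.model') AS model, "
    "SUM(json_extract(attributes, '$.cost')) AS total "
    'FROM records GROUP BY model'
).fetchone()
print(row)
```

The design point Samuel makes is exactly this one: if structured log calls are already your data-recording layer, the same store doubles as an analytics database, with no separate export pipeline.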
I mean, if you set up your infrastructure, you could see for any point in time how much the CPU usage is, or maybe even memory, and like this, you could at least correlate it.

It's already being recorded. So it's another thing that OpenTelemetry gives us effectively for free: system metrics. They're all recorded. So we have the three traditional parts of observability: logging, tracing and metrics. We basically combine logging and tracing into one thing, so you basically get logging but with context. Then we also allow you to record metrics directly. For example, how many users are logged in at any one time would be, like, an up-down counter. But we already have basically out of the box system metrics being recorded. One of the things we need to go and do is basically be able to show that on that live view: what was the CPU at this point? We don't have a pretty display of that yet, but it's coming. Really excitingly, I was also speaking today to the CodeCarbon team about having basically a metric system for recording energy usage of your particular services. So you could get a view of how much your application is costing in terms of energy usage, or carbon in the end.

Yeah, absolutely. Yeah. Theoretically you could even optimize your code to be running when the carbon intensity is low, right? Something like that, if it's not crucial to run it right away. Yeah, nice. Very exciting.

Yeah. Maybe we can go on to the vision question. Like, this is a big deal for Pydantic and also, as we saw, kind of a logical extension. Yeah. What is your vision for the next five to ten years with Pydantic? Are you targeting any new markets or applications? And how do you prioritize new feature requests?

So Pydantic, the open source library, we have a bunch of things that we're still working on. We're going to stay on v2 for quite a while. We will release v3 at some point, but it won't be a complete rewrite.
It'll be basically a bunch of mostly quite trivial things you would never think about unless you were deep in the internals or a deep user of Pydantic, where we want to change the default, and that will basically be v3. We have some more performance improvements coming along. So we have, I think, an idea to store data in Rust for longer and basically defer constructing the Python objects. So in the case that you basically do validation of a Python model and then you serialize it to, say, JSON, you never have to instantiate the Python objects, which should improve performance significantly.

In terms of the company, obviously excited and nervous to see how Logfire lands, and depending on how that goes, that will obviously influence where we go as a company. As I said earlier, I believe in open source as a way of basically bringing people in to use commercial products. One of the reasons I believe in that is I think it makes economic sense, but also I love doing open source. There's a whole bunch of open source things I want to go and build, and open source libraries that I already maintain, like arq, that I want to breathe new life into. So there's an issue pinned on arq to basically rebuild that project. I started it back in 2015 when no one was doing async queuing and there was no good library. Now everyone's doing async and there's still no good queuing library. So I want to go and rebuild arq. I think we're in a good position to do some better things in terms of, like, how do you interact with generative AI? Obviously Pydantic is very widely used in that space. The fact that LLMs give you back unreliable data has, you know, been another reason, I think, for Pydantic's enormous growth. And so, yeah, there's lots of open source we want to do. There are even more commercial things we want to go and build. I don't know exactly what order we do them in. There are not enough hours in the day to do all the things I want to do.
So I don't know quite the order. Yeah, interesting. And on top of all that, you also have FastUI, right? Ravi, you wanted to ask about that? Yeah, exactly. As Pybites we are of course, let's say, a bit biased towards Python, and we love the idea of Pythonic JavaScript, if you want to call it like this. So the underlying, and also a little bit visionary, question: will we soon be able to do everything in Python? You already mentioned that a lot of lower level, performance related things will happen in different languages, but maybe you have Python as the orchestration language that can do everything at the highest level. So far there was always the mix between Python and JavaScript, for example for backend and frontend, but FastUI could be one of those tools that brings them more together, right? Absolutely. So I'll just show you. This is the app that we were looking at here. This was me wanting to put something together quickly to demo Logfire, and I built it with FastUI and it worked. It was incredibly easy to get going and get it all working, and we can even build a reasonably good clone of ChatGPT using FastUI. This is all just set up with FastUI, and it took me a few hours to build. As for FastUI, it started off as an experiment. We've been a bit busy, as you can imagine, and so it hasn't had as much love as it should. But I think we're back to working on that now, and I think we'll move forward with FastUI and do more stuff with it. I have some big ideas of how to make it even better as a way of building web applications, whether we have time to work on all those things or whether we keep it with the React based frontend. I know there are lots of HTMX fans, of which I am effectively one, who think web apps don't need to have as much code in JavaScript or in the browser, and I agree with that.
So I have some plans to basically get a lot more of FastUI to be running in Python and less of it to be React. Yeah, but again, it's another example of where I think we don't need to go commercialize it. We're not going to have FastUI premium, we're not going to offer a hosted version of it. We're just going to say, here's an awesome open source library; by the way, you might want to try our observability platform and whatever else we go and build. Yeah, amazing. That also brings up another question. You mentioned that you brought up FastUI. I also heard on the Talk Python podcast earlier this year that you developed this, had this idea, and also got it challenged by your team. And the idea was to see when it would be picked up: would it be good enough to replace other workflows? So could you say something about what your team looks like, how you bring up these new frameworks and challenge them, and what the state is right now, using FastUI already, or maybe a little bit later? So we use FastUI for our admin interface for Logfire. We have an admin interface that lets us see, for example, who has accounts and how much data each account is recording, and that's all built with FastUI. It also allows us to run ad hoc migrations and stuff like that. That's built with FastUI. My original hope was that we could use it to build a bunch of the CRUD components of Logfire, like the login screen, the list of your projects, adding a user to a project, stuff like that. But the team have quite a lot of autonomy, and the team effectively said no to that. The front end developers would much rather build that stuff their way, and that makes sense. And I think there's an open question as to whether or not FastUI would ever be right for building user facing applications like that. What's not in doubt is that it wasn't mature enough yet to do that for us. So how's our team made up?
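The core idea behind FastUI, as described above, is that the backend describes the UI as typed Python objects which are serialized to JSON for a React frontend to render. A minimal stdlib sketch of that pattern, with hypothetical component classes rather than FastUI's actual API:

```python
import json
from dataclasses import dataclass

# Sketch of the "declarative UI from Python" idea: typed component
# objects serialized to JSON for a JavaScript frontend to render.
# Heading/Button here are hypothetical, not FastUI's real components.

@dataclass
class Heading:
    text: str
    level: int = 1
    type: str = "Heading"  # tag so the frontend knows what to render

@dataclass
class Button:
    label: str
    type: str = "Button"

def render(components: list) -> str:
    """Serialize a component tree to the JSON the frontend consumes."""
    return json.dumps([vars(c) for c in components])

page = render([Heading(text="Admin"), Button(label="Run migration")])
```

The payoff is that the whole page definition lives in type-checked Python, while the browser-side renderer stays generic.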
Like I said, most of the people who joined I already knew. Almost everyone, apart from the two awesome front end developers, one of whom I found through a friend and one who saw us talking on Twitter, basically came from the open source community. So Marcelo, who you spoke to, he hadn't done so much work on Pydantic, but I knew him from lots of other libraries that we had worked on. He obviously maintains Uvicorn, I maintain watchfiles, which is used by Uvicorn, so we had interacted. All of the other guys had contributed one way or another to Pydantic, and that was how I knew them. So, yeah, we're strong in Python. We're lucky to have David Hewitt; he's obviously a Rust magician who maintains PyO3. We do a lot of Python and we will continue to. Depending on how fast Logfire grows, we'll probably need to be doing a lot more Rust for the performance side of things. We are probably in one of those spaces where you can't get away with everything being Python. Maybe that's not true, maybe we can stick with Python in some places for quite a time. But in just twelve weeks, Pybites elevates you from Python coder to confident developer. Build real world applications, enhance your portfolio, earn a professional certification showcasing tangible skills, and unlock career opportunities you might not even imagine right now. Apply now at pybit.es/pdm. Yeah, hopefully that answers the question a bit. Yeah, for sure, for sure. Yeah, and so, maybe it's still a bit early. Maybe in one year you could already say, okay, you start with FastUI. Either way, if you're a young developer now, or you have an idea, instead of starting with, let's say, Streamlit, which we like to do in our coaching program a lot, it's very simple, but it's also very limited.
So we like to start with Streamlit, for example, to have some quick wins, then create a FastAPI backend to have a proper database and backend setup. And then you could switch to FastUI, or maybe sooner you could even directly start with FastUI, like you said, and then maybe move from there to having FastUI for some components of the app and pure JavaScript or TypeScript, whatever, for the more sophisticated user facing parts. It's a smoother transition then, right? Yeah. There's some work going on in FastUI that I just haven't had time to work on hard, which is to basically make FastUI completely generic around custom types. Obviously the thing that FastUI gives most people is basically a way of building apps without having to write any JavaScript. But at its limit, it is basically a typing system, which means that you can use types that cross from Python through to TypeScript. At the moment that's fine if you're using the standard components that FastUI gives you, but if you define your own custom components, you're basically saying, there's a blob of JSON, let's document what that is, but it's not guaranteed at typing time. One of the ideas with FastUI is to make it generic around the custom component union, and therefore you get typing on all your custom types and custom components, which would allow you, even if you're not using its pre built frontend components, to still use it for your whole application, or for big chunks of it. But yeah, I'm not exactly sure when I'm going to get time to work on that. It's definitely one of the pipe dreams for FastUI, to get that typing to go further, as it were. Yeah, and a shout out to all the listeners: if you're considering contributing to open source, have a look at FastUI. You may be able to contribute there. It will be much appreciated.
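The "custom component union" idea can be sketched as a discriminated (tagged) union: each component carries a literal `type` tag, and the union over all component classes is what a library could be made generic over, so both Python type checkers and TypeScript can narrow on the tag. These classes are hypothetical illustrations, not FastUI's real types:

```python
from dataclasses import dataclass
from typing import Literal, Union

# Sketch of a discriminated union of UI components. The literal
# "type" field is the discriminator that lets tooling narrow the
# union safely on both the Python and TypeScript sides.

@dataclass
class Text:
    content: str
    type: Literal["text"] = "text"

@dataclass
class Chart:  # an app-specific custom component
    series: list
    type: Literal["chart"] = "chart"

Component = Union[Text, Chart]

def describe(component: Component) -> str:
    # Dispatch on the discriminator, as a frontend renderer would.
    return component.type

kind = describe(Chart(series=[1.0, 2.0]))
```

Making the framework generic over `Component` would mean user-defined members like `Chart` get the same end-to-end typing guarantees as the built-in ones, instead of being an undocumented blob of JSON.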
Actually, we're working on documentation at the moment, so it should be getting easier to get started with really soon. Yeah, awesome. We're getting towards the end and we still have a couple of questions. This one can be pretty rapid fire, because there's some Rust in there and we spoke enough about that, but I do want to get this sub question out of here: key skills you recommend for a Python developer in 2024, especially with an eye on contributing to open source, which can be overwhelming for people. So do you have one or two tips on how to make that easier for people, especially when they're new to the space? I'm not someone who reads books particularly; I just try stuff and see what works. And I'm sure that in some spaces that has meant I've banged my head against the wall for longer than I should have done, but I've just learned by doing, by trying things. And so: be curious, do that, be friendly, and accept that open source maintainers are overworked and will occasionally be grumpy. And we as open source maintainers should try not to be grumpy, but occasionally we are. So, yeah, no particular pearls of wisdom, I'm afraid. You could say the tech skills are learnable, but it's the mindset, right, of getting in there and accepting that you have to make mistakes and be vulnerable. That's really important. But I think that is learnable too. I think you can learn to be better as an open source maintainer in the non technical bits, or as an open source contributor in the non technical bits, as well as in the technical skills. That's a lot. Communication as well, right? Yeah, yeah. That introduces a little bit the next question, which is: do you yourself ever feel overwhelmed by the immense responsibility a crucial library like yours has in the Python ecosystem? Basically having all the requests from the different stakeholders, let's say, and managing all that.
And how do you maintain a sound work life balance? Because I think that's something that many Python developers struggle with. Yeah, I mean, I don't maintain a healthy work life balance right now. In the week or two before we launch our first product, I'm not going to claim I have a healthy work life balance, but in general I try and do things that I enjoy. One of the reasons we built Logfire, and not any of the other things we could have built from the point of view of Pydantic, was that one of my rules for starting a company was I want to be the customer, so I don't want to have to go and ask someone else, would you use this thing? There are tools we could have built in the anomaly detection and data cleansing space, the Great Expectations stuff, which is one space where Pydantic is used, and I'm not saying those things aren't useful. I've just not needed them in my life. So I would be going to someone else and saying, is this what you want? And I didn't want that. I wanted to build the thing that I wanted to use, so I never had to ask someone else whether it was right. And the same is true of Pydantic, for all its myriad of uses. In the end, it's mostly decided by what I and the team want it to be able to do, and my instinct on what makes sense. Yeah, that's actually, so keeping this a little bit more general for everyone: build something that you would like to have as a tool, and that's everything. You said that, Armin said that, Sebastián said that: they just built something where they felt there was a gap. They tried all the different tools that were available, but nothing really solved the problem like that. And David Cramer from Sentry said that too: basically, really have the developer as a customer, let's say, or as a user, which we are ourselves. Like you also now said, if you can use it yourself, then you can just align all the interests much better.
You want the developer to be happy, and therefore developers will use it, and if that works, then everybody wins. Yeah. And it's a lot easier as a guiding light. It's easier in those moments of, is anyone going to use this thing, to look at it and go, would I use it? Yeah, sure, I would. Whereas if you're unfortunate enough to be building for a market where you're not the customer, it must be scary, because you have moments of, would anyone use this? I don't know. One person said they would, one person said they wouldn't. I'm lucky that the person I really trust in the end, in a sense me, would find Logfire useful. And same with Pydantic, same with FastUI, same with arq, with watchfiles. I built these things because I wanted them, and out of some degree of curiosity about whether it could be done. Yeah, that aligns also with the next question we had: what would be your advice for entrepreneur minded developers that would like to turn their projects into businesses? I mean, that could be it, right? To scratch your own itch? Or is there something else you want to say to that? So I'm unusual in that Pydantic had been around for like five, six years before I raised money. And I know that there are lots of products or companies, particularly in the AI space, where they're an open source project, but they've been around for like four months before they start the company, and they're obviously building it very specifically to raise money. Yeah, my advice would be to scratch your own itch, but my other advice would be to say it's easy to think that innovation has finished in certain spaces, and actually it hasn't. I think the best corollary for generative AI, in my opinion, is the web: people thought it was going to be massive, and it was even bigger than they thought. And I think that the same is going to be true of generative AI.
But it would have been easy in 2001 to think all the innovation in the web is done. Like, XHR is out, people are building these responsive websites, Microsoft have built MVC 2, whatever, the innovation has stopped. But actually we were still twelve years away from someone inventing React, and we were, whatever it is, 15 or 16 years away from someone inventing FastAPI. And so I don't think that the abstractions on top of generative AI that we have today are going to be the same ones, or in the same shape, that we are using in ten years' time. And I think there's still enormous space for innovation in that space, as well as in lots of other things. It's all been done? Well, not so much, there's always room for more. Maybe it's all been done, maybe it hasn't. Think big, outside of the box. Yeah, maybe. One of the last questions: any interests or hobbies outside of programming work that you may not be able to do right now, but when the crunch time is over, you're looking forward to doing again? I have to say, bear in mind, a lot of my intellectual curiosity is taken up with coding. So yeah, I enjoy listening to quite a lot of politics podcasts and being quite involved in that, well, hearing about that a lot. But I have a young daughter and I have a busy job, so not as much as I would like, I'm afraid. Is there still time for some reading, or a book recommendation, even if it's technical? I could do a podcast recommendation, but it's not technical: The Rest Is Politics, which is a UK podcast, but it talks about a lot of international politics. I'm a big fan of that, so I would recommend that. Yeah, we can put that in the show notes. Yeah, sure. Good, good. That was it for the questions. So thanks for sharing all that insight, and thanks for all you do, it's really inspiring. Awesome. Thank you very much. Well, yeah, thanks a lot for having me on the show. It's been really fun. Likewise. Awesome. We hope you enjoyed this episode.
To hear more from us, go to Pybites friends, that is pybit.es/friends, and receive a free gift just for being a friend of the show. And to join our thriving community of Python programmers, go to Pybites community, that's pybit.es/community. We hope to see you there and catch you in the next episode.