The Architecture of Bubbly

Find out how we built Bubbly, from tech stack to design

Published on Sun Mar 21 2021

Authors: Valocode, Jacob Larfors, Ben Marsden, Oliver Frolovs, Nate Ham

Welcome to Bubbly Babble

A podcast discussing the challenges around continuous integration, continuous delivery and release readiness. In our first episode the Bubbly development team talk about the challenges around release readiness: having the confidence that your software is indeed ready to deploy. What are the challenges, how do we manage them, and why did they lead the team to develop a new tool to tackle them?

Episode 2 - The Architecture of Bubbly

For all the tech enthusiasts out there! We’re lifting the lid on how we went about building Bubbly, a tool designed to give you confidence in your release readiness process. Why did we choose certain technologies over others? How did we decide on the product architecture? And is there anything we would’ve done differently?

During this episode we discuss

  • What was the inspiration behind how to build Bubbly and the initial problems we were trying to solve? [01:15]
    • Hashicorp architecture
    • Golang
    • Terraform
    • HCL
  • Why we decided to use HCL (HashiCorp configuration language) instead of an SDK [07:46]
  • Schemaful vs Schemaless: do we go NoSQL or SQL? [11:37]
  • Why we chose GraphQL as a Query interface over the alternatives [14:53]
  • Monolith vs Microservice: how and why we found a balance between the two [20:23]
  • Why we used NATS messaging system [24:52]
  • Why we’ve fallen in love with Svelte for building the Bubbly UI [32:59]

Mentioned in the podcast

Bubbly is now in Open Source Beta. Check it out on GitHub and join us on GitHub Discussions!


Jacob: [00:00:00] Hello, welcome to our second episode of Bubbly Babble, where we discuss all things CI/CD and Release Readiness. Last time we spoke about why we built Bubbly and the origins of Bubbly. Today, we’re going to talk a little bit about how we built Bubbly and the architecture of Bubbly. I don’t think we’ll be speaking so much about Bubbly itself, but more about some of the technology choices and decisions we made, architectural choices and this sort of thing. So if you’re a general tech enthusiast, then we hope that this will be interesting for you to listen to. And maybe you can share with us your thoughts afterwards. And as per last time, I’m joined by the rest of the Bubbly team. So we’ve got Ben.

Ben: [00:01:00] Hi, Jacob. Happy to be here.

Jacob: [00:01:03] Awesome to have you here. We’ve got Oliver.

Oliver: [00:01:06] Hey, happy to be here.

Jacob: [00:01:08] Great to have you. And we’ve got Nate.

Nate: [00:01:12] Hi, how’s it going?

Jacob: [00:01:14] Cool. So let’s get stuck in. Just a quick recap on Bubbly. So when we talk about the problems and things that we needed to solve, there’s some context. Yeah, the problem of release readiness is what we were trying to solve. So people running CI pipelines and practicing CD and wanting to release quickly. Generating a lot of data, putting a lot of data in different tools, in different dashboards, different places, and wanting to aggregate that data together. And being able to make smarter decisions faster, and then ultimately release with more confidence. That’s the agenda of Bubbly. So we had a clean slate around October time, so six months ago, and we wanted to write a nice tool that people enjoyed using. One thing to mention here about when we started doing this is that actually all the people working on Bubbly are essentially Bubbly users as well. Which is a nice situation to be in. It felt like when we were developing this tool, building this product, that we would be the ones who were using it. And that made one very simple and important requirement for us, which we keep asking ourselves almost every day: do we actually want to use this tool? So we look back at what we’re building and ask, okay, are we still solving the problem? And do we want to use this tool to solve the problems? And regarding the inspiration: we work a lot with the HashiCorp stack and we like using HashiCorp tools, and I think there’s a few reasons for that. Maybe I could hand this over to Ben. I know you’re a HashiCorp fanboy. You want to maybe say a few words about what it is about the HashiCorp stack that you liked so much?

Ben: [00:02:56] Sure, it’s good to formalize my fanboy status into the record books! Yeah. I think there’s a lot going for it around HashiCorp and the various products that they have out there. At least for us, when we were deciding, or when we were taking inspiration, we loved the idea that HashiCorp Nomad and Consul have, where you basically have a single binary that is capable of doing different things depending on the different parameters that you give it. And then on top of that, of course, there’s HashiCorp Terraform and HCL, which are really clean ways of declaratively defining your infrastructure, and really friendly approaches to a world that is not actually that simple - cloud provisioning and this sort of stuff.

Jacob: [00:03:39] Then the kind of inspiration we drew from those tools would be this architecture of a single binary that you run with different feature flags, so different modes almost, to build your cluster or to build your deployment. And then the way that Terraform handles HCL and being able to work declaratively: quite high-level configs, but quite dynamic and powerful configs to then do stuff.

Ben: [00:04:05] Exactly doing stuff!

Jacob: [00:04:07] Yeah. Nice. Cool. Yeah. So I think that was the starting point; we were like, all right, we want to do the same thing. So with those ideas in mind, Golang became the choice. I don’t think any of us had that much experience with Golang before we started this. I had used it a little bit for a couple of projects, but nothing big. So we took a little bit of a gamble there, to just say, “okay, we’re going to do this in Golang”, and we haven’t regretted it at all. And I think we’ll talk a little bit more about the Golang ecosystem and things that we’ve managed to achieve with it as we go on. But one of the main things, as well as the whole HCL approach - the HashiCorp configuration language, which is the language behind how you write Terraform code and how you configure some of the HashiCorp tools - is that the parser for it is written in Golang and there isn’t a parser in any other language. So if we wanted to use HCL, then we’d have had to use Golang to some extent anyway. If we talk a little bit about the main challenges that we had with what we actually wanted Bubbly to do, we wanted to be able to, first of all, get data from lots of different places. We’re not talking big data here. We’re talking about test results, static analysis results, Jira tickets, pull requests, pipeline runs, this kind of data. But it’s coming from a whole bunch of different tools. Maybe coming from JSON files, XML files, sadly, and just about anywhere else. We wanted to get this data. We needed somehow to make relationships about that data too. So if we have two different testing tools we would need to know how to query the test results from these two different testing tools, even though the format of those results might be a little bit different. We’d also want to know that if we ran a specific version of a piece of software, so basically a Git commit for a repository.
And we ran different testing tools, we’d want to know how to query, to say, okay, are there any failing tests for any of the testing tools? For any of these versions of the software. So we needed to be able to make relationships across those different repositories, across these different versions and the different tools. So yeah, we want the ability to make relationships about the data, so that you could have a Git commit, a version of the repository, and we’d be able to associate test results from any tool with that version of the repository as well. And that way we can form complex queries, basically ask the database or ask Bubbly, “Hey, what about this?” And that raises point 3, which is the need for a query interface. So we’re talking about putting data in being point 1; making relationships about the data being point 2; and point 3 being some kind of queryable interface so that we can get the data out that we want, to build some dashboards and tables and graphs, and do all the good stuff that helps us with release readiness. If we start with the first point, the idea of pulling data out, we needed to create a way that people could define how to extract data. And we didn’t arrive at the idea of a data pipeline on day one; that was a natural progression. We started off just by hacking away with some different configs. But we needed a language to do this, a way to expose this to the end-user. And continuing on the HashiCorp theme, I guess I could give this one over to Ben to talk a little bit about that. We ended up going with the HashiCorp configuration language.

Ben: [00:07:46] Yeah, sure. I can certainly talk a bit about HCL, the HashiCorp configuration language, and some of the logic and the reasoning behind why we opted for this. I’d say our biggest reason for choosing HCL was that we really needed something that was human readable and easy to pick up by pretty much a myriad of people. We didn’t just want to focus on people who are familiar with Terraform, familiar with HashiCorp, and along on their DevOps journeys. And we also didn’t want to exclude developers from picking it up either. So we needed something that wasn’t scary, basically. I think JSON and YAML, while they’re great formats, aren’t exactly the most human readable. And as a result, I think it scares quite a few people off the idea of writing things like data pipelines, or pretty much anything, in pure JSON. And I think the amount of times I’ve heard someone shout about how they missed a space in their YAML file and spent three hours debugging was enough for us to know that YAML was not the right language for us. But I’d say another important aspect was that it couldn’t just be too simple. We needed it to be powerful enough that we could still write proper data transformations and things like this. So we wanted to bear in mind that later down the road, when people want to manipulate the data that they grab, they need a very flexible mechanism to do that. And HCL was pretty much the natural conclusion to that, in that it’s very human readable, but also very powerful when you dig deeper into it: it still provides functions and looping capabilities that you can use when you want to, for example, loop over data that you want to transform.
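
To make the “readable but still powerful” point concrete, here is a rough sketch of what an HCL-based data pipeline could look like. The block and attribute names here are hypothetical, invented for illustration - they are not Bubbly’s actual configuration syntax - but the for-expression is standard HCL:

```hcl
# Hypothetical pipeline config; block and attribute names are illustrative only.
extract "junit" {
  type = "xml"

  source {
    file = "test-results.xml"
  }
}

transform "failures" {
  # HCL for-expressions keep the config declarative while still letting
  # you filter and reshape the extracted data.
  failed_tests = [
    for tc in extract.junit.testcases : tc.name if tc.status == "failed"
  ]
}
```

The appeal over raw JSON or YAML is that the same file stays human readable while supporting expressions, functions and loops when the data transformation needs them.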

Jacob: [00:09:44] I guess it should be mentioned as well, maybe, that we did have some challenges working with HCL and writing a parser. We’d used Terraform and we saw the amazing things it was doing: how you could define variables, how you could define locals, how you could create modules and these kinds of things. And when we came to implementing this, I think we just assumed too much, maybe, that there would be a framework already out there to achieve these things. But it turns out that HCL is really nice, but it converts into some data type, like a struct in Golang, and pretty much everything else you have to code yourself.

Ben: [00:10:24] Yeah, I think we weren’t quite expecting the amount of effort from our side. Which well hindsight and all that.

Jacob: [00:10:31] Yeah, I think the funniest thing in hindsight, too, is that we built quite a comprehensive and complicated resolver, and basically a simple table to track all the variables and things, so that we could do the same things that Terraform does. And then when we came back to the parser a month or two later, we actually just removed everything, because we realized that we didn’t need it to achieve the simple thing that we did. Yeah, lesson learnt using HCL. And I hope the support for onboarding to HCL as more than just an alternative to JSON, but actually using it in a dynamic way, like you said - I hope that improves. Maybe we should actually write some things on this to start helping with that. So we provide a descriptive way then to define these data pipelines, to extract data using HCL. If we continue then onto the second topic that we needed to solve - the relationships. We had some discussions internally about how to build relationships within data. And I guess the first question that we came across was: should we make a schemaful model, so that there actually is a strict schema within Bubbly and you have to put the data into the schema? Or should we work with something that is more schemaless, where the schema is implied afterwards? So a SQL versus NoSQL type of discussion, but not limiting ourselves to using SQL or NoSQL. Potentially we could use a NoSQL database, but the schema would be enforced through Bubbly. So it was more of a: do we want something a bit stricter? Or do we want something that’s a bit more lenient, where we’ll gloss over any errors and maybe report them back? And we decided to go for the schemaful approach, so that you would have to define a schema in Bubbly, and you would have to follow that schema when you put data in and out.
And I think the reason behind that is that even if you go for the schemaless approach at the beginning, with NoSQL and things like this, you end up having to have a schema at some point anyway. So we felt like it was a better idea to have a schemaful approach, and we try and alleviate the headaches or the cumbersome tasks of working with a schema. We try and alleviate and fix those things rather than go schemaless and then try and make that work. So I don’t think there’s a right or wrong with this. But yeah, any comments on this from anyone? I see Oliver raising his hand here.

Oliver: [00:13:10] Yes. There was an episode on Gotime where they talked about it. It may have been the one on CockroachDB, and the guy from CockroachDB said, in relation to NoSQL databases and working without a schema, that in any sufficiently complex project you end up implementing half of SQL anyway. So you end up having some kind of enforcement, which a schema gives you, because if your needs are sophisticated enough, you just can’t get away with simple queries.

Jacob: [00:13:44] Yeah, I think that’s very true, and that’s probably the feeling we had: that we’re going to need this anyway at some point, so let’s just do it upfront and make it easier, rather than delay it or postpone it until a later point in time. But I think it’s nice as well that we’re not saying you can’t use a NoSQL database to store things in; the whole Bubbly pipeline and technology will take care of the schemaful approach to doing that. So it’s maybe the best of both worlds. I don’t know, maybe I’m being a bit too optimistic about it, but yeah, so far, so good. And maybe, Oliver, you want to continue on this, because I think the third challenge we had, about being able to extract the data after we’ve made these relationships, is very closely tied to this too. And it probably was another thing influencing the decision: if we have this schemaful approach, a strict schema, it makes it much easier to generate some kind of queryable interface on top of that, whereas if we didn’t have a schema... Maybe you want to talk a little bit about the queryable interface that we’ve created and the technologies behind that.

Oliver: [00:14:53] So the interface is based on the GraphQL language. And we had to find the technology to do the interface. We decided to go with GraphQL rather than, say, REST APIs because, first of all, there isn’t much choice. As far as I know, you can go either GraphQL or REST APIs. You don’t want to be inventing your own custom proprietary interface to the data; you want to do something industry wide, which people know how to use. And with REST APIs I have this opinion that they’re well-suited for situations where you know your data pretty well, and you don’t really have to pull different bits from here and there together in a single query. REST APIs take a lot of designing; API design is a big thing. If you don’t know upfront what your data requirements are going to be, or you want to mix multiple things and requirements change, you just end up with lots of endpoints, which is not very useful. So we picked GraphQL because GraphQL gives you the ability to be a bit more flexible with the queries. So you don’t have to know as much upfront about the structure of what you want to get out of your dataset. And with respect to GraphQL, we also have this ability to generate a GraphQL schema from the database schema. So the GraphQL interface is being generated from the database schema.

Jacob: [00:16:29] Bubbly schema, you mean?

Oliver: [00:16:30] The Bubbly schema. Sorry, yes. This is an interesting application of GraphQL, in fact, because a lot of examples, and most tutorials and books, even in the O’Reilly library, are all centered around designing your GraphQL schema manually and then writing resolvers. And there is not a lot of information out there on this next level of inception, so to speak: when you want to generate your GraphQL schema from something else more dynamic, in a very general way, and offer the resolvers in a more dynamic way too. So the resolvers get generated automatically, and this is one of the problems we’re still solving, I think relatively successfully. It’s an interesting problem to solve, because we can’t just have a typical GraphQL implementation with a single well-written resolver for all kinds of queries. We have to be able to generate those resolvers based on what the dataset is like. So it’s like metaprogramming of GraphQL, so to speak.

Jacob: [00:17:36] It’s been an interesting journey with GraphQL, because it’s a bit of an open book. Getting a really basic schema written for GraphQL is simple, and then writing a query for that. But when you start getting to more complicated GraphQL queries and wanting to start doing filtering and stuff, there isn’t really a golden rule about how you define arguments, for example, in GraphQL. There are some standard conventions that other GraphQL APIs use, and that you should follow, but it’s not really, “okay, here’s how you name your arguments, or this is the type of the arguments”. You can pretty much create whatever you want to suit your purposes, which, yeah, to me is a little bit dangerous, but also in our case it’s really good, because we’re not vanilla GraphQL in that sense. We’re GraphQL with some Red Bull. Because I know you like Red Bull, Oliver!

Oliver: [00:18:25] I used to! Yes, this is interesting, because GraphQL to me is a very light-touch spec. It’s not very prescriptive, so it doesn’t touch a lot of aspects. It defines the general foundations of how GraphQL queries work, and every sufficiently complex, sufficiently sophisticated GraphQL product starts adding extensions to it, like the filtering you mentioned. It’s implemented differently in different products: the kinds of filters that are supported, and even the way the syntax works, differ. And I think it’s at the same time a strength and a weakness of GraphQL. You’re quite right, it’s a danger, but it also gives you flexibility. It’s the case with any sophisticated tool: in order to have flexibility, you have to expose some of the more dangerous parts. So GraphQL is very flexible and you can do a lot of stuff with your custom extensions, or custom filters, but then you have the difficulty of having to implement them.

Jacob: [00:19:24] Yeah. Cool. Thanks Oliver. And would you say we’re still happy with the decision of GraphQL?

Oliver: [00:19:32] I would say yes, because I just can’t see how we could have a general method for generating REST APIs out of the Bubbly schema which would be flexible enough to accommodate all the use cases. GraphQL does what it’s meant to do very well, which is not over-providing the data and not under-providing the data. And not having so many endpoints that people tend to forget what they even were for. Endpoint hell, like DLL hell!

Jacob: [00:20:12] Yeah, awesome. Okay, we’ve covered the three problems, but we didn’t actually talk so much about how we solved them. We just talked about technologies, but we figured that’s more interesting to people listening. And now we’re getting onto a really interesting topic, which I’m going to hand to Nate to talk to us about. And that is the architecture of Bubbly in terms of deployment. The big question: is it a monolith or is it a microservice? And I guess it’s neither of those really, or more like a combination of the two. But Nate, what’s your take on this?

Nate: [00:20:43] Yeah, we were actually just having this discussion before we started here. Because we wanted to have that single binary philosophy in mind, but we also needed to stay away from that more outdated and monolithic approach. So we take the idea of microservices, like breaking things up into an agent, so that depending on the configuration the binary can behave as different parts of the microservice. We didn’t want to go full microservice, because then we’d have to configure a service mesh and worry about all that. I think Ben’s going to talk about our NATS streaming service in a little bit. But to just break it down a little, the binary can now behave as, say, an API server to handle those types of requests. It can behave as a data store. It can behave almost as a parser. We introduced the idea of an agent, which would give the binary different behavior. So we get the best of both worlds. We can have coupling in a good way, from the monolith, but then also the distributed separation of concerns that a microservice architecture would bring. So we wanted to use the term service-oriented architecture, leaning a bit more towards microservice if we had to choose which one it was.

Jacob: [00:21:48] It’s monorepo, macroservice, maybe, or like a...

Nate: [00:21:56] Yeah, exactly. So it’s taking the best of both worlds, I’d say.

Jacob: [00:22:00] Yeah. So we have a single binary, which when you run it in server mode is called an agent. And then that agent has different features that you enable, and those different features then run to fulfill the different services of the Bubbly deployment. And you need at least one of each of those services to complete a deployment. But then you can scale how many you want. Say I want 20 workers to handle the pipelines; you could say okay, give me 20 workers, but I only want one API server, because the API load is not going to be that high.
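
The single-binary, multi-feature idea can be sketched in Go roughly as follows. The flag names and service names here are illustrative, not Bubbly’s actual CLI: one process parses its flags, works out which features are enabled (with a dev-mode flag turning everything on), and runs each enabled feature as a goroutine.

```go
package main

import (
	"flag"
	"fmt"
	"sync"
)

// enabledFeatures resolves flags into the set of services this process
// should run. "single" is the dev mode that turns everything on at once.
// Feature names are hypothetical, for illustration only.
func enabledFeatures(api, worker, store, single bool) []string {
	var out []string
	if api || single {
		out = append(out, "api-server")
	}
	if worker || single {
		out = append(out, "worker")
	}
	if store || single {
		out = append(out, "store")
	}
	return out
}

func main() {
	api := flag.Bool("api-server", false, "run the API server feature")
	worker := flag.Bool("worker", false, "run the pipeline worker feature")
	store := flag.Bool("store", false, "run the data store feature")
	single := flag.Bool("single", false, "dev mode: run every feature in one process")
	flag.Parse()

	var wg sync.WaitGroup
	for _, name := range enabledFeatures(*api, *worker, *store, *single) {
		wg.Add(1)
		go func(n string) { // each feature is just a goroutine in one process
			defer wg.Done()
			fmt.Printf("%s service started\n", n)
		}(name)
	}
	wg.Wait()
}
```

Scaling then means running more copies of the same binary with only the flags you need, e.g. 20 processes with `-worker` and a single one with `-api-server`.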

Nate: [00:22:34] Depending on your needs, maybe you have a lot of pipelines, but not necessarily a lot of data. So you scale up the workers, but maybe only have one or two stores deployed.

Jacob: [00:22:43] Yeah, and this is going to be really interesting because, for those who don’t know, we will be running Bubbly with our own SaaS service. So scalability and multi-tenancy and these things have been in the architecture designs right from the beginning. But I’m really looking forward to seeing how this is going to play out when we start getting big loads and lots of users. Can we really create a StatefulSet in Kubernetes with auto scaling and things like this for the different services? And it’ll just balance itself out, because the features reflect the different things that Bubbly does. So if Bubbly is going to start doing more of a particular thing, like running pipelines, it’s going to need more workers, and then it will just scale the number of workers, but not need to scale anything else.

Nate: [00:23:28] So using the NATS architecture, because as long as you have that central node, you can scale it without necessarily having to let the other parts or the other agents know about the scaling. So it’s microservice in the sense that you could add 10 stores and not have to inform the API server that that change has taken place. So in that way it’s decoupled very nicely.

Jacob: [00:23:53] Yeah. Sounds really nice. I guess one last point before we pass it over to Ben, the HashiCorp fanboy, sorry, I’m really rubbing it in now. But yeah, one last point about this is that, and I think it’s another reason why the architecture we’ve chosen is nice, is that if you want to run Bubbly in dev mode or locally, just a local small deployment. Maybe on premise or something. You can just run the Bubbly agent in single server mode, and it enables all of the features and it will run those in different Go routines within the same process.

Nate: [00:24:29] Yeah, it’s actually pretty handy for testing and deploying. If you’d have like your local setup, you don’t have to worry about running, like maybe a giant distributed setup. But if you just run it, like you said, in a single mode that it runs quite nicely and you can set some of those environment variables locally, depending on your needs.

Jacob: [00:24:45] Yeah. I’d like to see a microservice architecture being able to achieve that easily without bringing in an orchestrator.

Nate: [00:24:50] Yeah, It’s really seamless.

Jacob: [00:24:52] Yeah. Cool. All right. Thanks Nate. I guess we’ll move over to NATS then, which has been spoken about, and people who know NATS are probably happy to hear the name. But there’s probably people who don’t know what NATS is. So maybe Ben, you want to kick us off by telling us about NATS, and maybe why we arrived at NATS.

Ben: [00:25:15] Sure thing. Yeah, I think, in some ways, the problems that you described and the things that you and Nate just talked about are a really nice summary as to why we ended up on NATS. For those who don’t know, NATS is basically a messaging system. And in our case, when we were debating what kind of architecture we went for - monolith, microservice, or whatever this hybrid is that we want to call ourselves - we also needed to bear in mind how the different independent services were going to communicate with one another inside each of these respective architectures. And so we’d heard a bit about event driven architectures and this sort of stuff. So we took a bit of a deep dive and explored what our options were, and eventually we came across NATS. And the thing we loved right off the bat was that it was clearly super lightweight. I don’t know exactly how big the NATS server is independently, but it’s something crazily small. So yeah, extremely lightweight and easy to chuck around. And it was very simple to actually embed within Bubbly itself. Nate and Jacob talked a bit about how we’ve actually implemented this, but pretty much, if you have a single server instance, you embed a NATS server into that deployment. So there’s no worrying about running it within a Docker Compose setup or within Kubernetes as a separate service. You just simply have it embedded within Bubbly. And I think that’s really powerful. Another thing that Nate and Jacob spoke a bit about was this idea of scaling and how we handle that within an event driven system. And I think NATS has a really healthy approach to this, in that they have a specific type of subscription, a queue subscription, whereby any event that is sent - published, to use the correct terminology - to more than one subscriber will actually only get picked up by one.
So the idea, or use case, for us would be, for example, if we want to run a specific pipeline, but we have many workers, because we’ve identified a need to scale up since we have a whole bunch of pipelines that need to be run. We can just subscribe using this special queue subscription type, and we can always be safe in the knowledge that only one worker will pick up that pipeline to be run. And again, that is a super simple interaction in NATS. It’s very well documented, and it pretty much fit our use case completely for how we wanted to approach an event driven architecture with Bubbly.
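
The queue subscription semantics - many subscribers, but each published event handled by exactly one of them - can be loosely imitated with plain Go channels. This is a stdlib analogy to make the behavior concrete, not how NATS itself is implemented; real NATS queue groups add subjects, wildcards and network delivery on top.

```go
package main

import (
	"fmt"
	"sync"
)

// dispatch fans events out to n workers sharing one channel and counts
// how many times each event was handled. Because the workers compete on
// the same channel, each event is delivered to exactly one worker -
// the same guarantee a NATS queue subscription gives a queue group.
func dispatch(events []string, workers int) map[string]int {
	ch := make(chan string)
	handled := make(map[string]int)
	var mu sync.Mutex
	var wg sync.WaitGroup

	for i := 0; i < workers; i++ { // workers in the same "queue group"
		wg.Add(1)
		go func() {
			defer wg.Done()
			for ev := range ch {
				mu.Lock()
				handled[ev]++ // count deliveries per event
				mu.Unlock()
			}
		}()
	}

	for _, ev := range events {
		ch <- ev // "publish" each pipeline event once
	}
	close(ch)
	wg.Wait()
	return handled
}

func main() {
	events := []string{"pipeline-1", "pipeline-2", "pipeline-3"}
	for ev, n := range dispatch(events, 3) {
		fmt.Printf("%s handled %d time(s)\n", ev, n) // each exactly once
	}
}
```

With a plain (non-queue) subscription, by contrast, every subscriber would receive every event - which is exactly what you do not want when scaling out pipeline workers.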

Jacob: [00:27:56] Sounds really nice, Ben, thanks for the summary and explanation as well. I guess there are some alternatives to NATS. There’s one very well known one - I want to say alternative, not really competitor in any way - because Apache Kafka is the industry Pub/Sub streaming service that most people seem to use. And I’m really glad that we discovered NATS, actually. I’m not saying that NATS is better than Kafka in any way, but this ability to embed NATS in Go, and how simple it is to work with and use, I think really supported our needs and our use case. Because we’re not building a streaming service, but we want all the nice bits of a streaming service within Bubbly. So we shouldn’t need to invest lots into infrastructure and lots into knowledge, learning this complex beast, which is how I see Kafka, to achieve something simple for us. Do you agree with my view there between the two? And do you think NATS was the better choice for us?

Ben: [00:28:58] I think so. Without testing Kafka - and I know there’s a well-regarded Go implementation, Jocko by Travis Jeffery, which I did have a scope around initially - I couldn’t say for sure, but I can’t help but feel as though NATS really fits our use case, in that it’s very simple: it has a few core features and they fit our use cases pretty perfectly. And I have no complaints either with how we use NATS or with its performance. So it feels to me pretty perfect, and I can’t see why I would recommend going a different way.

Jacob: [00:29:34] Yeah, that’s really nice. And I guess the other topic then - I think Nate mentioned a little bit about building a service mesh. Usually when you have a microservice type architecture, or this kind of hybrid architecture that we have, you have different services fulfilling different purposes. You might have an API server sitting in the foreground, and that’s the one that external things talk with. You’d have some kind of service mesh that would route the messages through and make sure they reached the right service, and you might put some rules in there, and you might implement things like: if the service or the request isn’t fulfilled, then it gets resent, and this sort of stuff. We haven’t needed to do any of that, because we’ve put NATS in the middle. So NATS is our service mesh, or is fulfilling our needs for a service mesh.

Ben: [00:30:24] Exactly. Yeah, I think NATS is also a perfect fit because we decided to go with this slightly hybrid architecture approach. And with NATS streaming, for example, this notion of at-least-once delivery and things like this, which perhaps is a use case for traditional service meshes, is not relevant, or at least is solved by NATS.

Jacob: [00:30:45] Yeah, which is really nice, because my experience with service meshes, when I’ve been setting them up briefly for things... well, I’ll just say that NATS was easy. We ran a proof of concept with NATS. I think I ran the initial one, actually, and in about half an hour I had our own Golang binary built, starting an HTTP server, with the ability to create NATS subscriptions and publish. And it was like, wow, let’s do this. We already talked a little bit about deploying Bubbly as well, which was another topic we had to discuss. And I think we covered this with the agents and the really nice rule - it is becoming a rule now, I think we need to write this everywhere - that you need at least one of each agent type, or each feature of the agents, to build a Bubbly cluster. But then you can scale those individual features, or individual agent types, however you want, really. I think it’s a really nice deployment model. And we haven’t needed to worry about things like Raft consensus between the different Bubbly services, or again the service mesh or anything, because NATS is at the center of it all. Probably in a bigger scale deployment you would have a dedicated NATS server, so you wouldn’t use the built-in one; the built-in one is mainly for the simple use case of Bubbly. So you’d have a dedicated NATS server sitting in the middle, and that can scale in turn too. And that obviously has things like Raft consensus built in to help it communicate with and control the different instances that start. And yeah, we haven’t really settled completely on a database yet, at least for the SaaS version of Bubbly that we’re going to host. We still have a little bit of time to figure that out. But we’ve gone down the road of supporting SQL as the main database server type. I should be a bit careful what I say about SQL and try not to generalize too much.
But yeah, we went with Postgres just to get started, and then we found CockroachDB, also through the Go Time podcast. And now we're building against CockroachDB, which is what they call NewSQL. I'm not sure if they coined that or where it came from, but it's like a SQL server with scalability built in from the ground up. Yeah. And the final topic we wanted to talk about today was a little bit about the Bubbly UI. We've started to run a couple of pilots now with a few customers, so we're getting the data, getting an understanding of what pilot customers want to do, and seeing how we put Bubbly into the mix to read test results, OSS libraries and licenses, these sorts of things, and display the data that they want. And we very quickly learned that this Bubbly schema we have defined, the way you make relationships between different data, is quite hierarchical, in that you could define a product level, you could define projects within a product, you could define software repositories within a project. And then within the software repository you might have things like test results, OSS licenses, libraries and components, and this sort of stuff. And this hierarchy of data is really powerful, so we should use it. And it wasn't very easy to use with existing dashboard solutions like Grafana and Kibana. We piloted mostly with Grafana to begin with, which is great for building your single-view type dashboards. You can put a whole ton of panels and things on there, but if you want this hierarchical data structure, then it becomes a bit of work to shoehorn that in. So we've started to build our own Bubbly UI to support this primary use case. And then obviously you can have Kibana or Grafana on the side as well, if you already have existing dashboards. But for the technology behind the Bubbly UI, we've gone with Svelte, so SvelteJS. I think Svelte, as well, came from the Go Time podcast; that's where we first heard about it.
And I guess any developer in general, but especially front end developers, if you haven't heard about Svelte yet, I would really highly recommend you go check it out. It's a really interesting piece of technology, and none of us in the team are very strong front end developers. So I think we were dreading having to pull out React. I think Nate, you're more of an Angular guy, and a few of us have experience working with React. And I think we were all dreading a little bit having to dust off our front end skills and start building something with those frameworks. But then Svelte came along and it was like, oh, this is really easy. It's compiled. And just the way you write your components, the way you write your UIs, is so intuitive and so simple. The way you write reactivity in your UI is amazing as well. It has stores just built in, so you can store values in this thing called a Svelte store, and it's just beautiful. I think the last feature I had to use was context as well: I had some component on a higher level, so basically in the layout component, which handles authentication and the nav bar and things. I had a value there, which is the logged-in user, and at some arbitrary point in the descendants, so a child of a child, or a child of a child of a child type component (I didn't really care what the level was), I needed to access this user. Svelte has this thing called a context, and you can store things in there. But I guess the point I'm getting at is that these kinds of use cases, which have been developing over the modern era of UI development, it seems like Svelte has addressed a lot of them, maybe all of them, and built them natively into the framework. Because it's more than just a DOM renderer; it's a whole package for managing UI development. And that's coming from me. You can hear that I'm excited about UI development, which is probably the first time in my life I can say that.
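The stores mentioned above follow a deliberately tiny contract: any object with a `subscribe` method that immediately pushes the current value to the subscriber and returns an unsubscribe function; writable stores add `set` and `update`. That contract is easy to sketch in plain JavaScript. This is an illustrative re-implementation of the idea, not Svelte's actual source:

```javascript
// A minimal re-implementation of the Svelte writable-store contract.
function writable(value) {
  const subscribers = new Set();
  return {
    // subscribe: immediately push the current value, return unsubscribe.
    subscribe(fn) {
      subscribers.add(fn);
      fn(value);
      return () => subscribers.delete(fn);
    },
    // set: replace the value and notify every subscriber.
    set(next) {
      value = next;
      subscribers.forEach((fn) => fn(value));
    },
    // update: derive the next value from the current one.
    update(fn) {
      this.set(fn(value));
    },
  };
}

// Usage, loosely following the logged-in-user example above:
const user = writable(null);
const seen = [];
const unsubscribe = user.subscribe((u) => seen.push(u));
user.set({ name: "jacob" });
unsubscribe();
// seen now holds [null, {name: "jacob"}]: the initial value pushed
// on subscribe, then the value pushed by set.
```

The simplicity of the contract is the point: anything satisfying it, including custom objects, can be consumed by Svelte components as a store.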
Yeah, I guess one bad thing maybe to mention about Svelte is that it's new. It hasn't been around that long, maybe a couple of years now. There are more and more projects coming out using it, so you get more and more references to learn from. But one main point is that a lot of libraries don't have native integrations with Svelte yet, so you have to roll your own or come up with your own creative ways of including things. And there aren't many blogs or other resources around it just yet. But at the same time, because of its simplicity, I think it's quite intuitive a lot of the time to do the things you need to do. Anyone want to add anything to that about Svelte, if you've had a little play with it?

Ben: [00:37:26] I would say, first, that's a great summary. At least from my interactions with Svelte, I can mirror a lot of what you said. When I was looking internally for a justification for Svelte, I came across this really nice post called Why Svelte (very accurate title). And I think I will just say that for any front end devs out there looking to consider Svelte, I would really point them in that direction. I think we'll probably put a link at the bottom of this podcast. But yeah, I would just say that Svelte is a really cool way of approaching front end development that makes it really accessible to people like us. And when I say people like us, I mean not your typical front end guys, but more your DevOps engineers and your infrastructure engineers, this sort of stuff. I think if we can come together and write a really nice UI with Svelte, then anyone can.

Jacob: [00:38:15] I guess the point is that we know HTML, and we know CSS, and we know JavaScript, as much as we need to anyway. But if you want to know React or Angular, then you really have to learn the frameworks, and that takes time. Whereas the learning curve with Svelte is so low. And I know a lot of people compare the learning curve and some of the syntax between Vue.js and Svelte. I think a really big part for me, at least, was that I just looked at it and it just clicked. It was like, oh okay, I could go away and write this now without really having to learn the framework. And I didn't get that with React. When I was doing some things with React, you really had to learn React, at least to a certain extent.

Ben: [00:39:04] Yeah, the dollar sign syntax in Svelte, for example, is like the epitome of simplicity in implementing reactivity. And it's just there; you look at it and think, wow, why hasn't it always been this simple?
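For readers following along, the dollar sign reactivity Ben refers to looks like this in a Svelte component (an illustrative snippet, not taken from the Bubbly UI):

```svelte
<script>
  let count = 0;

  // `$:` marks a reactive declaration: whenever `count` changes,
  // Svelte re-evaluates `doubled` and re-renders any markup using it.
  $: doubled = count * 2;
</script>

<button on:click={() => (count += 1)}>
  clicked {count} times, doubled is {doubled}
</button>
```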

Jacob: [00:39:17] Yeah, for people listening who don't know what that means: if you want the UI to react to some values changing, then you can just prepend the declaration of a variable with a dollar sign. It means that any variables used in the assignment of that variable will be marked as reactive, and if they change, Svelte will automatically re-evaluate the variable you just declared and then re-render the UI wherever that variable is used. And that's all you need to do. If that sounds exciting, go check the tutorials on the website, because there are loads more little things like that, which, yeah, at least got me excited, but I get excited about technology quite easily. Okay, I think we've run over time again a little bit today, which is maybe not such a bad thing, because at least I've enjoyed the chat with you guys. And it's always nice to reflect on things you've been doing and decisions you've been making, and this was a good time for us to do so. I guess maybe one last open question: is there anything that we've discussed today, or anything that we haven't discussed today, any decisions we've made about Bubbly that you regret, or anything that generally you're a little bit concerned about, this kind of thing? Or does it feel like generally we've been making pretty good decisions about the frameworks, the scripting languages, the architecture?

Ben: [00:40:42] In general, I'd say I've been very happy. I think still, and this will develop over time, but HCL in general as the language of choice for us is quite a big bet in a way, because even though you can see it's growing in popularity and things like this, there are also alternatives in the running. And this idea of infrastructure as code can be interpreted in lots of different ways. If you ask a developer, then they might want to write natively in JavaScript or TypeScript or Python. And in some ways I feel that betting on HCL is, I think, the right decision for right now. But it also has its downsides, in that we now expect people to learn what is essentially a different language, albeit one that we think is quite easy to pick up. Nonetheless, it's still a new thing. And so I think it's important for us to reflect on, okay, what's it going to take to get people to enjoy using Bubbly? And to always think back and re-evaluate: okay, is HCL the solution for us, or do we really want to be native to what developers are writing in their everyday lives?

Jacob: [00:41:59] Yeah, it's a really good topic. And I think we could almost do an entire podcast just discussing the declarative versus SDK type of approach.

Ben: [00:42:08] I think so.

Jacob: [00:42:09] So maybe we should just let the dust settle there before we end up starting that discussion and going off on a tangent. But yeah, I guess even down the declarative path there are things like Jsonnet, or however it's pronounced, which is like dynamic JSON, so it compiles to JSON in the end. And the encoders and decoders for JSON in Golang, for example, well, basically in any language, are amazing. There's massive support in the ecosystem for using JSON. And yeah, there are favorable things about other DSL-type languages as well. So yeah, I think that's a good point raised there.
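For context, Jsonnet layers variables, functions and references on top of JSON syntax and always evaluates down to plain JSON. A tiny illustrative fragment (the field names here are hypothetical, not from Bubbly):

```jsonnet
// Local bindings and functions remove JSON's repetition;
// evaluating this file produces plain JSON.
local repo(name) = { name: name, has_tests: true };

{
  project: 'bubbly',
  repos: [repo('frontend'), repo('backend')],
}
```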

Oliver: [00:42:47] I must say, though, that JSON is not a DSL. I know people use it as such, but there is a book written by one of the authors of the JSON format, basically about what it's for and why it's designed the way it is. It's a data exchange format, and a lot of the shortcomings that JSON has come from that fact. It's a simple thing, and this is why it has become so popular. But I think HCL occupies a different niche. It's a completely different thing in itself, which is meant to be a configuration language, while JSON is meant to be a simple data exchange format. And there are a lot of implications from that.

Jacob: [00:43:30] Yeah, but if you couple JSON with Jsonnet, doesn't that then have more of a declarative feel?

Oliver: [00:43:36] Yes, but then Jsonnet is a second tool needed in your toolbox. By itself, JSON is less flexible.

Jacob: [00:43:46] Yeah, but then Jsonnet compiles to JSON in the end. So I guess that's kind of the point: if you wanted to use Jsonnet, even though it's another tool, in the end it's going to be JSON, and there's a massive ecosystem around JSON. I think having the ecosystem is good. But then, yeah, I guess to your point: is it good that it's compiling to JSON in the end, for our purposes, where you want more of a declarative scripting interface?

Oliver: [00:44:17] In a more philosophical way: if you compile something to JSON, which is verbose, because JSON is what it is, a format for data exchange, then other tools can in theory use it, but people start building hacky solutions on top of that. Oh, I get JSON, I can twist and turn it into something I want it to be, but it's not meant for that. And yes, there is an ecosystem, but in a way you would be promoting, or what would you call it, nudging people towards suboptimal solutions, because they will be using all sorts of tools to twist that JSON output into something else.

Jacob: [00:44:55] Yeah. I guess we will be revisiting this topic sometime in the near future, and by near future I mean like less than five years. Yeah. Ben, did you want to say something?

Ben: [00:45:08] Yeah, I'd also just like to say that we completely regret choosing Go. No, just kidding.

Jacob: [00:45:18] Yeah. What was the new language called that somebody posted in Slack recently?

Ben: [00:45:24] V, right? I think.

Jacob: [00:45:26] Vlang, yeah. I don't think we'll be revisiting that choice anytime soon, but...

Ben: [00:45:30] I agree.

Jacob: [00:45:32] But good joke! Anything from your side, Nate, that comes to mind? Anything that frustrates you in your daily life, even if it's not to do with Bubbly?

Oliver: [00:45:46] I think Nate might be away from keyboard.

Ben: [00:45:49] Actually, I’d like to think that he’s just extremely happy and has nothing to complain about.

Jacob: [00:45:53] Yeah, Nate is a positive guy. So probably on that positive note, the idea and the feeling of being content, I think we should get this wrapped up. Big thanks for joining this podcast, Nate and Oliver and Ben.

Ben: [00:46:10] Thanks for having me.

Oliver: [00:46:12] Thanks for having me.

Jacob: [00:46:13] Until next time, everyone, stay safe.



Jacob Larfors, Creator and Lead Visionary, Bubbly
Ben Marsden, Full Stack Dev, Bubbly
Oliver Frolovs, Nocturnal, Bubbly
Nate Ham, Senior Software Architect, Bubbly
