On this week’s episode of The Data Stack Show, Kostas Pardalis and Eric Dodds are joined by Slapdash co-founder Ivan Kanevski. Slapdash describes itself as the operating system for work. Slapdash emphasizes reducing the time people spend controlling their computer in relation to the time they spend expressing their intent.
Key topics discussed were:
- Starting Slapdash and expanding on tools from working at Facebook (3:31)
- Being client agnostic and working with the tools that people bring to the job (7:35)
- Distinctions between mouse-centric and keyboard-centric users (12:58)
- Slapdash’s approach to collecting data (16:08)
- Building Slapdash to scale and using Postgres (19:45)
- Using a graph model and a focus on efficiency (24:50)
- Challenges of reducing latency (29:35)
- Opening up Slapdash to be programmable (38:17)
The Data Stack Show is a weekly podcast powered by RudderStack. Each week we’ll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.
RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.
Eric Dodds: [00:00:00] Welcome back to The Data Stack Show here with Eric Dodds and Kostas Pardalis. Today, we have a fascinating couple of guests for you. They’re from a company called Slapdash.
Slapdash says that they are building the operating system for work. If you’ve ever used a tool like Alfred or sort of automated workflows on your desktop from the command line, this will be a really interesting product for you. Kostas and I have actually been using it for the past couple of months.
And we had so many interesting questions about how it worked that we reached out to the founders and asked if they would join us and let us ask them all the questions. So, really interesting. I think one of the things that I’m interested in is that, kind of like IFTTT when we talked with them, there are different categories of data in the product.
One is the user data that represents user actions. And then the other is that there are a bunch of jobs running because there are tons of integrations. So, I’m excited to ask them how they manage those two different categories of data. Kostas, what are you interested in from an engineering perspective specifically?
Kostas Pardalis: [00:01:22] Yeah, I think just the use of the words operating system is something that resonates a lot with me. You know, Eric, operating systems have two important characteristics. One is their complexity. They are extremely complex pieces of software. And the other thing is abstraction. They are like masterpieces of abstraction: if you think about it, they take the hardware, the actual silicon, and make it available so that people at the end can just go there and browse a file system and see a picture.
So, I understand that the reason they chose these terms, of course, is because it makes it easier for people to understand the scope of the product.
But I think that we are going to have a very interesting discussion about how they managed to build a very, very complex system, because interacting with all these different cloud applications and creating an open and extensible platform is something extremely complex. So, it will be very interesting to see what kind of abstractions they managed to put in place to be able to interact with all this different data and these different services and create a very unified and smooth experience for the customer at the end. So, I’m very excited to see what Ivan has to say about the product and how they approached the development of Slapdash.
Eric Dodds: [00:02:47] I agree. And I’m also going to ask them, if we have time, how in the world they made searching Google Drive files faster and better than Google did, because that’s a pretty amazing feat. So, let’s dive in and talk with Ivan and Lester.
Kostas Pardalis: [00:03:04] Let’s do it.
Eric Dodds: [00:03:05] Lester and Ivan. Welcome to The Data Stack Show. We’re really excited to have you.
Ivan Kanevski: [00:03:11] Cool. Thank you.
Lester Lee: [00:03:11] Thank you.
Eric Dodds: [00:03:13] I’d love to start out with just the quick story. I know we like to dig into the tech, but let’s have a quick story on where the idea of Slapdash came from, kind of the journey you’ve been on, and where you’re at today as a company.
Ivan Kanevski: [00:03:29] Cool. I can take a crack at that.
So Slapdash, yeah, kind of the genesis of it came from working at a big company like Facebook, which is where I was before I left to start Slapdash. And one of the unique parts about working for a company like that is there’s a dedicated team of about a hundred engineers that’s focused on building productivity tools for the rest of the company.
And so, once you kind of get acclimated to using these pretty unique tools when you leave Facebook it feels like you’re missing something. So, effectively, you know, what, what happened was, as soon as I left Facebook, I was interested in sort of building tooling and immediately I ran into effectively, this notion that I’m missing something.
And what made Facebook’s internal stack so special is it was really focused on integrating the whole information space. So, if I had a question about what the company knows about a certain customer, it was one search away. If I wanted to answer a question about a colleague, it was also a search away.
So, this notion of having an integrated information space for work was really compelling. And as we started to sort of understand what it means to build something like this outside of Facebook, it was very clear. What we had to do is we had to connect the cloud applications that people use on a day to day basis.
And so, you know, after, call it, two years of prototyping, building the team, and actually building the product, we find ourselves at a point where we can kind of see clearly what we’re trying to achieve with Slapdash. And the way we frame it today is: what we’re looking to achieve is to cut down the amount of time that people spend controlling their computer in relation to the time they spend expressing their intent.
And I can kind of illuminate that with a quick example. So today, for example, if you want to file an issue on GitHub, you’re opening a browser, you’re navigating to the repository, you’re clicking a new issue button, and then you’re finally starting to type the title of the task. Now all that time that you just spent was effectively you just controlling the computer.
The actual intent you had was to start writing that task title, that task description. So, with Slapdash, we’re trying to rethink the physics of what it means to use cloud applications, restore some of the affordances that people lost when software transitioned from desktop to cloud, and in the end, you know, make people more productive with the tools they already use.
So that’s kind of, call it, I mean, I think we can dig into the component parts, and it’s always interesting to lean into metaphors and similes to describe what Slapdash provides, because it’s quite a new category. But I’ll pause there for any elucidation that I can offer.
Eric Dodds: [00:06:00] Yeah. I mean, to me, it’s fascinating. I’ve been using the product for the past two months, and I had this moment where I guess I just had never thought about the fact that computer input really hasn’t changed in a significant way in a really long time. I mean, you have gestures and some trackpad-type things that have come out that are interesting.
And I think helpful to some extent, but you know, it hasn’t really changed that much. And so to me it was just great because I thought, man, this is just a space that is ripe for change.
Well, I’ll kick off getting into the technical stuff by asking kind of a leading question that will probably inspire Kostas to ask a bunch more questions. The app runs on your desktop. So, it’s this interesting paradigm: you’re building essentially an interface to cloud tools, but it runs on a computer. I know that from a technical standpoint, that’s changed a lot, you know, even from five years ago, when you think about actually building software that runs on a computer operating system. But I would love to know, as you thought about building something that runs on someone’s computer, as opposed to just a cloud web app: what were some of the things you had to think about in terms of architecting your app that you wouldn’t necessarily face just building, you know, a normal quote-unquote cloud application that runs in the browser?
Ivan Kanevski: [00:07:35] Totally. So, I think one of the things that’s meaningful about Slapdash is that we have this internal philosophy where we like to say that we are client agnostic.
And what that means is we actually don’t care where people use Slapdash, you know. In some sense, Slapdash allows you to bring your own tools to the job, right? So whatever project management tool you use, we’ll support it. Whatever document editing tool, we’ll support it. We’re not trying to displace those tools. But just as important is where you’d like to work.
So, some people are very comfortable in the browser. Some people are more comfortable in Slack. And of course, everybody kind of brings their own operating system to bear. So, because we’re very much focused on speed, one of the things that we wanted to do is bring Slapdash close to where people work.
So, in other words, the client, where it’s actually deployed, is kind of less material. However, we think the desktop experience is the best way to experience Slapdash. And the reason why is that to really take a leap in terms of changing the physics of how you work with cloud applications, you need to connect three layers.
Number one, you need to connect the data layer, which is the structure and the contents of all the applications you use. Number two, you need to connect the browser layer, because frankly, most of your work still happens in a browser. And the third part is you need to bridge the desktop layer as well, because ultimately that is the pane through which you interface with everything else, including the browser.
And in terms of our architectural considerations, one of the things that was important to us was to be able to have this really broad footprint. And so, to do that, we of course leaned into web technologies, and we actually spent a lot of time getting good at Electron, which is what’s responsible for delivering the desktop app experience.
And the other aspect of this, too, is thinking about the interaction model. So, in some sense, even though metaphorically we’re building this operating system for work, most operating systems can be reduced down to kind of a command line interface. And the neat part with how we’re designing the interaction model is that, you know, we can effectively invoke Slapdash through something as simple as Slack chat, or we can invoke it within the Chrome location bar.
So that’s kind of how we thought about it. So, in short: desktop technologies, with an emphasis on broad and heterogeneous deployment.
Kostas Pardalis: [00:09:56] That’s very interesting, Ivan. I have a question, and I would like to start, let’s say, going top to down through the different layers that you mentioned earlier. I would like to ask you about the experience of introducing this command line experience on the desktop, which for me, and probably for every engineer out there who needs to use a command line in their everyday work, feels pretty natural. And personally, I really enjoy it, and it reminds me, because I’m also quite old, so I remember starting with vi and just the terminal, then going to user interfaces, then going back, and then trying to match these things together. And I find this evolution extremely interesting. But from your experience also with your users, how do you see people adopting this new part, the command line, as part of the graphical user interface? What are the challenges, and what do you find really exciting about it, especially for non-technical people?
Ivan Kanevski: [00:11:00] Definitely. Yeah. So, I think what we’re finding, and once again, our thesis is about expressiveness, right? In other words, choose the apps and engage with Slapdash where you work. And the other part of what we do is we have this focus on ergonomics. So, this idea that different people have, call it, different capacities, and they use the computer in different ways.
So, in general, on our end, we categorize people into two categories. There are the keyboard-first individuals and the people that are more comfortable with a mouse. And we’re sensitive to provide affordances for both those people. Now in terms of the major problem, or the major sort of delta where Slapdash really augments your workflow, the fastest way is of course to learn our command line interface.
But generally what happens is that, oftentimes, you know, the kind of early adopters that are very keyboard centric will pick up Slapdash and kind of fall in love with the command bar, which is what we call it. But then they’ll find themselves discovering and using the rest of the product, even in more conventional ways.
So in other words, the command bar oftentimes acts as an interesting entry point, but plenty of people find value in just having this kind of unified surface for their cloud applications. So I’m not sure if that touches upon specifically what you asked, but I’m happy to clarify or dig in further.
Kostas Pardalis: [00:12:23] Yeah. So I mean, okay. That sounds like a distinction, and it makes a lot of sense. And maybe it’s also a little bit early for the product because it’s still a new product, but how do you educate people from one group to also adopt the other, or do you not care about that? Like someone who’s not keyboard centric, right, because okay, working with a keyboard is by definition more efficient, right. But is this something that you do as part of the product, something that you care about? And if so, how do you do that, and what are the reactions of the customers to that?
Ivan Kanevski: [00:12:58] Cool. Yeah. So, generally speaking, what we do is, when we have an opportunity to have closer conversations with our early customers, we try to have effectively directed onboardings.
We call our onboardings ergonomic fittings, and so rather than try to push a certain style of working on someone, we try to discover how people work and match them up with a set of features in Slapdash that would best benefit them. So, we don’t try to effectively make mouse people into keyboard people.
I think that’s going to be an investment that we make a little bit down the line. We have some really, call it, interesting but unshipped ideas around this, but at the moment, we’re less interested in bridging the education gap and more in fitting people to the features that they would benefit from.
Kostas Pardalis: [00:13:44] What about you, Eric? Are you a keyboard or a mouse person?
Eric Dodds: [00:13:49] A keyboard all the way. I actually did an ergonomic fitting with Ivan and Lester, and I am, or actually was, an extremely heavy Alfred user. So, I don’t know if any of our listeners use Alfred for, you know, sort of their desktop keyboard workflow experience. But yeah, let me put it this way.
I use Emacs keyboard shortcuts and have remapped the caps lock key to be Control so that I can navigate around text documents without having to touch the mouse. It’s pretty bad, but it just gets the machine out of the way. I mean, actually, I hadn’t thought about it the way you put it so eloquently, Ivan. Someone showed me that and I started doing it when writing, and it was just such a better way to remove the barrier between my thoughts and, like, a text document, which is really the same sort of experience that you’re trying to create at Slapdash, which is interesting.
Getting into some more of the data components, one thing I’m really interested in, and I would love to know if this is even the way that you think about it, but as a user of the app, sort of looking at it from the outside, there seem to be two broad categories of app behavior related to users.
So, one would be the actual user behaviors, right, which would drive product analytics and, you know, other use cases like that, right.
So, invoking the commands, running the commands, like the things that represent user actions when using Slapdash. And I have another follow-up question there. But the other major category would be the data that’s produced by the commands themselves running, because I would think that there’s diagnostic information around how Slapdash interacts with other applications that is really critical, right? So, failures or errors or other things like that. Is that even the way that you think about data? And I would love to know, how do you approach collecting and using these different types of data?
Ivan Kanevski: [00:16:08] Totally. Yeah, so I think there’s probably another pillar that’s more, I guess, top of mind for us as well, which is how we store the actual data. Right. So, at its core, you know, Slapdash, what do we have? What do we do? We solve a graph replication problem. In other words, when we connect to an application like Asana or Monday, we’ll kind of take the structure of those applications and we’ll build one giant graph on Slapdash. And so a lot of the interesting things we do are within that layer. And of course, in terms of how we then focus on actually building a better product, that, of course, is where the kind of third-party data stack really fits in.
And I think your categories are pretty correct, in some sense, in terms of how we segment our data sinks, if you will. We certainly have the traditional kind of analytics. So, we lean into Google Analytics and Amplitude to understand user behavior. And as far as, call it, giving our infrastructure observability and understanding how things are working, that we actually delegate to a product called Honeycomb, which we’re quite fond of. But everything else we try to do as much in our own systems as possible. We try to have effectively a very portable infrastructure, to have some optionality for deployment strategies.
What can I dig into from there? I’m not sure if I answered your question itself.
Eric Dodds: [00:17:37] That’s super helpful. And because I work with RudderStack, I have to ask: are you using, like, native SDKs from Amplitude to just capture user behaviors, or how is that instrumented? And how are you getting that information to Google Analytics?
Ivan Kanevski: [00:17:52] Totally. I mean, you might not like to answer, but we do use Segment internally.
Eric Dodds: [00:17:56] Segment’s a great tool.
Ivan Kanevski: [00:17:57] It’s a great tool. So yeah, we use Segment for that actually. And we built our own sort of endpoints, just because we found a lot of issues in terms of certain analytics events being dropped. So, we had to build effectively a lightweight proxy for a lot of the events, but otherwise it’s been pretty sustainable.
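[Editor’s sketch: a minimal picture of what a "lightweight proxy" for analytics events could look like, buffering events and retrying delivery so transient failures don’t drop them. This is purely illustrative; the class name and retry policy are invented and are not Slapdash’s actual implementation.]

```python
from collections import deque

class EventProxy:
    """Buffers analytics events locally and retries delivery,
    so transient downstream failures don't drop events."""

    def __init__(self, send, max_retries=3):
        self.send = send          # callable that delivers one event downstream
        self.max_retries = max_retries
        self.queue = deque()      # pending (event, attempts) pairs

    def track(self, event):
        self.queue.append((event, 0))

    def flush(self):
        delivered, requeued = [], deque()
        while self.queue:
            event, attempts = self.queue.popleft()
            try:
                self.send(event)
                delivered.append(event)
            except Exception:
                # keep the event for a later flush instead of dropping it
                if attempts + 1 < self.max_retries:
                    requeued.append((event, attempts + 1))
        self.queue = requeued
        return delivered
```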
And once again, our focus has been more on the internal data stack rather than, call it piping it out, just based on sort of where we are as a company.
Eric Dodds: [00:18:23] Very cool. Got it. Yeah, that’s super interesting. Kostas, I know there are tons of questions brewing in your mind, and I’ve kind of been dominating the conversation.
Kostas Pardalis: [00:18:34] Ah, no, to be honest, the question that I’ve wanted to ask from the first time that I met with the team is about the experience of building an infrastructure that has to interact with so many different services out there. I mean, I have my own experience here at RudderStack and what that probably means, but also, before that, with Blendo, where we had to integrate with many different services on the cloud. But I’m really impressed with Slapdash, because Slapdash has to, let’s say, integrate with all these services in a much tighter way, right?
Like, you have to pull the data, you have to allow sharing this data, and then you have to also interact back and create actions and all that stuff. So, to me at least, knowing the complexity of interacting with something even when you’re just, like, presenting it, for example, I find it extremely challenging to build a service like this and ensure the quality of the service that you’re providing.
So, yeah. How’s the experience of doing that, Ivan, and what kind of abstractions help you do that at that scale on Slapdash?
Ivan Kanevski: [00:19:45] Totally. And this is a really deep topic that I can talk about forever. Most of the credit here belongs to our CTO Dimitri, who’s an amazing technologist. And really, the origin of us being able to manage the complexity of, call it, synchronizing billions of things comes from our experience. Dimitri and I both previously worked at Facebook, and we had to build kind of an analogous infrastructure there, where the goal of the infrastructure was effectively to have a copy of every product in the world. So, things that are bought and sold. So, if Amazon has a product, we want to know the price, all the photos, all the descriptions. And so, we built something similar there, where it was effectively an ingestion system. It had aspects of this graph replication problem, and it certainly had a scale problem in terms of: it has to be real time, it has to be high throughput, both on the read and write ends.
And so, we had effectively an opportunity to build a V1 of this architecture there. And so, when we started to build what we have today on Slapdash, one of the first things that we solved was the data store itself. And, you know, I think it’s always helpful when the abstractions that you’re working with really map to the domain nicely.
And for us to reason about these applications as being effectively just graphs that we’re replicating to our end was kind of a helpful abstraction. And so, to support that, we built this graph database on top of Postgres. What we had to solve there, of course, were issues around scale and data isolation.
And so of course, you know, right now we can scale into the billions, kind of no problem. And so that was the first problem that we had to solve to manage complexity: can we store it? And then there was the synchronization part. There, I think the real killer tool is infrastructure observability.
So, we have millions of events running through the system, but you really want to have really high signal feedback when things are going wrong or things are breaking. And for that, you know, internally at Facebook, there was a product called Scuba. And thankfully, when we left Facebook and we looked around, there was an analog in a product called Honeycomb.
So, that is actually one of the things that allows us to manage, call it, these billions of events on a daily basis, and understand when things are working and when they are not. And part of it, too, you know, one of our strategies was to launch early, launch early and have the product be open, so we have more people trying to connect applications and we can cover those edge cases. And as a result, we’re effectively on version three of our current ingestion architecture. So, I’ll pause there, because I can dig into any layer, you know, but, yeah, it’s been a process, let’s put it that way.
Kostas Pardalis: [00:22:31] Yeah. Yeah. So, you said that you have actually built a kind of graph database on top of Postgres. Is this correct?
Ivan Kanevski: [00:22:39] That’s right. Yeah.
Kostas Pardalis: [00:22:40] And why did you choose Postgres to do that, and not just use a graph database, like Dgraph, for example, out of the box?
I’m very interested because we did something similar, let’s say, at RudderStack, because we needed to build some kind of queuing system to stream the data, and we also used Postgres for that. And I find it very interesting that Postgres is becoming the kind of database system that many interesting projects are built on top of, using it as the end-of-the-line storage system, or using parts of it for something completely different than what Postgres was intended to be doing. So yeah, I would be really interested to hear the story behind this.
Ivan Kanevski: [00:23:20] Yeah. So, I think we have this philosophy that we like to keep things super vanilla. I’ve personally had experience in the past where I adopted a tool that was a little bit more nascent when trying to build something brand new.
And what I learned from that experience is that you end up fighting the tooling, or the lack of tooling, more than you make progress against the problem. And so, when possible, we try to use the most tried and tested pieces of software. And to be fair, we borrowed that approach from Facebook, which also has a graph database, which in practice is built on top of MySQL.
And we chose Postgres just because, at this point, it has better qualities and better features than MySQL when we evaluated it. But it was more so about the maturity and the tooling, and also seeing an architecture that was really successful that was built in a similar way.
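[Editor’s sketch: to make the "graph database on top of a relational database" idea concrete, here is the classic nodes-and-edges encoding. The table layout and helper names are hypothetical, and SQLite stands in for Postgres so the example is self-contained; Slapdash’s actual schema is certainly more involved.]

```python
import json
import sqlite3

# Nodes carry a type plus free-form properties; edges connect node ids.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE nodes (id INTEGER PRIMARY KEY, type TEXT, props TEXT);
CREATE TABLE edges (src INTEGER, dst INTEGER, label TEXT);
""")

def add_node(type_, props):
    cur = db.execute("INSERT INTO nodes (type, props) VALUES (?, ?)",
                     (type_, json.dumps(props)))
    return cur.lastrowid

def add_edge(src, dst, label):
    db.execute("INSERT INTO edges VALUES (?, ?, ?)", (src, dst, label))

def children(node_id, label):
    # One graph traversal step expressed as a relational join.
    rows = db.execute(
        "SELECT n.id, n.type, n.props FROM edges e "
        "JOIN nodes n ON n.id = e.dst "
        "WHERE e.src = ? AND e.label = ?", (node_id, label))
    return rows.fetchall()

folder = add_node("folder", {"name": "Designs"})
doc = add_node("document", {"name": "Spec"})
add_edge(folder, doc, "contains")
```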
Kostas Pardalis: [00:24:17] Yeah, that’s great. I mean, that’s pretty much aligned with the reason we decided at RudderStack to do something similar. Another question about the graph database: why did you choose a graph model to model your data, and what are its advantages for the problem domain that you are interacting with? Why is this, for you at least, the best way to represent data and relationships, and what kind of added value does it give from a product perspective and a technology perspective?
Ivan Kanevski: [00:24:50] Yeah. So, one thing that is distinct about Slapdash is that one of our big focuses is speed. And effectively, one of the ways that we get to, call it, the fast experience that we have today is that we have abstractions that fit each other from, call it, the storage layer all the way to the client side.
So, it turns out that there are certain optimization techniques available to us that are really natural with the GraphQL retrieval model. So, one of the things that we were able to build, and it’s very helpful with the graph database, is that we can do things like batching and coalescing in much more natural ways. But in practice, you know, the graph data structure is just really versatile. I mean, maybe the alternative is the one where we kind of started with something more tree-like. But the reality is that when you look at the world and you try to model the world, it turns out that the graph is almost kind of irreducible. A lot of the habits around looking at the world as a graph, once again, come from working at a company like Facebook, which literally looks at every problem as a graph problem. So, it was already quite natural.
And then when we tried to apply, call it, the data model of these applications, it just continued to work. And so a lot of it was kind of discovered in some sense, you know, in terms of what we can achieve. But at this point, after having built a lot of the system, it continues to offer us a pit of success in terms of our infrastructure and engineering problems.
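[Editor’s sketch: the batching and coalescing Ivan mentions is in the spirit of the DataLoader pattern from the GraphQL ecosystem: collect individual node lookups, merge duplicates, and resolve them all with one batched fetch. A toy synchronous version, with hypothetical names:]

```python
class BatchLoader:
    """Collects individual key lookups, coalesces duplicates, and
    resolves them all with a single batched fetch."""

    def __init__(self, batch_fetch):
        self.batch_fetch = batch_fetch  # takes a list of keys, returns {key: value}
        self.pending = []

    def load(self, key):
        # Coalescing: a key already queued is not queued twice.
        if key not in self.pending:
            self.pending.append(key)
        return key  # caller looks the key up in the dispatch result

    def dispatch(self):
        # Batching: one round trip serves every pending lookup.
        results = self.batch_fetch(self.pending)
        self.pending = []
        return results
```

Compared with issuing one query per node, this turns N round trips to the store into one, which is where much of the perceived speed of a graph-shaped retrieval layer comes from.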
Kostas Pardalis: [00:26:52] Right. So, this is the way that you structure and represent the data that you’re pulling from these services, right? Like, you create some graphs there. Are the interactions also part of this graph model, or is it a separate system that just interacts with it to figure out what to do? Like, how have you architected these two different facets of the product: one being the interactions, which is more, let’s say, the extroverted side of the product that has to interact with the outside world, and the other being the data model, which is the way that you pull the data and model it so that people are able to interact with the data from inside Slapdash? What is the relationship between these two?
Ivan Kanevski: [00:27:30] Yeah. So, when we actually started to build this, it was important for us to not just think of this as a read-only platform. So, when we think of, let’s say, nodes in the graph, what is a node in our graph? A project is a node in our graph, a task is a node in our graph, a document is a node in our graph. We think about not only, effectively, what the relationships are between this node and other things, like what’s the relationship between this folder and the items inside it?
We also think about the properties of the node. So, in other words, we understand, for example, that certain types of nodes have the ability to, for example, close a task. They’re effectively modeled as a task and you can close it, or they have this property where you can rename it. So, part of it is discovering the common denominator and really operating with nodes which have, in some sense, always a variable set of properties associated with them, you know?
The different capabilities. So that kind of fits into that worldview. I think when we were talking about commands, in terms of, is there a formal way to reason about those things? You know, at the moment they are living at a higher plane of abstraction, and maybe they’re less related to the graph itself. They’re mostly, call it, adding new objects to the graph. But I think the graph semantics are less important there right now. We’re always thinking about formalism and reduction and really reasoning about our system with the rigor of, let’s say, a file system API. But, again, I think a lot of what we have today is certainly emergent and based on having tried to build it.
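[Editor’s sketch: one way to picture nodes with a variable set of capabilities, as Ivan describes it (tasks can be closed, most things can be renamed), is a type-to-capabilities lookup. This is a guess at the shape of the idea, not Slapdash’s data model; the type names and actions are invented.]

```python
class Node:
    """A graph node with a type, free-form properties, and a set of
    capabilities that varies by type (e.g. only tasks can be closed)."""

    CAPABILITIES = {
        "task": {"rename", "close"},
        "document": {"rename"},
        "folder": {"rename"},
    }

    def __init__(self, type_, props):
        self.type = type_
        self.props = props

    def can(self, action):
        return action in self.CAPABILITIES.get(self.type, set())

    def perform(self, action, **kwargs):
        if not self.can(action):
            raise ValueError(f"{self.type} does not support {action}")
        if action == "rename":
            self.props["name"] = kwargs["name"]
        elif action == "close":
            self.props["status"] = "closed"
```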
Kostas Pardalis: [00:29:06] That’s really interesting. Very, very interesting. So, as a customer, as a user of Slapdash, what kind of consistency should I expect between interacting with the data from within Slapdash and interacting directly with a service? Like, what kind of latencies are there between syncing the services, and what other issues have you seen there? Anything interesting from this problem that you have encountered and found an interesting solution to?
Ivan Kanevski: [00:29:35] Yeah, that's actually, you know, we have a lot of problems that are evergreen, and I think that is one of them.
I think it varies in terms of fidelity depending on the integration. Certain integrations like GitHub and Google Drive are going to be very close to real time without much effort. But the way we approach the problem, we take different tacks at it. In other words, we try to be excellent from a strictly server-side integration.
However, if you have the browser extension installed, we use that to augment the graph as well. The idea being that if, let's say, you connect Asana, which has about a five-minute sync window, but in that timeframe you create a new task, it should still be possible for that task to appear immediately in Slapdash because you have our Chrome extension installed. In other words, we take a multipronged approach, but we always opt to lead with a server-side integration.
I don't think we're done on this front. We have a couple of creative augmentations that we're going to be releasing to keep it closer to real time, but that's just the nature of what we do: trying to build this real-time sync layer, which is not available by default, of course.
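The multipronged approach Ivan outlines, where a slow server-side sync is augmented by immediate events from the browser extension, can be sketched as a last-writer-wins merge. This is a guess at the general shape, not Slapdash's implementation; the function and tuple layout are invented for illustration.

```python
# Hypothetical sketch: combine a slow server-side sync (e.g. Asana's
# roughly five-minute polling window) with immediate events captured by
# a browser extension, keeping the freshest copy of each item.
def merge_sources(server_items, extension_items):
    """Each item is a (item_id, last_modified_ts, payload) tuple.
    Later timestamps win, so extension events surface new or updated
    items before the next server-side sync catches up."""
    merged = {}
    for item_id, ts, payload in server_items + extension_items:
        if item_id not in merged or ts > merged[item_id][0]:
            merged[item_id] = (ts, payload)
    return merged

server = [("task-1", 100, "stale title")]
extension = [("task-1", 160, "fresh title"), ("task-2", 155, "brand-new task")]

merged = merge_sources(server, extension)
print(merged["task-1"][1])   # fresh title
print("task-2" in merged)    # True
```

The server-side sync remains the source of truth; the extension just papers over the polling gap until the next sync confirms the change.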
Eric Dodds: [00:30:46] I'm interested to know, and it's interesting how this has changed with access to the internet, but do you face any issues around internet speed? Because internet speed affects your ability to use cloud services in general. I'm interested in how you think about that problem, because in some ways, especially if you download the desktop version of Slapdash, you expect an almost immediate response, even though if you step back, you realize that you're interfacing with cloud tools. So, we'd love to know about any challenges or interesting things on that front.
Ivan Kanevski: [00:31:35] Yeah. I mean, I think this is one of the things that is really important to us, this notion of delivering the experience with minimal latency. There are areas where we're excellent and areas where we're still improving.
But the idea for us is to always bring it down to as close to zero latency as possible. Right? I mean, when you're browsing the files on your computer, you're not waiting for the folders to load. So we want to be able to deliver that type of experience, that fidelity of experience. And what are we thinking in terms of how we can bridge the gap?
So, number one, of course, our infrastructure and architecture is a big part of it in terms of minimizing latency. One of my favorite tricks within the product is that any time you hover over, let's say, a link in the product, or even an option in the command bar, we always try to anticipate that you will be hitting that link.
So, we preload it. In practice, the neat part is that this cuts about 50 milliseconds of perceived latency. And if our server response is within that timeframe, we achieve, effectively, on balance, a zero-latency experience. So that's one of the things that we do.
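The arithmetic behind the preload trick is worth making concrete. The hover gives the client a head start, so perceived latency is server latency minus that head start, floored at zero. This is a simplified model of the effect Ivan describes, not Slapdash code; the function name and the 50 ms default are taken from his description.

```python
# Hypothetical model of hover preloading: the hover event starts the
# request early, so by the time the user clicks, some (or all) of the
# server round trip has already happened.
def perceived_latency_ms(server_latency_ms: float, head_start_ms: float = 50) -> float:
    """Latency the user actually perceives after a preload head start."""
    return max(0.0, server_latency_ms - head_start_ms)

print(perceived_latency_ms(45))   # 0.0 -> server answered within the head start: feels instant
print(perceived_latency_ms(120))  # 70  -> still 50 ms faster than without preloading
```

This is why Ivan frames it as "if our server response is within that timeframe": any response under ~50 ms becomes indistinguishable from zero latency.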
There are two other things we're exploring. Number one is different deployment models. Certain customers have stricter requirements around, call it, data isolation, and we want to be able to deploy Slapdash outside of our data center.
So, when you actually do outfit Slapdash for your company, we would be offering effectively a geographic selection of where you want Slapdash to run, once again to cut down on that latency. And the other thing we'll be reaching for, which is meaningful, is some degree of offline architecture as well.
So, you shouldn't have to be connected to the internet to start getting immediate responses; there might be an intermediate, call it, offline cache that will replicate over at some point. It's deeply important to us. I still think we're, you know, getting good at it.
And I think that's really what we want to cut out of the experience of working with cloud applications: literally waiting for things to load. I'll mention one last favorite thing that we do as well, which is actually going to be featured more prominently in the next release: any time we detect that you have something open in a tab already, we'll try to recycle that tab. In the case of Google Docs, that's five to ten seconds of, call it, loading cut out. Yeah, that's a broad spread. It's a problem that we always return to.
Eric Dodds: [00:34:09] Sure, sure. And I know we're getting close to time here. A couple more questions. In terms of search, I would love to know how you've approached the search problem. I'm coming from the perspective of using Slapdash mostly for search across cloud tools in general, but one thing in particular is searching Google Drive, and it became immediately noticeable to me when I started using Slapdash.
I'm not trying to make this a commercial, but from an engineering perspective I'm very interested in this: searching Google Drive files via Slapdash is a better experience. I'm not going to say the search itself is better, just because I don't have enough technical knowledge to understand the mechanics of that, and Google knows a thing or two about search. But the experience is amazing. And you create that experience both individually, I noticed it very acutely with Drive, and across cloud tools. So how did you go about building that, and from a technical perspective, is there tooling or other stuff that goes into it?
Ivan Kanevski: [00:35:32] Yeah. In general, when we think about search, we actually don't think about search in particular. We think about information retrieval, and about how people retrieve information in a work setting. In other words, as with any sort of problem solving, you have to discover the constraints around it.
So for us, number one, search is just one part of the toolkit. And I should also mention that what Google is good at is different from what, let's say, search is in Google Drive. What Google is good at is needle-in-a-haystack search across a vast, unknown information space.
Whereas what Slapdash is focusing on is actually finding information in a roughly known information space, which is your work environment. Right? So you already have strategies there. How do people actually find information? There are some hot paths, like a keyword in the file name, and if you get into that habit, it's a really fast way to do it. But the other thing we try to unlock is effectively mapping to people's natural information retrieval strategies, which are based on things like landmarks. For example: I know that this lives in this folder in Google Drive, or I know that Lester was working on something.
And so being able to express these very quickly across cloud applications is really where we, call it, get the delta, and that's the root problem of information retrieval. As far as why we're faster on search, once again, it comes back to the architecture.
When you start with your main goal being, hey, let's make this thing super fast, then, you know, it was a revelation for us too, frankly, that we were able to be something like ten times faster than Google. And maybe there's a little bit of emotional resonance there too.
Right? When you experience a fast piece of software, it generally feels better than when you're waiting.
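The landmark-based retrieval Ivan describes, narrowing a known information space by filters like "lives in this folder" or "Lester was working on it" before matching a keyword, could be sketched like this. The function, field names, and sample data are all hypothetical; this illustrates the strategy, not Slapdash's search engine.

```python
# Hypothetical sketch: in a roughly known information space, landmarks
# (folder, collaborator) narrow the candidates before keyword matching,
# mirroring how people naturally remember where work lives.
def search(items, keyword="", folder=None, person=None):
    """Return names of items matching every supplied landmark and keyword."""
    results = []
    for item in items:
        if folder and item["folder"] != folder:
            continue  # landmark: "I know it lives in this folder"
        if person and person not in item["collaborators"]:
            continue  # landmark: "I know this person was working on it"
        if keyword and keyword.lower() not in item["name"].lower():
            continue  # hot path: keyword within the file name
        results.append(item["name"])
    return results

items = [
    {"name": "Q3 Roadmap", "folder": "Planning", "collaborators": ["Lester"]},
    {"name": "Q3 Budget", "folder": "Finance", "collaborators": ["Ivan"]},
]

print(search(items, keyword="q3", folder="Planning"))  # ['Q3 Roadmap']
print(search(items, person="Ivan"))                    # ['Q3 Budget']
```

Because the space is small and known, cheap exact filters like these can beat a general-purpose relevance ranker on both speed and precision, which is the "delta" Ivan refers to.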
Eric Dodds: [00:37:24] Sure. Yeah. That's fascinating. Well, we're close on time here, so last question, unless Kostas has anything else. It's amazing to hear how visionary and thoughtful you are about what you're trying to build, and the operating system for work is a very aspirational tagline. I would love to know what kind of features you see far down the roadmap, or maybe not even features, but what sorts of functionality you envision people and teams being able to use Slapdash for, things you've thought about or are conceiving of that don't necessarily register with the average user, who's still just accessing everything through the browser.
Ivan Kanevski: [00:38:17] Yeah, that's a good question. We usually keep things pretty close to heart in terms of our broader plans, but we do have some things on the immediate roadmap. I think there's one thing that you'll find, and that we'll be focusing on, that's quite important.
We recently launched the product on Hacker News, and we mentioned it there: we're interested in people extending Slapdash. We're interested in Slapdash being a programmable surface. We'll do our best job of augmenting your existing workflows, building the integrations, building the commands, but we're also really excited to open up some of the same tools we have for people to express new things on top of that.
So, I think that's the thing I'm most excited about that's coming very soon.
Eric Dodds: [00:39:08] Very cool. I’m personally very excited about that. That’s awesome.
Kostas, any questions before we close out the conversation?
Kostas Pardalis: [00:39:19] There is one question. You've already talked about opening up the platform and actually turning the product into a platform.
Is this something you're going to do by exposing an API, or through some kind of open source?
Ivan Kanevski: [00:39:33] I think both. Look, we're a team of engineers. We rely on open source software to build Slapdash, so we want to be able to give back. We're trying to figure out the right parts of our stack to open source; I think it's just more fun to hack on open source things, frankly. And of course, there will be an API and a platform, the more traditional API that you would expect.
Kostas Pardalis: [00:39:58] That's interesting. I'm really looking forward to playing with it. So yeah, that's all from me, Eric.
Eric Dodds: [00:40:05] Great. Well, thanks so much for joining us. And again, I can't tell you how fun it's been to hear how much thought you've put into things that show up as simple when you're a user of the product.
But, as we see over and over on the show, they tend to be pretty complicated under the hood. So, thanks for sharing all the inside information with us, and best of luck as you continue to build the company.
Ivan Kanevski: [00:40:31] Thank you so much. Thanks for being an early customer.
Kostas Pardalis: [00:40:34] Wow. That was quite interesting.
What do you think, Eric? I think this conversation with Ivan was extremely detailed, and it's fascinating to hear what these guys have managed to build, how they interact with all this different data, and how obsessed they are with anything that has to do with performance and latency in order to deliver the best possible experience.
What were the most interesting parts of the conversation for you?
Eric Dodds: [00:41:06] Yeah, the thing that really stuck out to me, and this is both specific to Slapdash and something you see as somewhat of a pattern: I did get to ask my question about Google Drive, which I was very excited about, and it really struck me after listening to the answer that if you can step back, evaluate a problem, and build a solution from the ground up, as opposed to taking an existing solution and trying to retrofit it, you can do some pretty powerful things. And I think that's the way they seem to have approached most of what they've done in the product, which is just fascinating. You can see how much thought they put into the things they build just from the way they talk about it.
Kostas Pardalis: [00:42:06] Yeah. And something else I found very interesting, and it's the second time on this show that we hear about it, is the importance of going vanilla when it comes to technology.
If you listen to Ivan, I think this is a sign of the engineering maturity people have when they have to build very complex and important systems.
So yeah, that's another thing I found very fascinating: even with something as state of the art as what these guys at Slapdash are building right now, with all the sophistication of the product itself, it is built on some very fundamental technologies like Postgres, which has existed for more than 20 years now. So that's another very interesting point of this conversation.
Eric Dodds: [00:42:56] Yeah. I'll be interested to see if we hear about more companies that take the vanilla or the boring approach to certain parts of their stack, you know, to manage complexity, which seems to be a pattern that's emerging on the show.
Well, that was a great conversation. We'll touch base with them maybe later in the year and see where they're at. And join us next time for another episode of The Data Stack Show.
Kostas Pardalis: [00:43:21] I'm really looking forward to it.