
VENN 3.64 - Galactic Fog

1x1 interview with a technology thought leader

Daniel Lizio-Katzen, CEO

When?

6 Nov 2018

2:00 PM (EST)

Where?

Webinar

129 W 29th St, New York, NY 10001

OVERVIEW

VENN 3.64 featured the CEO of Galactic Fog to analyze the expanding Container as a Service and Functions as a Service landscape emerging alongside increased container adoption. Galactic Fog originated from a need to reduce complexity and build applications faster and more efficiently. The company’s core product, Gestalt, provides a platform that massively simplifies and reduces the amount of time it takes to get code to market in a scalable and secure manner.


Peter Steube

Hello, everybody. My name is Peter Steube, Managing Director of the ETR VENN platform. I'm joined by Erik Bradley, Chief Engagement Strategist for aptiviti and also Managing Director of VENN. Thank you for joining us for VENN 3.64. Today's webinar interview features Galactic Fog CEO Daniel Lizio-Katzen. Daniel, we'll allow you to give a more detailed introduction shortly. Thank you for joining us, first and foremost today. As a quick introduction here, Galactic Fog is a leading Functions as a Service vendor, effectively enabling enterprises to both accelerate and optimize their serverless, cloud-native and containerized environments.

Galactic Fog is also one of the first vendors that VENN is interviewing in conjunction with ETR's expanded universe of CIO spending, and that comes through our study, which we'll actually be launching tomorrow. The new survey is called the Emerging Technology Study (ETS), and it is focused on earlier-stage enterprise companies like Galactic Fog. This survey is not measuring actual spending commitments, like our traditional survey that we've done over the course of the past ten years; it is focused more intently on brand awareness leading up to plans to evaluate, adopt, and ultimately deploy. For our CIO members that are listening in, please keep an eye out for that invitation to participate, as well as the results in the coming weeks here. The findings will be key in pinpointing the enterprise traction and landscape disruption by privately held vendors like Galactic Fog.

Also, please keep in mind that, if you are listening in live, you are on mute. We have received and will field any of your pre-submitted questions. If you'd like to submit a live question to us, you can use our e-mail address, which is venn@etr.ai. And as a reminder for all of our vendors, as with all of our VENN interviews, we will be recording today's discussion and publishing an executive summary, full transcript and replay, all of which are available to you via your ETR+ login. Of course, you can reach out to us and let us know if you're having difficulty with your access.

Before I lose my breath, Daniel, thank you very much for joining us today. I would appreciate you giving us an overview of who Galactic Fog is and what you guys do. I think that that might be the best way for us to start. And then, also, if you want to layer in some of your personal experience, Daniel, I know that that would be extremely appreciated by the audience that will be tuning in, whether live or afterwards.

Daniel Lizio-Katzen

Absolutely, and thank you very much for the overview. And yes, that was a lot in one breath; impressive lungs there. Galactic Fog, as a business, is about four years old. We were actually started by a team that had built cloud abstraction layers previously - the team behind something called Service Mesh, which was bought by CSC back in 2013. All of us have backgrounds that are steeped in both enterprise, as well as in startups. And we tend to go back and forth, a little while in the large enterprise, and then we get the startup bug and are right back at it.

Really, where Galactic Fog and our principal product, which is called Gestalt, came from is a need that we saw coming from the enterprise to be able to build software faster. You have many, many large, regulated enterprises that are looking at the speed of development that companies like Google and Facebook and other leading technology shops are achieving, and really wondering why they can't do the same thing internally.

Especially when you focus on finance or insurance or health care, there are a lot of additional regulatory burdens that they have to go through and red tape that they have to tackle, which can slow things down. On the flip side, the benefits that new technologies like containers and Function as a Service (FaaS) - sometimes called serverless - bring to those businesses can't really be ignored: specifically, reducing the amount of resistance in the software development life cycle, while at the same time improving security and reducing the surface area that can be attacked by outside, or even internal, malicious parties.

We really set ourselves on a path to build out the Gestalt platform to enable large, regulated enterprises to build software faster - really, to have their software development teams focus on actually building software that improves the key differentiators for their businesses, as opposed to focusing on configuring very temperamental open-source software like Kubernetes, things like Docker, or more and more complicated public clouds.

We actually sit on top of those technologies. We provide all the benefits of those technologies, but we do it in a way so that development teams don't necessarily need to learn every detail about configuring Docker, Kubernetes or serverless. They can just use the tools that are provided, and then take advantage of those benefits. And that's our product at a very, very high level.

Peter Steube

Thanks a lot, Daniel. And maybe if you could give the folks on the line who may not have heard of you just kind of an overview of where you stand as a business today. In terms of the engagements that you do have with folks in the enterprise landscape, what does your typical customer look like? And maybe if you could home in a little bit more specifically: as a customer approaches you, or you link up with a customer who seems well-aligned, what are the key problems that you're solving for them? Or what are some specific use cases that you could point to that you think are maybe overarching, in terms of riding the wave of serverless, containerized environments, which a lot of our community is moving towards? But maybe you could just kind of go into some more specific detail.

Daniel Lizio-Katzen

Sure. Well, as I mentioned, we primarily deal with large, regulated enterprises. Those are typically in three or four industries: finance, insurance, health care and pharmaceuticals. We are pleased to have companies representing a few of the top five in each of those verticals already as our customers, which we think is very cool but, really, something we can't take full advantage of.

As I mentioned previously, the market is very quickly adopting containers and Container as a Service. And the reason for that is, there's been a massive expansion of both internal infrastructures, as well as infrastructure as a service through the public clouds, over the past decade to decade and a half. What that's meant is that the internal teams at these large enterprises, whether it's security or operations, or even the newer teams in DevOps, just have more and more workload that they need to manage.

In addition to that, there are more and more intermediaries, in terms of software vendors, that are asking for monthly subscriptions. Whether those are the more traditional ones in a cloud sense, like VMware and Red Hat, or some of the newer ones, like DataDog or some of the other logging and auditing pieces of software out there, there's just a lot of complexity in even getting what's sometimes referred to as a continuous integration (CI) / continuous deployment (CD) pipeline working in a common fashion, regardless of the deployment target.

And because of that complexity, one of the first places we go in - and this is really the common use case in each of these industries - is we drop our Gestalt platform in on top of, typically, Kubernetes, but it could be something like Amazon ECS, which is Amazon's proprietary container service. It can be an older container orchestrator, like Mesosphere's DC/OS or Docker Swarm. But more and more, we see Kubernetes as kind of the de facto standard out there.

That said, all versions of Kubernetes are not the same. We see things like GKE, which is Google Kubernetes Engine. We see things like Amazon EKS, which is their new Kubernetes service. Or even offerings like Red Hat OpenShift or Pivotal Cloud Foundry, which are container management implementations, but not necessarily the tools that are used to plug into that software development pipeline.

The first use case that we're used for is that we plug into the source code repository, whether that's something like Bitbucket, GitHub or GitLab. Then, upon check-in, we enable a container - a container being an immutable environment that's been defined by the operations team. Now, instead of the developer having to go and create a helpdesk ticket to get a new environment provisioned, what they're able to do using Gestalt is simply check in their code, and out pops that configured container in seconds. That container is linked back into the source code repository, and that developer is now able to test their code in the exact same environment that it will be running in in production.
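
As a concrete illustration of that check-in-to-container flow, here is a minimal Python sketch. The webhook payload fields, registry name, and platform endpoint below are hypothetical stand-ins; Gestalt's actual API is not shown in this interview.

```python
# Minimal sketch: turn a source-control "push" event into a container
# request. The payload fields, image registry, and submit endpoint are
# all hypothetical -- Gestalt's real API is not described in this call.
import json
import urllib.request


def build_container_spec(push_event: dict) -> dict:
    """Compose an ops-blessed, immutable container definition for a commit."""
    commit = push_event["after"]              # commit SHA from the webhook payload
    repo = push_event["repository"]["name"]
    return {
        "name": f"{repo}-{commit[:8]}",
        # The base image is the environment the operations team defined;
        # the developer never configures it by hand.
        "image": f"registry.example.com/{repo}:{commit[:8]}",
        "labels": {"commit": commit, "source": "checkin-trigger"},
    }


def on_push(raw_body: bytes) -> dict:
    """Webhook entry point: check-in comes in, configured container pops out."""
    spec = build_container_spec(json.loads(raw_body))
    # Hypothetical platform endpoint that schedules the container and
    # links it back to the source repository.
    req = urllib.request.Request(
        "https://gestalt.example.com/api/containers",   # placeholder URL
        data=json.dumps(spec).encode(),
        headers={"Content-Type": "application/json"},
    )
    # urllib.request.urlopen(req)  # would submit in a real deployment
    return spec


if __name__ == "__main__":
    event = {"after": "a1b2c3d4e5f6", "repository": {"name": "billing-service"}}
    print(json.dumps(on_push(json.dumps(event).encode()), indent=2))
```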

What that does is massively simplify and reduce the amount of time it takes to get code to market, and it really speeds up that whole source-code-to-deployment paradigm. One of the ways we do that is with our serverless engine, which is called LASER. That's the common use case across each of those industry verticals. The second use case is where they start to diverge pretty rapidly, depending on the industry.

Peter Steube

Got it. Understood. And you had walked through a number of the different technologies that you support or integrate with. Can you maybe give some commentary in terms of who your best and brightest partners are? Are you completely agnostic? You had mentioned Kubernetes; I think that our data supports their strength over platforms like Mesosphere, especially as of late. Are there any that are top-of-mind, in terms of specific integrations that you work with currently, that people gravitate to you because of? Or is it completely agnostic?

And then, a follow-up to this would also be, are there technologies that you don't support, but maybe will be in the future?

Daniel Lizio-Katzen

Great question. In terms of container platforms, we are agnostic. One use case that we see today - that second use case I mentioned - is migrations, either between older container formats, so that can be something like ECS or Docker Swarm or DC/OS, and a Kubernetes orchestration platform - with Kubernetes as the container orchestration and Docker as the container format, typically. Or we see our platform used to migrate between Kubernetes implementations.

Another common use case is if I have Red Hat OpenShift running in my own datacenter, and I'm using Amazon EKS or Google GKE, or even Azure Kubernetes Service as a hosted Kubernetes service, and I want to be able to do development in one of those hosted services and deployment in production in my own private cloud. There are quite a few differences between your on-prem, hosted OpenShift, versus what's going on in GKE or EKS, for instance. We facilitate that migration so that, depending on the deployment target, which is really dependent on the user's permissions and their kind of own internal security model, we're deploying the code to the right cloud, the right kind of target, at the right time.

What that means is, if you're looking at a traditional software development life cycle - where you've got a local environment, then a dev integration environment, then a QA environment, and finally production - where the code is checked in and who it's approved by is going to dictate where it's actually deployed to.

And because that's done in a fully automated fashion, as opposed to waiting for Joe over in operations to press a button and check in the code, you can massively shorten the amount of time it takes for code to proceed all the way from that local dev environment to production. That's really the driving force behind CI and CD: wanting to automate as much as possible and wanting something like infrastructure as code. That's what Kubernetes, under the hood, really does.
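
A minimal sketch of that routing rule - the branch the code lands on plus the role of the approver deciding the deployment target. The mapping below is an invented example for illustration, not Gestalt's actual policy schema.

```python
# Sketch of the promotion rule described above: (branch, approver role)
# decides the deployment target. All names here are invented examples.
DEPLOY_RULES = {
    ("feature", "developer"): "local-dev",
    ("develop", "developer"): "dev-integration",
    ("release", "qa-lead"):   "qa",
    ("master",  "ops"):       "production",
}


def deployment_target(branch: str, approver_role: str) -> str:
    """Fully automated promotion: no waiting for Joe in operations."""
    prefix = branch.split("/")[0]           # e.g. "feature/login" -> "feature"
    try:
        return DEPLOY_RULES[(prefix, approver_role)]
    except KeyError:
        raise PermissionError(
            f"{approver_role!r} cannot promote {branch!r} to any environment"
        )


if __name__ == "__main__":
    print(deployment_target("feature/login", "developer"))  # local-dev
    print(deployment_target("master", "ops"))               # production
```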

Now, back to your original question: we really do support all of those different container platforms, and we do a similar thing on the serverless side. We have our own serverless engine, which we call LASER, for Lambda Application Server, that runs in a consistent fashion across clouds. Whereas Amazon Lambda works one way, and only in AWS, and Google Cloud Functions works another way, and only in Google's cloud, LASER works in a consistent fashion, regardless of where it's deployed.

On top of that, just like in the container world, we do support other serverless implementations - things like OpenFaaS or Amazon Lambda or Google Cloud Functions or Azure Functions. Our goal is to be that piece of software - that platform, if you will - that's going to allow the developers at a large enterprise to pick the right tool for the job every time, as opposed to being a slave to the tools that are in use at a certain time.

Erik Bradley

Peter, can I interject for a moment? First of all Daniel, I want to say, I just love the name of the product that you picked in Gestalt, with the definition of it being an organized whole that's greater than the sum of its parts.

Daniel Lizio-Katzen

That's exactly right.

Erik Bradley

As we're listening to you describe this product, it almost sounds too good to be true, in that you're interoperable with any kind of on-prem, any kind of container, any kind of orchestration. I guess my question would be, it does seem that you're truly agnostic to the environment, whether it's public, private, hybrid, or even a bare-metal datacenter. How did you guys get together and architect the product to enable that interoperability? From day one, how did you make that agnosticism possible?

Daniel Lizio-Katzen

Sure, but just to be clear, we do need a container platform to run on top of, so we can't run just anywhere. We could probably hack ourselves to run straight on VMware and RHEL, but what would end up happening is you're going to lose a lot of the functionality that we absorb from Kubernetes. Some of that is the container orchestration. That's the auto-scaling. It's a lot of what comes with building cloud-native applications. We take advantage of that capability, instead of actually bringing that capability.

Now, once there is an underlying container platform, then we do offer that federation of containers between clouds, and the ability to migrate your applications and your data between those clouds, as well. To answer your question more directly, it's not our first rodeo. We created something very similar at Service Mesh with the Agility platform, which was built as an early version of a cloud abstraction. It really only wrapped the very early infrastructure-as-a-service APIs, like storage and compute - so Amazon EC2 and Amazon S3 - as well as Google and then OpenStack.

What ended up happening, though, is because of how that platform was built, there was always the lowest common denominator problem, which was, when Amazon or when Google or when Microsoft introduced a new API that extended the capability of each of their clouds, Service Mesh's customers were dependent on the update of the Agility platform to be able to take advantage of that capability. That's brittle, and especially if you're not an 800-pound gorilla with thousands of engineers, there's really no way you're going to be able to stay on top of those advancing capabilities.

When we started over from the ground up and reconceptualized the platform, one of the first things we wanted to do was make it so that there wasn't lock-in for our customers. This way, once the Gestalt platform was in place, if a new capability was delivered to the market that a business using our software wanted to take advantage of, they would not need to wait for us to deliver a new version, or hack the actual software in a way that made it non-upgradeable when a new version came out.

The way we do that is, we built the entire platform on top of Function as a Service. When we perform the migration between container specifications, for instance, we're doing that in a way that there's a function being called. That function is visible. It's sitting right in Git and is pulled right out of GitLab or GitHub, whatever the repository is. And it's performing the mapping between the two container specifications. What happens is, if one of those specifications gets upgraded - say, Kubernetes just went from 1.11 to 1.12 - and there's a new schema that's extended or added, we can very quickly update that function. Or, if one of our customers has beat us to the punch, they can update that function. When the Gestalt platform is upgraded, those functions don't change. They're just snippets of code written in one of six or seven different languages.
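
The migration functions Daniel describes are plain code living in Git. As a hedged illustration of the pattern - the field names below are simplified, not the real ECS or Kubernetes schemas - such a mapping function might look like this:

```python
# Sketch of a visible, Git-resident mapping function between two container
# specifications. Field names are simplified for illustration only.
import json


def ecs_to_kubernetes(ecs_task: dict) -> dict:
    """Map a (simplified) ECS task definition onto a Kubernetes-style pod spec."""
    containers = [
        {
            "name": c["name"],
            "image": c["image"],
            "resources": {
                "limits": {
                    "cpu": str(c.get("cpu", 256) / 1024),   # ECS CPU units -> cores
                    "memory": f"{c.get('memory', 512)}Mi",
                }
            },
        }
        for c in ecs_task["containerDefinitions"]
    ]
    return {"apiVersion": "v1", "kind": "Pod",
            "metadata": {"name": ecs_task["family"]},
            "spec": {"containers": containers}}


# Because this lives in Git as a plain function, a schema change (say,
# Kubernetes 1.11 -> 1.12) means updating the function, not the platform.
if __name__ == "__main__":
    task = {"family": "billing", "containerDefinitions": [
        {"name": "web", "image": "billing:1.0", "cpu": 512, "memory": 1024}]}
    print(json.dumps(ecs_to_kubernetes(task), indent=2))
```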

Hopefully that provides a little bit more insight into how we got the idea to make the platform a little more flexible than what has been in the market in the past.

Erik Bradley

Yes, it does. Thank you, Daniel.

Peter Steube

Daniel, pardon me if this is a little bit redundant - can you differentiate yourselves from the cloud providers themselves and their functions offerings? Can you go into a little bit more detail on that for our network?

Daniel Lizio-Katzen

Absolutely. Today, one of the challenges with the public cloud providers' serverless implementations is that they are typically a slave to the specific cloud they were built for. Amazon Lambda runs in AWS. Google Cloud Functions runs in Google Cloud Platform. And Azure Functions runs in Azure. That means that the code or function that's written in one of those platforms can't be copied and pasted and run in the same fashion in the other platform.

There is a framework called the Serverless Framework from a company called Serverless that attempts to make those functions a little bit more interoperable, but again, those will work across the public cloud providers, but not necessarily on-prem. As a team that has spent a lot of time in the enterprise, we also recognize that data has gravity, and there are a lot of regulated workloads out there. The scenario I mentioned a little earlier where a developer may be building their prototypes in the cloud, but the production is actually in the datacenter, is pretty common.

When we look at those public cloud providers, what we do is we provide the interoperability so that a function written for Gestalt LASER will run in the same fashion, regardless of where it's deployed. We also extend the languages that are available. If we look at Amazon Lambda, they primarily support Python and Node.js. If you look at something like Google Cloud Platform, they're supporting Node.js and also Go and a little bit of Python. Then if you flip over to Microsoft, they have .NET and then your Node.js. We actually support eight different languages today. Which means that if you are a large financial company and have a lot of code and a lot of developers that know Java very well, then guess what? You can write a serverless function in Java on top of Gestalt LASER. That's one of the ways that we differentiate.
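
The shape of a serverless function is broadly similar regardless of engine: an entry point that takes an event and returns a result. Here is a toy example in Python, one of the languages both Lambda and, per Daniel, LASER support; the handler signature and event shape are generic assumptions, not LASER's documented API.

```python
# Toy serverless function: an entry point that receives an event and
# returns a result. The (event, context) signature is a common convention
# across engines, assumed here rather than taken from LASER's docs.
import json


def handler(event: dict, context: dict) -> dict:
    """Normalize an inbound payment record."""
    record = event["record"]
    return {
        "account": record["account"].strip().upper(),
        "amount_cents": round(float(record["amount"]) * 100),
        "currency": record.get("currency", "USD"),
    }


if __name__ == "__main__":
    evt = {"record": {"account": " acct-42 ", "amount": "19.99"}}
    print(json.dumps(handler(evt, context={}), indent=2))
```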

The other way is the types of use cases we're deployed against. Because we are a framework that is tunable, as opposed to somewhat rigid - the way the public cloud providers need to be to provide the service out to millions of end users - we have the luxury of being able to configure things like how long a function can persist for. If you want a long-running function on Amazon Lambda, you're limited to a maximum of 300 seconds, which is a lot of time in terms of compute; but if you're processing something like a multi-gigabyte or even terabyte image file - let's say an MRI that needs to have an ML model run against it using PySpark or something like that - then you're going to need longer than five minutes to run it.

Or if you're doing batch-process replacement, and you want to be able to have your serverless workers fan out over, really, hundreds of millions of different messages that need to be transformed, that's another scenario where the public cloud isn't really going to be the best fit. The diversity of use cases and languages are key differentiators. We don't expect that to be a moat that lasts forever, but as of today, those are a couple of the key things that allow us to differentiate.

Erik Bradley

Daniel, it seems more and more clear to me as you speak that this is a product that was designed specifically for DevOps and the continuous integration cycle, which is wonderful for them, but it also brings up the matter of security. It reminds me of a relevant comment that we heard from a chief information security officer of a large financial enterprise, which would be your target audience. And basically, what he said was this:

While the concept of serverless computing is quite exciting, and I'm beginning to use Lambda for security functions in AWS, the truth is, as an information security officer, I get the heebie jeebies when developers use Lambda. Because it's a serverless function, my ability to monitor what they're doing, or at least put some structure around it, is limited. If something goes wrong, will I even know that it went wrong? And because I can't pin it back to a server that has a monitoring layer or a monitoring tool, I don't even know how to fix it.

What would you say to a CISO like this, who is right in your target wheelhouse for a customer, that says, you know what? I see the purpose of it but I'm not going to let my DevOps team have it, because I can't control what they're doing.

Daniel Lizio-Katzen

Absolutely. There are two things there that I'll speak to, but he or she is absolutely right with regards to Amazon Lambda. One of the big challenges today with public cloud serverless implementations is that most of the use cases they're being put in production for are for gluing applications together - what we used to use webhooks for, for instance. The logging and the monitoring are new. Because the challenge, when you're in the public cloud, is you're talking about a server infrastructure where your Lambda - if you're in Amazon East, for instance - can be spun up and actually deployed across hundreds of thousands, if not millions, of different nodes of compute.

Unless you think about that monitoring and that security model from the very beginning, you have some security through obscurity, but that's not defensible in a modern enterprise, right? The nice thing about how LASER is implemented is, because we know exactly what container cluster it's deployed across, we are able to stitch together the monitoring and the logging for each Lambda that's invoked.

The interesting thing that allows us to do - and when I say Lambda, I'm not talking about Amazon Lambda, just a serverless function that's being invoked - is that when that function is invoked using LASER, we're actually logging not just the invocation and all of the data that was fed into it, but the output, where it ran, when it ran, and all the metrics around how it ran. And we're feeding those into two places: one is a default ELK stack - Elasticsearch, Logstash, and Kibana - that we ship with the product. The second is wherever the enterprise wants us to feed it; that could be their own ELK stack, something like Splunk or DataDog, or wherever they want it to go.
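
A sketch of that invocation-logging idea in Python: wrap every function call so it emits a structured record of input, output, and runtime metrics to a log sink (the ELK stack in the default case). The record shape and sink here are assumptions for illustration, not LASER's actual format.

```python
# Sketch: every invocation logs its input, output, and runtime metrics.
# The record fields and sink are illustrative assumptions.
import functools
import json
import time


def logged(sink):
    """Wrap a function so each invocation emits a structured log record."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(event):
            start = time.time()
            result = fn(event)
            sink({
                "function": fn.__name__,
                "input": event,                 # everything fed into the call
                "output": result,               # and everything it produced
                "started_at": start,
                "duration_ms": round((time.time() - start) * 1000, 3),
            })
            return result
        return wrapper
    return decorator


def elk_sink(record: dict) -> None:
    # Stand-in for shipping to Elasticsearch/Logstash; print keeps it runnable.
    print(json.dumps(record))


@logged(elk_sink)
def resize_image(event):
    return {"status": "ok", "pixels": event["w"] * event["h"]}


if __name__ == "__main__":
    resize_image({"w": 640, "h": 480})
```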

Because of how our meta model - which is kind of the brain of the platform - works, every developer only has access to the environment or the container that the operations team has specifically dictated. And that's the same for an application that they're building or for serverless functions. What that means in real life is that a developer can write that Lambda, invoke it, and test it, and they're only doing it in a place that may only be visible inside that enterprise.

We were actually on-site this morning and yesterday with a large company in the health care industry, where we set up and exposed the Gestalt platform and allowed their developers - and this is not just DevOps; these are also their actual application developers - to play around with Gestalt. They're doing it all on internal DNS, so everything they're doing is open to their dev team because of the permissions that are managed through Gestalt, but it's not available even to another dev team sitting right next door. And that's how firmly we can lock down access.

You can also configure the platform in such a way that, if logging is not available, serverless functions simply cannot be invoked, but that's kind of an extreme security model. However, if you're really, really serious about security, then you may want to do that.

Erik Bradley

It sounds to me like you have an identity access and rights management back end already built in. Is that something that you developed yourself? Are you partnering with other technologies for that? Or is this something where you leverage the existing enterprise's identity access management, like a SailPoint or an Okta, and tie into it? How does it work? Is it all native in your Gestalt product? Or is this something you have to work with partners for?

Daniel Lizio-Katzen

Yes, we tie into their existing security engine. Whether that's something that produces an LDAP endpoint or if it's Active Directory or PingFederate or OIDC, we tie into wherever their user and rights management is. We're not trying to reinvent that wheel. What we do is we allow the creation through an RBAC model of the policies and the entitlement management that goes around each of those user groups. Or it can even be specific users. We integrate; we don't try and rebuild it in that case.
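
In other words, authentication and group membership stay in the enterprise's existing directory, and Gestalt layers policy on top. A minimal sketch of that pattern, with invented group and permission names; the directory lookup itself (LDAP, AD, OIDC claims) is assumed to happen elsewhere.

```python
# Sketch of the integration pattern: identity and group membership come
# from the enterprise directory; the platform only layers an RBAC policy
# on top. Group and permission names below are invented for illustration.
POLICIES = {
    "dev-team-payments": {"deploy:dev-integration", "invoke:laser"},
    "qa-leads":          {"deploy:qa"},
    "ops":               {"deploy:production", "deploy:qa"},
}


def allowed(directory_groups: list[str], action: str) -> bool:
    """directory_groups would come from an LDAP/AD/OIDC lookup, not from us."""
    return any(action in POLICIES.get(g, set()) for g in directory_groups)


if __name__ == "__main__":
    print(allowed(["dev-team-payments"], "deploy:production"))  # False
    print(allowed(["ops"], "deploy:production"))                # True
```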

Erik Bradley

Really appreciate that. Thank you, Daniel.

Peter Steube

Daniel, another thing that we've started to see come through in our surveys, and also in the additional conversations that we have with Fortune 500 and Global 2000 CIOs, pertains specifically to digital transformation or microservices-oriented vendors. And I think a lot of the trepidation, especially as you get into the extra-large Fortune 100 enterprises, concerns the service that's provided by those companies.

It's kind of caused maybe some turmoil, and we see it come through in the survey responses, in terms of - call it experimentation, rather than real adoption and pervasion. I'm curious, how do you combat that impatience or that trepidation, being that you are, at this stage, a smaller vendor with, admittedly, maybe a smaller workforce? What's the solution from your end, especially since you work very closely with - I think you said - the majority, if not all, of your customers being Global 2000 enterprises? Maybe you could just expound on that a little bit more for us.

Daniel Lizio-Katzen

Sure. And that's an important question. While we do go to market directly with a few key customers, we do have quite a few different resellers that we work with, primarily in the Northeast today, because that's where most of our clients are, and they can assist with certain things. Case in point: setting up the platform so that your developers can deploy to containers is cool, right? It can potentially change a lot, save a lot of money, and really streamline development, but the way you get there is by actually moving applications to containers - or, in the future, moving AppDev to serverless.

We don't, as a firm, typically handle the containerization of existing applications. That's something we work with partners on. We really want to make sure that that CI/CD pipeline and the entire software development life cycle is streamlined as much as possible, so that when a developer checks in code, they get the environment they want that's been blessed by operations and the security team, right?

Where we like to step back is after providing kind of a template application - here's a best practice for how the SDLC should work with both serverless and containers. Then we hand off to a reseller or a value-added reseller, who is going to come in and help that large firm actually take their hundreds, if not thousands, of existing legacy applications, start to migrate those, and train up that internal staff. We're not trying to slay every dragon ourselves.

Peter Steube

I appreciate that, Daniel. If I could then add on top: a common question we get asked when we talk to folks like yourself is, when you talk about day-one use cases, the CIO is always concerned - alright, I would really like to expand with this business. What's the roadmap, in terms of pervasion, at day 180 or year two? And can that be done with the CIO's existing IT skills, or what's the support needed there? You had mentioned some training, in terms of maybe getting the IT team up to speed on your platform. Can you walk us through a similar situation or a success story - day one, year one, and then year two - what that kind of looks like?

Daniel Lizio-Katzen

Sure. Really, our goal is to make the end client - those Fortune 100s and Global 2000s - self-sufficient. And one of the ways we do that is we take away a lot of the complexity that DevOps has started to bring into the picture: those complexities of tooling. For instance, today, if we look at what's kind of the modern DevOps pipeline, you may have a dozen or even twenty different tools in place to get from your code check-in, all the way through your test running, to your actual build, to your deployment.

Initially, we're not trying to actually pull out many of the existing tools; we're merely trying to help stitch them together. But over time, usually after six to eight months, we start to see some of the tools in your typical VM-based SDLC and CI/CD pipeline start to disappear. We see, you know, Red Hat Enterprise and VMware get supplanted by Docker, because you don't need to virtualize the servers to the extent that you did previously.

You start to see things like Bitbucket and GitHub become more than just source control. Some tools, like GitLab, today actually include part of that build pipeline. You see things like JFrog Artifactory come in, in terms of repo management. You also see tools like Jenkins and Ansible and Chef and Puppet becoming less relevant. Over that six-month to two-year timeframe, we see that CI/CD pipeline start to get simplified.

So, that helps existing teams, because a lot of these large enterprises haven't really even embarked too much on DevOps, because just understanding that model and CI/CD is much different than having your traditional build pipeline off a waterfall development model. Helping them simplify what ends up being their end state is one piece.

The second piece is that Function as a Service, or serverless, by its nature, is really just doing what developers do today, which is write functions. And really, if you think about a microservice architecture and a serverless or Function as a Service-based architecture, there are a lot of similarities. As companies start to move away from these big, monolithic applications and some of the early service-oriented architectures towards microservices and Function as a Service, what ends up happening is you're actually simplifying a lot of that development life cycle and a lot of the architecture. That simplification means faster time-to-market.

And because of the code support, because it's not like we're making everybody learn Go or making everybody learn some new programming language, it can be accomplished with the tools that are being used by those developers today. Whether that's a Java dev stack and the associated IDEs, or a Microsoft dev stack, or even a Python dev stack, these are all things that are supported. It means that your developers don't necessarily need to be re-trained, which is faster time-to-market. They don't need to learn or implement dozens of new tools, which is faster time-to-market and easier troubleshooting.

And really, when you start taking advantage of the consistent deployment target - meaning your dev environment now is exactly the same as your QA environment is exactly the same as production - that also is faster time-to-market. What we see after a year or two is that the software development life cycle, and even the product development life cycle, starts to get streamlined and faster. It's not because of some new magic process that's being implemented; it's just taking a lot of this tool clutter and a lot of these different steps that DevOps has brought to the developer and simplifying them.

Peter Steube

Thanks, Daniel. I think you actually touched on it, or you're painting around it - and if you'd prefer not to drive a stake through it, that's perfectly fine - but I also had it listed here on my list of questions: what role do configuration management tools like Puppet, Chef, and Ansible play with Galactic Fog in place? I'm curious, and I'm going to say it in a constructive way: the response in terms of spending from our CIO community on those tools has been notably down, in terms of their future plans. One of the things we're trying to dig into here is why that might be, and I think it might be coming a little bit at the hands of disruption by technologies like yours. Is that fair to say? Or can you comment on that a little bit more?

Daniel Lizio-Katzen

It is. That's not necessarily just the benefit of the Gestalt platform; that's something that containers bring to the table. When you look at Ansible, Chef or Puppet, you're talking predominantly about the configuration of the VMs that you're deploying to. And whether that's a VM behind an API offered by Amazon or Microsoft or Google, or a VM in your own datacenter, as you move to containers, you don't need the same configuration. You have things like Helm charts describing the configuration of a specific container. Once that container has been defined and the associated services have been specified through a Helm chart or some other type of artifact, what you actually end up having is no need for Ansible, Chef or Puppet.
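
To see why the convergence tooling disappears, consider what a container deployment spec carries. A Helm chart expresses this declaratively in YAML; the Python rendering below is only a simplified sketch of the same idea, not a real chart schema.

```python
# The configuration-management point in miniature: with containers, the
# environment is baked into an immutable image plus a declarative spec,
# so there is no drifting VM to converge with Ansible/Chef/Puppet.
# This dict is a simplified stand-in for what a Helm chart expresses.
import json

deployment = {
    "image": "registry.example.com/billing-service:1.4.2",  # immutable build
    "replicas": 3,
    "env": {"DB_HOST": "db.internal", "LOG_LEVEL": "info"},
    "service": {"port": 8080, "protocol": "TCP"},
}

# Redeploying means submitting a new spec with a new image tag -- there is
# no step where a tool logs into a running machine to mutate its state.
if __name__ == "__main__":
    print(json.dumps(deployment, indent=2))
```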

I think that they're each trying to look for new revenue streams and ways to stay relevant as the deployment of VMs is slowly pushed aside by containers and, in the future, serverless. That's probably why you're seeing the decline in spend there.

Erik Bradley

Thank you very much for that answer, Daniel. There's another thing that you spoke about in your last response that I wanted to touch upon. You had mentioned that the majority of your clients are in the Global 2000. You also had mentioned the Fortune 100. Peter and I have had the benefit of speaking to Chief Infrastructure Architects of the Fortune 100 recently, and what we've learned is that scalability in a given technology can be a very relative term. What I mean by that is, what's sufficiently scalable for a Global 2000 organization may not be suitable for a Fortune 100 enterprise.

My question would be: what is the true scalability of Functions as a Service in general, and why do people in a Fortune 100 need to be looking at this technology in not just a playing-around way, but in a true adoption and deployment way? And then the follow-up to that would be: for the Gestalt platform in particular, how truly scalable is your product?

Daniel Lizio-Katzen

Sure. I'm going to address that second question first. We are not Software as a Service; we are installed on our clients' clusters, so the scalability of Gestalt is really dictated by the capacity of the underlying cluster it's installed on. We are a relatively young company, so if someone stuck us on AWS EC2 in an account with unlimited autoscaling, and then started pointing every single trade quote at us as inbound data and wanted to do some expensive compute ETL on that, I'm sure we could find the upper bounds of our scalability.

However, we have been deployed on a 20,000+ core cluster at one of our clients, one of the top five banks in the United States, on top of their grid. We have scaled out a process that handles over half a billion transactions per day, doing ETL on each of those; it takes upwards of 4,000 cores at peak processing, and that's 4,000 concurrent cores.

When you start talking about the scalability, one of the things we've seen, particularly in finance, and specifically with large banks, is that they each have these very large compute grids - whether they're Symphony or other types of provisioned platforms - and many of them run into the multiple hundreds of thousands of cores of compute. What they also find is that they're massively under-utilized. They have billions of dollars invested in these grid compute platforms, and they're running at anywhere from 8% to 14% when you look at 24/7 utilization, which means there's a lot of money being left on the table.

The big benefit that you can get with serverless, especially in the very large enterprise, is that you can scale your applications out wider than they may have been allowed to previously. Something that was locked up on a very large single server - say, something that had a couple of terabytes of RAM and 128 cores - can now be allowed to sprawl out to 4,000 or 5,000 cores and actually execute more quickly, in terms of calendar time, as opposed to the total compute time for all those functions.
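
The calendar-time point is simple arithmetic: the same total compute finishes far sooner when it can fan out. A tiny sketch with illustrative numbers (the core counts echo Daniel's example; the workload size is invented):

```python
# Same total compute, less waiting: ideal-case elapsed time if the work
# parallelizes cleanly across the available cores.
def wall_clock_hours(total_core_hours: float, concurrent_cores: int) -> float:
    return total_core_hours / concurrent_cores


if __name__ == "__main__":
    total = 4_000.0  # core-hours of work in a hypothetical batch job
    for cores in (128, 4_000):
        print(f"{cores:>5} cores -> {wall_clock_hours(total, cores):6.2f} h wall clock")
    # 128 cores -> 31.25 h; 4,000 cores -> 1.00 h
```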

What ends up happening is that you can start to stack some of these applications based on the business's need, as opposed to the SLA that the infrastructure team has to meet. When you talk about scalability, you end up really having to rethink how scalability works with serverless. Because when you're able to start auto-scaling those functional invocations, you can end up with what we call true web scale-type processing, especially when you start opening it up to multi-hundred thousand core compute environments.

Then, back to that second question: I'm sure there are some places where we're going to fall down and have to go back and refactor parts of our platform. That said, we are built on top of Scala and Akka Streams, which are both known to be extremely performant. There's some very good architecture in the platform. Anthony Skipper, who's our founder and CTO, worked for a long time at Merrill Lynch on some of their large computing problems, as well. We've got a little history on the high-performance side.

Erik Bradley

Those are really great answers and very helpful for our community members. One last thing we didn't touch upon, and it also kind of ties into scalability: it's an opportunity for you to address our audience, in general, about the pricing model that you employ. What does the pricing look like for my first trial with you guys? And what does the cost look like after full-scale adoption?

Daniel Lizio-Katzen

Sure. We have a 60-day free trial when you install it on your own cluster, so that you can play around with it and make sure that it's driving the value you're expecting. Shortly - this month - we will also be up for a free trial in Google Cloud Platform. You'll be able to deploy Gestalt, play around with it, and actually migrate ECS workloads - which are Amazon - straight over to Google, if that's your thing. Inter-cloud, or multi-cloud, is a big thing in the enterprise these days, although we don't necessarily see people using it as much as they talk about it.

In terms of our pricing specifically, we charge on a consumption model, much like the public cloud: we meter on compute seconds. We can also, if it's easier for clients to budget, work on per-core pricing. To us, it's an either/or; we don't really care too much which side you come down on.
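
For a rough sense of how those two pricing shapes compare, here is a back-of-envelope sketch. All rates and workload numbers are invented for illustration; actual prices are not given in this call.

```python
# Back-of-envelope comparison of metered compute-seconds vs. flat per-core
# pricing. Every rate below is an invented illustration.
def consumption_cost(compute_seconds: float, rate_per_second: float) -> float:
    return compute_seconds * rate_per_second


def per_core_cost(cores: int, rate_per_core_month: float) -> float:
    return cores * rate_per_core_month


if __name__ == "__main__":
    # A bursty workload: 50 cores fully busy only 2 hours a day for a month.
    busy_seconds = 50 * 2 * 3600 * 30
    print(f"consumption: ${consumption_cost(busy_seconds, 0.0001):,.2f}")  # $1,080.00
    print(f"per-core:    ${per_core_cost(50, 40.0):,.2f}")                 # $2,000.00
    # Consumption favors spiky usage; per-core is easier to budget.
```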

Erik Bradley

Great. I appreciate that, Daniel. Thank you very much. Peter, I know you had another question you were waiting to get in.

Peter Steube

Yeah. I know, Daniel, we'll wrap up with you here in a little bit, and we really do appreciate your time and insights today. I think one final question from me would come from the folks that pay our bills in the investment universe. I know they're always curious about where you guys stand, funding-wise. What are the next steps? And to the extent that you could share, if this is the proper forum for you, we'd like to get a little bit of a better understanding of where you guys are there, and where you might be headed.

Daniel Lizio-Katzen

Sure, today we are a venture-backed enterprise. We've been around since 2014. I would say that, due to our client base and the size of our contracts, we're not profitable yet, but we're heading towards profitability very soon. We do expect to raise another round of funding while we're on that path. Based on the success we've had over the past 18 months since we reached general availability, we're very excited about the future.

Peter Steube

Okay. That sounds great, Daniel. And we would be remiss not to give you the opportunity to address anything that we didn't ask you today. Is there anything that we're missing that our community should be aware of when it comes to you guys? We leave the floor to you.

Daniel Lizio-Katzen

Oh, I appreciate that very much. I think the couple of things that I would touch on last are really just why we see the industry's willingness to adopt Container as a Service and Function as a Service now. Really, we owe a lot of this to Amazon. Amazon's done a terrific job selling the scalability, the security and the functionality of the public cloud. That's great, especially for startups and even for some larger firms.

However, when you start to talk about the Fortune 100 and even the Global 2000, there are a lot of firms that have a lot of businesses that deal either with proprietary software, or with systems that predate the internet - if you're talking about life insurance, specifically. They want to use some of these new technologies, but they don't necessarily have the flexibility to move everything to the public cloud. Or maybe they want to avoid a new type of lock-in.

We think that, with Amazon out there in the market, and now with Google and Microsoft really chasing them in the enterprise, they're doing a great job talking about containers and making everybody want to adopt them. Even since the beginning of 2018, we've seen that really start to increase the amount of adoption.

The second thing I'd say is, you talked a little bit earlier about how DevOps is very excited about Function as a Service and serverless, and while that's certainly the case, I think we actually see more excitement on the side of CIOs and the people in charge of application developers. The top 50 financial services and insurance companies alone have over half a million developers across their companies - companies like JPMC, Bank of America, Prudential, AIG. With that half million developers, literally, if you can save ten minutes a week across that half million developers, the amount of cost savings that's introduced for free into those organizations is really phenomenal.
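
Worked through, the arithmetic behind that claim looks like this. The developer count comes from Daniel's figure; the loaded hourly cost is an assumed number purely for illustration.

```python
# The ten-minutes-a-week claim, worked through. The developer count is
# from the interview; the loaded hourly cost is an assumption.
developers = 500_000
minutes_saved_per_week = 10
loaded_cost_per_hour = 100          # assumed fully-loaded $/hour per developer

hours_per_week = developers * minutes_saved_per_week / 60
annual_savings = hours_per_week * 52 * loaded_cost_per_hour

print(f"{hours_per_week:,.0f} developer-hours saved per week")          # ~83,333
print(f"${annual_savings:,.0f} per year at ${loaded_cost_per_hour}/h")  # ~$433M
```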

We see it firsthand, that the biggest adopters of Function as a Service are actually going to be the CIOs and the application development teams that are looking to really scale the output of their organizations, as opposed to having to scale the organizations. We're excited about what we're seeing there and expect those development teams to really start grabbing Function as a Service by both ears and adopting it more quickly.

Peter Steube

Got it. For anybody on the line or in our community, to get ahold of you, obviously, they could do that through your guys' website. But will you guys be at AWS in a few weeks? What's the best way to interact with you folks upcoming?

Daniel Lizio-Katzen

We will be at KubeCon, or anyone can contact us through our public Slack channel. There are a multitude of ways to contact us, and we're very, very responsive. So, we'll be at KubeCon. We may be at Ignite, although we don't have a booth. And we're always available through our website.

Peter Steube

Alright. Sounds good, Daniel. We appreciate your time today. For everybody who's still on the line but may have had to miss a few details, we will have a full transcript and a summary of today's discussion out to you in a few days. Thanks a lot for the interview and your time, Daniel. We really appreciate it.

Daniel Lizio-Katzen

Absolutely. Thank you for the opportunity.
