Cloud-native streaming and queuing as a service

Powered by Apache Pulsar

Introduction

Streaming

Need to handle a stream of data, such as events or logs, in real-time? Need to make sure you don’t lose any of that data? Need to be able to replay those events or logs if something goes wrong? Kafkaesque has you covered. 

Powered by the proven, world-class, and open-source Apache Pulsar technology, Kafkaesque can handle your stream processing use cases with ease. With our private cluster deployments, you can scale to millions of topics and process hundreds of thousands of messages per second.
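Curious what that looks like in code? Here’s a quick sketch of publishing a stream with the Pulsar Java client. The service URL, token, tenant, namespace, and topic names are placeholders; use the values from your Kafkaesque dashboard.

```java
import org.apache.pulsar.client.api.AuthenticationFactory;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

public class PublishEvents {
    public static void main(String[] args) throws Exception {
        // Placeholder service URL and token; copy the real ones from your dashboard.
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar+ssl://your-cluster.kafkaesque.io:6651")
                .authentication(AuthenticationFactory.token("YOUR-JWT-TOKEN"))
                .build();

        // Placeholder tenant, namespace, and topic.
        Producer<byte[]> producer = client.newProducer()
                .topic("persistent://my-tenant/my-namespace/events")
                .create();

        // Each send() returns only after the broker has acknowledged the message.
        for (int i = 0; i < 10; i++) {
            producer.send(("event-" + i).getBytes());
        }

        producer.close();
        client.close();
    }
}
```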

(It’s totally free and takes less than a minute to sign up.)

Queuing

What if you need to queue up messages for a group of worker tasks to process? What if you need to send the same message to multiple consumers? What if you want someone else to keep track of which messages have been acknowledged? Kafkaesque has you covered here too.

Kafkaesque supports shared, exclusive, and failover subscription modes, meaning you can set up just about any delivery pattern you want. And it automatically keeps track of which messages have been consumed.
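For a rough idea of what that looks like, here’s a sketch of a worker task consuming from a shared subscription with the Pulsar Java client. The endpoint, token, and topic names are placeholders; swap SubscriptionType.Shared for Exclusive or Failover to change the delivery pattern.

```java
import org.apache.pulsar.client.api.AuthenticationFactory;
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.SubscriptionType;

public class WorkerTask {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar+ssl://your-cluster.kafkaesque.io:6651") // placeholder endpoint
                .authentication(AuthenticationFactory.token("YOUR-JWT-TOKEN"))
                .build();

        // Every worker that subscribes with the same subscription name shares the
        // message stream; the broker tracks which messages have been acknowledged.
        Consumer<byte[]> consumer = client.newConsumer()
                .topic("persistent://my-tenant/my-namespace/jobs") // placeholder topic
                .subscriptionName("worker-group")
                .subscriptionType(SubscriptionType.Shared)
                .subscribe();

        while (true) {
            Message<byte[]> msg = consumer.receive();
            try {
                process(msg.getData());
                consumer.acknowledge(msg);          // mark as done
            } catch (Exception e) {
                consumer.negativeAcknowledge(msg);  // ask for redelivery
            }
        }
    }

    private static void process(byte[] data) {
        // Your work goes here.
    }
}
```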

(Did we mention it’s free? No credit card needed.)

What you get

Features

Durability

Can’t afford to lose a message? Messages sent to Kafkaesque are written to multiple disks before being acknowledged to the producer.

Want to keep messages even after they’ve been consumed, in case you need them later or you’re doing event sourcing? No problem. Kafkaesque can retain messages for days, weeks, or even forever on our advanced plans.
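If your plan exposes the Pulsar admin API, a retention policy can be set with a call like the sketch below; the admin endpoint, token, and namespace are placeholders, and on managed plans you can also just ask us to set it for you.

```java
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.client.api.AuthenticationFactory;
import org.apache.pulsar.common.policies.data.RetentionPolicies;

public class SetRetention {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("https://your-cluster.kafkaesque.io:8443") // placeholder admin URL
                .authentication(AuthenticationFactory.token("YOUR-JWT-TOKEN"))
                .build();

        // Keep acknowledged messages for up to 7 days or 10 GB;
        // -1 for both values means retain forever.
        admin.namespaces().setRetention("my-tenant/my-namespace",
                new RetentionPolicies(7 * 24 * 60, 10 * 1024));

        admin.close();
    }
}
```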

Real Time

The technology behind Kafkaesque has been designed to support real-time processing of messages with consistently low publish latency. It can even keep latency low during (zero-downtime) fault recovery and maintenance.

With our dedicated cluster deployments, we tune the cluster for optimal latency and continuously monitor for latency outliers.

Scale and Performance

Need to scale to hundreds of thousands or even millions of topics? Not a problem, we can do that. 

Have some high-volume topics? Create multiple topic partitions to increase throughput.

Don’t want to mess around with topic partitions for low volume topics? Piece of cake. Topic partitions are optional. Just publish and subscribe to a topic, no partition needed.
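To give you an idea, here’s how a partitioned topic could be created with the Pulsar admin client; the endpoint, token, topic name, and partition count below are placeholders.

```java
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.client.api.AuthenticationFactory;

public class CreatePartitionedTopic {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("https://your-cluster.kafkaesque.io:8443") // placeholder admin URL
                .authentication(AuthenticationFactory.token("YOUR-JWT-TOKEN"))
                .build();

        // Spread a high-volume topic across 8 partitions. Producers and consumers
        // still use the single topic name; the partitions stay behind the scenes.
        admin.topics().createPartitionedTopic(
                "persistent://my-tenant/my-namespace/clickstream", 8);

        admin.close();
    }
}
```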

Geo-Replication

Have an application with users around the world, or need to keep copies of messages at a remote location for disaster recovery? Not an issue. Geo-replication in Kafkaesque is a snap.

In fact, if you sign up (it’s free) you can try it out for yourself. Just use the test clients on the dashboard to send messages in the worldwide namespace. All messages in that namespace are automatically replicated around the world. 
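Under the hood it’s just a replicated namespace, so as a sketch, publishing to it with the Java client would look like this; the endpoint, token, and tenant name are placeholders, and the worldwide namespace in the topic name is what makes the messages replicate.

```java
import org.apache.pulsar.client.api.AuthenticationFactory;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

public class WorldwidePublish {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar+ssl://your-cluster.kafkaesque.io:6651") // placeholder endpoint
                .authentication(AuthenticationFactory.token("YOUR-JWT-TOKEN"))
                .build();

        // Topics under the replicated "worldwide" namespace are copied to every region.
        Producer<byte[]> producer = client.newProducer()
                .topic("persistent://my-tenant/worldwide/global-events")
                .create();

        producer.send("hello from one region".getBytes());

        producer.close();
        client.close();
    }
}
```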

Sign up now and check it out.

Security

By default all connections to Kafkaesque are authenticated and encrypted. You can’t connect in plain text even if you wanted to.

All our certificates are signed by public certificate authorities (CAs), so there is no need to add privately signed certificates to your trust stores to connect. Just point to the default trust store, specify your signed token, and away you go.

Multi-Cloud

Building applications in multiple clouds? Want to avoid cloud lock-in? Kafkaesque has got you covered. 

We can operate in all major cloud providers (AWS, GCP, and Azure) and are experts in these cloud environments.

Want to replicate messages published in one cloud to another? No sweat. Just set up a cluster in both cloud providers and turn on geo-replication. It’s that easy.

APIs

Need to support multiple programming languages (or maybe just your favorite)? We’ve got you covered with high-level APIs for Java, Python, Go, and C++, with more on the way.

Don’t want to embed a client library in your application? You can use the WebSocket API. It even works well as a sidecar container in Kubernetes.
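For example, a producer that uses nothing but Java’s built-in HTTP client could look like the sketch below. The host, port, and token query parameter follow Pulsar’s WebSocket producer conventions but are placeholders; check your dashboard for the exact URL.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.Base64;
import java.util.concurrent.CompletionStage;

public class WebSocketProduce {
    public static void main(String[] args) throws Exception {
        // Placeholder URL built from Pulsar's WebSocket producer path:
        // /ws/v2/producer/persistent/{tenant}/{namespace}/{topic}
        String url = "wss://your-cluster.kafkaesque.io:8500"
                + "/ws/v2/producer/persistent/my-tenant/my-namespace/events"
                + "?token=YOUR-JWT-TOKEN";

        WebSocket ws = HttpClient.newHttpClient()
                .newWebSocketBuilder()
                .buildAsync(URI.create(url), new WebSocket.Listener() {
                    @Override
                    public CompletionStage<?> onText(WebSocket webSocket, CharSequence data, boolean last) {
                        // The broker sends back a JSON acknowledgment for each message.
                        System.out.println("ack: " + data);
                        return WebSocket.Listener.super.onText(webSocket, data, last);
                    }
                })
                .join();

        // Messages are sent as JSON with a base64-encoded payload.
        String payload = Base64.getEncoder().encodeToString("hello".getBytes());
        ws.sendText("{\"payload\": \"" + payload + "\"}", true).join();

        Thread.sleep(2000); // give the acknowledgment a moment to arrive before exiting
    }
}
```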

Have an application written for Apache Kafka? Try out the Kafka wrapper for zero code change porting. 
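The idea, roughly: swap the kafka-clients dependency for Pulsar’s Kafka client wrapper and point bootstrap.servers at your Pulsar service URL; the Kafka producer code itself stays the same. The sketch below uses placeholder names and leaves out authentication settings.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class KafkaWrapperExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        // With the Pulsar Kafka wrapper on the classpath instead of kafka-clients,
        // bootstrap.servers takes a Pulsar service URL (placeholder shown here).
        props.put("bootstrap.servers", "pulsar+ssl://your-cluster.kafkaesque.io:6651");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // Unchanged Kafka producer code, now publishing to a Pulsar topic.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("persistent://my-tenant/my-namespace/events", "key", "hello"));
            producer.flush();
        }
    }
}
```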

Using Apache Spark or Apache Storm? There are adapters for those too. 

Dedicated and Private Clusters

Have high scale or performance requirements? We can set up a dedicated cluster in your cloud provider of choice that can horizontally scale as your needs grow.

Need private access to your dedicated cluster? We can set up VPC-peering configurations so you can use private IP addresses to access your cluster.

Need to run your cluster in your own cloud account for security and governance reasons? We can do that too. Just ask us about it.

Fully Managed

With 24/7 monitoring we make sure your messaging system is always up and running. We handle upgrades, security patches, and other maintenance activities.

You can keep tabs on your messaging system using our state-of-the-art dashboard, but we’ll make sure everything is running in top form for you.

We handle the Ops, so you can focus on the Dev.

Streaming Functions (Coming Soon)

Do you need to clean, categorize, or count your data as you move it? Well, there’s no need to integrate with an external system. You can do that and more in real time as your data moves through Kafkaesque.

By creating custom, lightweight functions in Java or Python, you can process data in real time as it moves through Kafkaesque. You write the streaming function, upload it, and it becomes part of your managed service.
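Since Kafkaesque is powered by Apache Pulsar, expect streaming functions to look a lot like Pulsar Functions. As a sketch only (the class, names, and use of built-in counters are illustrative), a Java function that cleans and counts messages might be:

```java
import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

// Lower-cases each incoming string and counts how many messages it has seen.
public class CleanAndCountFunction implements Function<String, String> {
    @Override
    public String process(String input, Context context) {
        // Built-in state: a simple counter (requires stateful functions to be enabled).
        context.incrCounter("messages-seen", 1);

        // Returning null drops the message; otherwise the cleaned value is published
        // to the function's output topic.
        return input == null ? null : input.trim().toLowerCase();
    }
}
```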

Streaming functions unlock powerful new use cases for Kafkaesque and are coming soon. If you want more info, don’t hesitate to contact us.

Connectors (Coming Soon)

Moving data in or out of HDFS, Mongo, Cassandra, SQL, Elasticsearch, Kafka, or even Twitter? Well, it’s easy with source and sink connectors.

With just a few clicks, you can connect your Kafkaesque service to 3rd party sources and get your data flowing in no time. 

Connectors make integrating with your existing data sources a snap. Interested? Ask us about them.

Schema Registry (Coming Soon)

How do you make sure your decoupled producers and consumers are talking the same language? You need to have a schema registry. 

With a schema registry, your messaging clients can be sure the types of the messages they are exchanging are compatible. 

You are not locked into a single schema format. JSON, Protobuf, and Avro are all supported, so it’s up to you.
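To give you a feel for it, here’s roughly what declaring a JSON schema looks like with the Pulsar Java client once the registry is available; the endpoint, token, topic, and the SensorReading class below are placeholders, and Avro works the same way via Schema.AVRO.

```java
import org.apache.pulsar.client.api.AuthenticationFactory;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Schema;

public class SchemaExample {
    // A plain POJO; the JSON schema is derived from its fields.
    public static class SensorReading {
        public String sensorId;
        public double value;
    }

    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar+ssl://your-cluster.kafkaesque.io:6651") // placeholder endpoint
                .authentication(AuthenticationFactory.token("YOUR-JWT-TOKEN"))
                .build();

        // The registry checks that producers and consumers on this topic use
        // compatible types before they can connect.
        Producer<SensorReading> producer = client.newProducer(Schema.JSON(SensorReading.class))
                .topic("persistent://my-tenant/my-namespace/readings") // placeholder topic
                .create();

        SensorReading reading = new SensorReading();
        reading.sensorId = "sensor-42";
        reading.value = 21.5;
        producer.send(reading);

        producer.close();
        client.close();
    }
}
```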

Want to learn more? Contact us.