This software is provided under a dual license. You may choose to use it under the terms of either:
1. GNU General Public License, Version 2 (GPLv2), or
2. GNU Affero General Public License, Version 3 (AGPLv3).
You may use, copy, modify, and distribute this software under the terms of either license, at your option. The full text of both licenses is included below for reference.
IANAL, but isn't GPLv2 strictly more permissive? Why would anybody not just use it under that one and ignore the AGPL?
The reason is that we use a layer (our main innovation) called DataSubstrate to build modular databases that support distributed transactions [1]. Because we use some MariaDB code (GPLv2) and some MongoDB code (AGPLv3) in our other projects [2] [3], we license our DataSubstrate code to be compatible with both, and therefore we also license EloqKV under both licenses.
[1] https://www.eloqdata.com/blog/2025/07/14/technology
[2] https://github.com/eloqdata/eloqsql
[3] https://github.com/eloqdata/eloqdoc
I don’t think end users can pick a license, because you have pieces licensed under GPLv2 and AGPLv3. They will have to abide by both, which is tricky if they conflict.
The EloqKV developers could choose to pick just the AGPL one in the future and drop GPLv2 support, while taking contributors' dual-licensed contributions with them...
I don't know that's the plan, but it's the best reason I see to dual license like this.
How do they properly accept contributions under dual licensing in a way that allows them to re-license those contributions? Through a CLA? I'm not certain what you're saying is true - dropping one license might be as challenging as changing the license. Maybe I misunderstand.
The same way you accept any contribution, because it isn't technically relicensing. You already granted them an AGPLv3 license when you uploaded your change to GitHub without modifying the license file - that's what the "or" in the file means [1] - which entitles them (and anyone else) to create and distribute derivative works under only the AGPLv3 without any GPLv2 grant...
[1] Quoting the license file:
This software is provided under a dual license. You may choose to use it under the terms of either:
1. GNU General Public License, Version 2 (GPLv2), or
2. GNU Affero General Public License, Version 3 (AGPLv3).
As a contributor, couldn’t I use it under the terms of the GPL and make GPL-licensed derivative works and ignore their AGPL nonsense? If it is GPL then I understand that I am under no obligation to license my contributions under AGPL.
I am not a lawyer; this is not legal advice.
That’s right. You can simply choose GPL and ignore the AGPL part for EloqKV. The reason we use both is that, in other projects, we need to support both GPL and AGPL. EloqKV and all its dependencies are either developed by us or licensed under more permissive terms, so you can choose either license. However, EloqDoc is under AGPL (https://github.com/eloqdata/eloqdoc) and we cannot relicense it under GPL because it includes some AGPL-licensed code from MongoDB.
Yes, of course. Dropping GPLv2 support (or you dropping AGPLv3 support in your fork) only affects future changes made after dropping support... you can't revoke the GPL licenses on the existing commits.
Presumably they aren't going to merge any changes that drop the AGPL license...
Also not a lawyer.
You could license your contributions (if substantial enough to warrant copyright) with any license, for instance BSD 2 clause. (If your license is not compatible with the GPLs they should not accept your contribution.)
I would be honestly interested to know what sort of actual paid-for legal advice they got about this.
IANAL, but dual licensing with two conflicting styles of license just stinks of something that will come back and bite them in the backside big-time in the future.
From a purely practical perspective it is also not clear what the point is. Quite clearly, the people you want to be subject to the AGPL (i.e. the big evil cloud providers) will simply take you at your word and run under the GPL.
Really appreciate it. I am the CEO of EloqData. We submitted the Show HN about a year ago [1]. Since then we have made a lot of progress; much of the work is based on feedback from the great HN community, including:
1) Open Source (GPL and AGPL) (thanks PeterZaitsev).
2) Session-based transactions in the Redis API (thanks fizx).
3) Better explanation of the architecture [2] (thanks apavlo).
4) Testing with Jepsen (internally for now; we will do it officially when we have the resources) (thanks jacobn, among others).
Again, thanks; we really appreciate the community support. Please go to our website [3] or join our Discord channel to provide more feedback.
[1] https://news.ycombinator.com/item?id=41590905
[2] https://www.eloqdata.com/blog/2025/07/14/technology
[3] https://eloqdata.com
Very interesting project. Couple of notes:
* What is the back story here? Why create this in the first place?
* Seems like it's less performant than Dragonfly. Why not consolidate effort and help Dragonfly instead?
* To ride the AI wave, y'all need to add vector-related features, similarity search, etc.
Great question. Currently, the database landscape is very fragmented. We are faced with a multitude of database choices (different ACID guarantees, data modalities, scalability, and so on). Data pipelines become very complicated, and we believe there must be a better solution. That’s why we developed a common architecture called DataSubstrate and built different APIs on top of it. EloqKV with a Redis API is just one of them; we also provide a MySQL-API RDBMS and a MongoDB-API JSON database (both open-sourced). Our goal is to create the next-generation database foundation to support the growing demand from a new generation of applications. We believe future AI agent-driven applications will generate huge volumes of queries and data that will be difficult to handle with existing solutions.
EloqKV is only slightly slower than Dragonfly—about 10–20%—but for good reason. Dragonfly is a pure in-memory database with a highly optimized network layer and a very specialized design. EloqKV, on the other hand, is a full-featured database with all the checkboxes you can think of: fully consistent, durable, distributed transactions, fault-tolerant, tiered storage, and more. Despite this, we incur very little overhead compared with state-of-the-art, purpose-built databases under the same workload guarantees. Our thesis is that we may not need dedicated specialized solutions if we can achieve comparable performance (and cost) with general-purpose systems.
We will also add vector support very soon. Please stay tuned.
I have tested EloqKV for a pet project and it seems quite solid. Performance is fantastic, far out-performing most databases with durability by a large margin. I am not sure about the distributed transaction correctness, but all my tests seem to indicate it works as advertised, which is very interesting because the other distributed NewSQL databases are all rather slow. Haven't tried their SQL and Mongo solutions, but they also look quite interesting.
Interesting that it supports SQL transaction syntax. Does make it easier for traditional SQL users to switch to a Redis-interface database as their primary store.
It's weird that they don't mention similar distributed NewSQL databases like TiKV, which also has a MySQL layer (TiDB), and position themselves as a Redis replacement.
TiKV is very (very) slow (compared to Redis; not sure about EloqKV) and its Redis layer seems unmaintained. TiDB is a very interesting product; we tried to replace our massive MySQL (master with multiple read replicas/standbys) system with it and it is just so slow that it's really unusable. Maybe it's great for other goals, though, I don't know; I like the idea, so we tried some setups with it.
According to their benchmark results, read latency is comparable to Redis even under 'Persistent Transactional Mode'. Might be a solid choice for people who don't really need a complicated relational database but need the data to be persisted.
TiKV is a great project and we have a lot of respect for their work. EloqKV is based on a very different architecture [1], and we also have MySQL-compatible [2] and MongoDB-compatible [3] databases built on top of the same architecture. They all inherit the extreme performance, scalability, fault tolerance, and ACID properties due to the common underpinning.
[1] https://www.eloqdata.com/blog/2025/07/14/technology
[2] https://github.com/eloqdata/eloqsql
[3] https://github.com/eloqdata/eloqdoc
Titan/Tidis (the Redis-compatible servers built on top of TiKV) don't seem to have any recent activity in their public repos:
- https://github.com/yongman/tidis
- https://github.com/distributedio/titan
KVRocks (which _is_ mentioned) does, as does Valkey (also not mentioned, but probably only because it's not that different from Redis at this point IIUC).
A person who wants transactions and a relational layer isn't going to use KVRocks, but sure, if you only care about the key-value part. I gave tikv/tidb as an example; there are others.
Writing your own Redis-like interface is trivial, so tidis et al don't matter to me. Even with Redis you should write an interface so you can swap it out.
Some Redis variants focus on persistence—KVRocks, for example, or MemoryDB, which emphasizes durability through redo logs to minimize data loss. However, they are not truly transactional, since they lack fundamental rollback semantics and distributed transactions.
EloqKV, by contrast, is fully transactional. It supports distributed Lua, MULTI/EXEC, and even SQL-style BEGIN/COMMIT/ROLLBACK syntax. This means you get the transactional guarantees of a database with Redis-level read performance. Writes are slightly slower since EloqKV ensures durability, but in return you gain full ACID safety. Most importantly, you no longer need to worry about cache coherence issues between a Redis cache and a separate SQL database—EloqKV unifies them into a single, reliable system.
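To make the MULTI/EXEC side of that concrete, here is a minimal sketch using the standard redis-py client; the host name and keys are placeholders rather than anything EloqKV-specific, since the thread describes EloqKV as speaking the ordinary Redis protocol.

```python
import redis

# Placeholder endpoint; any Redis-protocol client should work.
r = redis.Redis(host="eloqkv.example.internal", port=6379, decode_responses=True)

# MULTI/EXEC: queue several writes and apply them atomically.
pipe = r.pipeline(transaction=True)   # transaction=True wraps the commands in MULTI/EXEC
pipe.set("order:1001:status", "paid")
pipe.incrby("account:42:balance", -2500)
pipe.sadd("settled_orders", "1001")
results = pipe.execute()              # all commands take effect together, or none do
print(results)

# The comment above also mentions SQL-style BEGIN/COMMIT/ROLLBACK sessions.
# If those are exposed as literal commands, they would be sent roughly as
#   r.execute_command("BEGIN") ... r.execute_command("COMMIT")
# but the exact command names should be checked against the EloqKV docs.
```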
Last I looked the TiKV Redis layers hadn't been updated in years, and were missing many Redis features (such as streams).
https://github.com/yongman/tidis
https://github.com/distributedio/titan
I have tried TiKV and TiDB but they are quite slow. EloqKV is much faster, especially for in-memory reads. I use it to replace KVRocks, which is just a single node KV store wrapper around RocksDB with a Redis API, and EloqKV nicely outperforms it.
Isn’t Redis already a distributed database with Redis Cluster?
I’m Hubert, CTO of EloqData. That’s a great question.
Redis Cluster is often thought of as a distributed database, but in reality it’s not truly distributed. It relies on a smart client to route queries to the correct shard—similar to how mongos works in MongoDB. This design means Redis Cluster cannot perform distributed transactions, and developers often need to use hashtags to manually place related data on the same shard.
EloqKV takes a different approach. It’s a natively distributed database with direct interconnects between nodes. You can connect to any node with a standard Redis client, and still read or write data that physically resides on other nodes. This architecture enables true distributed transactions across shards, fully supporting MULTI/EXEC and Lua scripts without special client logic or manual sharding workarounds.
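To illustrate the hashtag point, a hedged sketch (redis-py, placeholder hosts and keys): on Redis Cluster, multi-key operations are rejected with a CROSSSLOT error unless every key hashes to the same slot, which is what {hashtags} are used to force; the claim above is that EloqKV needs no such workaround.

```python
import redis

# Redis Cluster node (placeholder host): multi-key commands whose keys hash to
# different slots fail with "CROSSSLOT Keys in request don't hash to the same
# slot", so related keys are usually tied together with a {hashtag}.
rc = redis.Redis(host="redis-cluster-node.example.internal", port=6379)
rc.mset({"{user:42}:profile": "...", "{user:42}:session": "..."})  # same slot (assuming this node owns it)
# rc.mset({"user:42:profile": "...", "user:7:session": "..."})     # would raise a CROSSSLOT error

# EloqKV, as described above: connect to any node with a plain client and
# transact across keys regardless of which node physically stores them.
kv = redis.Redis(host="eloqkv-node-1.example.internal", port=6379)
pipe = kv.pipeline(transaction=True)
pipe.set("user:42:profile", "...")
pipe.set("user:7:session", "...")   # may live on a different shard
pipe.execute()
```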
Distributed Lua is cool. Is your implementation similar to DragonflyDB, which doesn’t allow handling undeclared keys in Lua? For example, if I want to generate a new key dynamically inside a script like:
`local queue_key = "queue:user:" .. uid`
How does your system handle such cases?
Yes, Lua in EloqKV has no such limitations. You can freely read data, generate new keys, and even query those keys within Lua scripts. Underneath, EloqKV’s transaction layer is powered by our data substrate, which provides full ACID guarantees. FYI https://www.eloqdata.com/blog/2025/07/14/technology
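For reference, the scenario from the question would look roughly like this in redis-py; the key pattern comes from the question, and everything else (host, arguments) is made up for illustration.

```python
import redis

r = redis.Redis(host="eloqkv.example.internal", port=6379, decode_responses=True)

# The key is built inside the script from an argument, i.e. it is never
# declared in KEYS[] up front -- the pattern Dragonfly rejects by default.
script = """
local queue_key = "queue:user:" .. ARGV[1]
redis.call("RPUSH", queue_key, ARGV[2])
return redis.call("LLEN", queue_key)
"""

queue_len = r.eval(script, 0, "1001", "job:payload")  # numkeys=0: no pre-declared keys
print(queue_len)
```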
Could you share a bit more about your specific use case? That will help me explain how EloqKV can best support it.
I built a SaaS app with per-tenant caches. Initially I used Redis but ran into scale-up issues, so I tried DragonflyDB. It works well in general, but my Lua script use case isn’t supported by default.
The use case is straightforward: each tenant has cached objects like: `cache:{tenant_id}:{object_id} → cached JSON/doc`
I also maintain a tag index to find all object IDs with a given tag: `tag:{tenant_id}:{tag} → set of object_ids (tag example: “pricing”, “profile”)`
When a tag changes (say “pricing”), I use a single Lua script to look up all object IDs in the tag set and then delete their cache entries in one atomic operation.
That use case aligns perfectly with EloqKV’s capabilities. In pure cache mode, batching multi-key deletions within Lua scripts can significantly reduce latency by minimizing client–server round trips.
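For what it's worth, the tag-invalidation script described above might look roughly like this (a sketch reusing the key patterns from the parent comment; the script body and argument layout are illustrative, not EloqKV-specific):

```python
import redis

r = redis.Redis(host="eloqkv.example.internal", port=6379, decode_responses=True)

# Look up every object id in the tag set and delete the matching cache
# entries in a single atomic script invocation.
invalidate_tag = r.register_script("""
local tag_key = "tag:" .. ARGV[1] .. ":" .. ARGV[2]   -- tag:{tenant_id}:{tag}
local ids = redis.call("SMEMBERS", tag_key)
local deleted = 0
for _, object_id in ipairs(ids) do
    deleted = deleted + redis.call("DEL", "cache:" .. ARGV[1] .. ":" .. object_id)
end
return deleted
""")

# e.g. invalidate everything tagged "pricing" for tenant 42
print(invalidate_tag(keys=[], args=["42", "pricing"]))
```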
Cool to see such a high throughput number with distributed cluster support
I like the idea: a KV store with Redis simplicity, but not just a memory store.
Yes, we found it to be very useful for things that require durability and transactions. Previously we used JuiceFS community edition with Redis as the metadata backend. The main issues were 1) potential metadata loss and 2) the memory capacity limit. We tried EloqKV and it seems to work really well. Anybody use EloqKV in production yet?
We’ve been using EloqKV to replace one of our largest Redis nodes (we didn’t want to run Redis Cluster, just a single big node). One pain point we had with Redis was the RDB fork causing latency jitter during persistence. EloqKV handles this much better — the fork-related stalls are gone, and so far it’s been a smooth drop-in replacement for our workload.
Thank you. Indeed we do already have several large multi-national companies using EloqKV in their production environments. Please contact us if you have any further questions. Moreover, we would be really interested to hear more details about your usage scenario. Metadata store for JuiceFS is a very interesting use case for us.