Library(redis) pipelining?

One currently missing feature is the ability to pipeline messages to Redis: sending multiple commands and only starting to read the replies after all commands have been sent. This reduces round trips and thus seriously improves performance. It is related to Redis transactions, which I think we should support as well. I’m wondering about the syntax though. One option could be

?- redis_multi(Server, 
               [ cmd1 -> result1,
                 cmd2 -> result2,
                 ...
               ]).

In theory we could overload this on redis/2 to create a pipeline by default and a transaction if multi and exec are used.

One issue is the Command -> Result syntax. I kind of like it, but it is not ISO compliant; ISO would require writing (Command -> Result), which makes it less attractive. Alternatively we could use

Result = Command

That would be fully ISO compliant. On the other hand, the library uses threads, strings, dicts, a bit of C code and quite a few non-ISO predicates, so portability is far away anyway.

What to do?

I have not used Redis or the Prolog Redis client yet so all of this is just thinking out loud.

Couldn’t the Redis transaction commands be parsed as a quasi-quotation and then passed through Prolog to the C-level interface?

?- redis_multi(Server, 
        "> MULTI
         > INCR foo
         > INCR bar
         > EXEC").

I am also thinking that, because each individual command gets a response and each individual command has to complete successfully before the next is started, this is like a JavaScript Promise.

I think pure Prolog is much more promising. Quasi-quotations are particularly interesting for languages with a complex syntax. Redis commands are really easy: just a list of the command name and its arguments. Then there is a pretty simple protocol for sending these commands that has no quoting issues, as all strings are prefixed with their length.
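For concreteness: in the RESP wire protocol a command is an array of length-prefixed bulk strings, so no quoting or escaping is ever needed. For example, the command INCR foo is sent as:

```
*2\r\n
$4\r\n
INCR\r\n
$3\r\n
foo\r\n
```

Here *2 announces an array of two elements and each $N announces a bulk string of N bytes.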

The trick with pipelines is that you send N commands, after which the server sends back N replies. The only gain is reducing (network) round trips. Then you can put MULTI … EXEC around it, which doesn’t change anything except that the commands are executed atomically.
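To make that concrete, with the list syntax proposed above a pipelined transaction could look like this. This is a sketch of the proposal only; redis_multi and the -> reply syntax are what is under discussion here, not a finalized API:

```prolog
% Sketch: send MULTI, both INCRs and EXEC in a single batch, then
% read all replies; F and B are bound to the new counter values.
?- redis_multi(Server,
               [ incr(foo) -> F,
                 incr(bar) -> B
               ]).
```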


I really like that syntax. Would using Command ~> Reply be ISO compliant (notice it is a tilde instead of a dash)? What about Command >> Reply?

That would work. We have had discussions about new operators before, notably with the lambdas provided by Ulrich and Paulo, and I tend to agree with Paulo that we should try to minimize adding more operators. So, for portability we need a standard infix operator with priority < 1000, preferably as close to 1000 as we can get. Hmmm. Running current_op(P, xfx, O), we find :=. That is already in use for dicts, so we can use Result := Command. That is a little less ideal, but acceptable IMO.


In that case, I would simply prefer Command - Reply; the reason is that Reply := Command does little to visually show that Command generates a reply.

I’m using Command - Reply in the redis test suite, but I’m not happy with it. My poor eyes have too many problems spotting the - :( It seems we both have a strong preference for “Command *** Result”, where *** is preferably something arrow-like. I think -> has my preference in that case, despite the portability issues.


Pushed several updates that add pipeline and transaction support as well as many documentation fixes and enhancements. Looks like this is starting to be a usable library.

Anyone caring about this stuff: please try to use it and see where it goes wrong, e.g., bugs, things you cannot do, things that seem too cumbersome/ugly/…


What about using => or some such more obviously special operator?

I still stand with what I said earlier.


By the way, I must say that I think this is a major addition to SWI-Prolog. Without this library, distributed Prolog programs (mostly) had to be done by hand; with it, writing distributed Prolog programs becomes much, much easier.

In addition, Redis is very minimal in terms of resources: about 5MB of disk plus your (optional) persistent DB, which is compressed. It can run on very small ARM devices, and it is perfect in terms of licensing: BSD, open source; every Linux distribution has packages for it.

So I think this is a much more important addition than it seems on the surface. Distributed systems are hard to do, and Redis helps tremendously, as its heavy usage in most of the high-speed, high-throughput distributed-systems industry shows.

Here I summarize some of the benefits (please feel free to add):

  • Prolog servers can go down (while the client keeps going with the Redis cache) and come back up without a glitch for the client.
  • Distributed Prolog programs are much easier to develop.
  • Special data structures are now easily available:
    – Geo-data: store latitude and longitude and easily calculate distance or presence within a radius with simple Redis commands
    – Bitmaps for large data and fast bit operations
    – Fast operations can be done within the Redis server using Lua scripts
  • Prolog programs can easily interoperate with programs in other languages on different computers, by just setting a key with data (no network coding, no need to write a Prolog server or support a communication protocol)
    – Also, Redis provides MSGPACK support within Lua scripts
  • More resilient Prolog applications by providing clusters with fault tolerance

These are just some of the advantages I can think of.


I completely agree. I’ve had several discussions in the past where people asked about horizontal scalability and Prolog cooperating in modern microservice architectures. My response was typically along the lines of “We have all the data types, we have the concurrency, we have the reliability and resource management to run services 24x7, and we have all the interfaces to talk to networks and other languages, so yes, we can do this.” All this is true, but the message typically didn’t really get through :( This library, and in particular a showcase if someone can create one, could be a big help. If we can prove we can do it using Redis, we have a much easier claim that we can do it with, e.g., Kafka as well.

This is still a bit of a problem. The issue is that in Redis, keys and values are just blobs (byte arrays). There are no types and, e.g., numbers are simply stored in decimal notation. As you may recall, SWI-Prolog’s policy is that all encoding issues are left to I/O. At this moment we use the following policy:

  • Any blob (bulk data in Redis speak) is interpreted as a UTF-8 string. If the blob starts with “\u0000T\u0000”, the remainder of the blob is interpreted as a Prolog term in canonical syntax (and UTF-8 encoding).
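For example, assuming a hypothetical term(T) wrapper that produces this \u0000T\u0000-prefixed canonical serialization, round-tripping a Prolog term could look like:

```prolog
% Hypothetical wrapper: term(T) stores T in canonical syntax with the
% "\u0000T\u0000" prefix, so reading the key yields the term back.
?- redis(default, set(mykey, term(point(1,2)))).
?- redis(default, get(mykey), Term).
% Term should unify with point(1,2) under this scheme.
```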

I’m still unsure how to deal with binary blobs. Representing them in Prolog is not a big problem: strings can do this fine. The problem is when to interpret a Redis blob as a UTF-8 string and when as a binary blob. For sending, we could use the same trick as with Prolog terms and wrap them in a blob term, as

redis(default, set(k, blob(Data)))

For reading it is a bit harder. For a single value we could go for

redis(default, get(k), blob(Data))

But what about an array of blobs? Well, maybe we make the wrapper a type indication for the nested content as well?
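Under that idea, the wrapper would act as a type annotation for each element of the array, e.g. (hypothetical syntax):

```prolog
% Hypothetical: every element of the reply array is treated as a
% binary blob, so Blobs becomes a list of binary strings.
?- redis(default, lrange(mylist, 0, -1), blob(Blobs)).
```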

P.s. Pushed a substantially modified API for redis_subscribe/2, which is now redis_subscribe/4.

I think a good solution for this would be to provide something of an extension to what we already do with library(regex). We can do something like this:

redis(default, get(mybinary)/stringblob, String)
... %etc ...

People store json, msgpack and other formats, so we can provide a user hook like this:

redis_convert_type(Command, Type, RedisData, UserData) :-
   % Here the user can convert RedisData to UserData.
   % It is important to also pass the Command (in addition to the
   % Type) because some commands could pass special data,
   % like markdown.
   ...

If the hook is defined, then library(redis) would call the user hook.

The question I have is whether the hook should be called only for types which are not provided by library(redis).
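As an illustration of the proposed hook (the hook name and the json type are part of the proposal, not an existing API), decoding JSON values with SWI-Prolog’s library(http/json) could look like:

```prolog
:- use_module(library(http/json)).

% Hypothetical user hook: when the requested type is json, parse the
% raw Redis string into a Prolog dict using atom_json_dict/3.
redis_convert_type(_Command, json, RedisData, Dict) :-
    atom_json_dict(RedisData, Dict, []).
```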

Thanks. That makes sense. I still have a little doubt about the syntax. For regex, these are really attributes of the regular expression and they refer to a known convention. Here they are not really attributes of the command, but of the return conversion. For redis/3, this would suggest a redis/4 as

redis(Server, Command, Type, Value).

This won’t work with the pipeline though. So, maybe still

redis(Server, Command, Type(Value)).

as this allows for

redis(Server, [ Cmd1 -> Type1(Value1), Cmd2 -> Type2(Value2) ]).

User-defined conversion may be troublesome as well. The main issue is how to make the initial (raw) value available. The most flexible option is probably an I/O stream, but creating an I/O stream for each value is a little costly :( For stuff like JSON, XML, etc., a stream is pretty adequate, though. This looks like the right direction to think about this problem.

P.s. The Type(Value) syntax does not allow for, e.g., list(atom). We also have the as operator, so what about this?

redis(Server, Command, Value as Type).

Now we can predefine a good set of types. For any not defined type we create a binary stream pointing at the value and call some hook to translate the value. Most of that should be complicated stuff anyway. Built-in we could have

  • atom
  • string
  • codes
  • chars
  • term (parse as Prolog term)
  • All the above with an (encoding) argument, e.g., atom(utf8).
  • number (maybe integer and float?)
  • list(Type)
    I wonder whether that is needed as we know it is a list?
  • list(pair(KeyType,ValueType))
  • dict
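With such a set of built-in types, typical calls might look like this (a sketch of the proposed syntax, not a finalized API):

```prolog
% Sketch of the proposed as-syntax: convert the bulk reply to the
% requested Prolog type.
?- redis(default, get(counter), Count as number).
?- redis(default, lrange(tags, 0, -1), Tags as list(atom)).
```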


Using the new library I frequently found myself calling

atomics_to_string([Prefix, something, Id], :, Key),
redis(Server, get(Key), Value).

This is a little awkward. I consider allowing for

redis(Server, get(Prefix:something:Id), Value)

i.e., make the low-level code translate a:b:… into “a:b:…”. Would that make sense?

I think this is great; along with the hook, it will suffice. I presume the hook will then be for non-built-in types?

RESP3 BigInts should become GMP numbers internally, I would think, and maybe we add number(rational) to provide complete support for rational numbers. I think people dealing with finances will find rational numbers a plus (there are so many problems with floats).

This is a must for usability: almost all keys used in Redis have colons embedded, and having to call atomics_to_string/3 makes the library a real pain to use.


They are. I’m wondering when you get these, though. All user keys and values are reported as bulk. The numbers Redis returns are generally small (sizes are often limited to 32 bits). Do these things come from Lua scripts?

I am not aware of any at the moment, but I would think that commands like INCR/DECR and INCRBY/DECRBY may start using these bigints in future versions, since they are limited to 64-bit numbers right now (64-bit numbers work even on 32-bit architectures like some ARM devices).

By the way, speaking about keys, I noticed the new http_redis_plugin (to store the HTTP session) uses http:session as the key. I would suggest using swipl:http:session instead to avoid name clashes in the future.

Would anybody be up for making a showcase that uses it? If so, suggestions for what?

Pushed patches for that.

Pushed patches to implement part of the as Type idea. The library can now translate Redis bulk data to the various text representations and to numbers. Both this and the a:b:... keys save a lot of typing and also avoid intermediate data structures, which saves time. We are not there yet. Notably:

  • I’m happy that X as Type applies to arrays, but we need something else for maps/hashes, etc. Notably the value translation should typically depend on the key, e.g., receiving

    [ "label", "Hello world", "image", <JPEG bytes> ]    
  • We also need some of this stuff for writing, notably to be able to write arbitrary byte sequences or strings in a different encoding.

  • We still have messages where we have no clue what they contain. This holds for the PUB/SUB interface as well as for the XREAD and XREADGROUP messages. This is similar to the maps/hashes above.

For now, the as Type does apply to data in collections (arrays/maps).

I think the idea of a user-defined type is asking too much. It complicates stuff a lot, and as it stands you can use a good built-in type to do the final step. So, unless convincing use cases show up, I think the set of types will be fixed.