Sophox vs QLever #1230
Replies: 5 comments 8 replies
-
The two projects have in common that they provide RDF data and a SPARQL endpoint for the OpenStreetMap data. Concerning features and performance, they are very different. Here are the most obvious differences.
-
@1: I was wrong. It seems that every OSM object has a location via the …

@2: QLever and Blazegraph are two completely different SPARQL engines, created by completely different groups of people. Your question is like asking why a Porsche is faster than a 2CV: both are cars that get you from A to B, but they are two completely different cars. Also, QLever is in active development, while Blazegraph has not been for many years now. There are also many other SPARQL engines, see this comparison.

@3: It is one of QLever's design principles not to require special hardware, in particular no expensive hardware. In https://github.com/ad-freiburg/qlever-control/tree/python-qlever/Qleverfiles you find config files (called Qleverfiles) for various datasets. At the top of each Qleverfile, an estimate is given of the resources needed on a typical PC. For example, you can load the complete Wikidata (19 B triples) and run a server on a 1000 € PC in 4-5 hours.
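For readers who haven't used a SPARQL endpoint before, here is a minimal sketch of how one could talk to such a server over the standard SPARQL 1.1 Protocol (query passed as a URL parameter). The endpoint URL below is an assumption for illustration, not something confirmed in this thread; substitute whichever endpoint you actually use.

```python
# Sketch: building a SPARQL 1.1 Protocol GET request for a QLever-style
# endpoint. ENDPOINT is an assumed URL for illustration only.
from urllib.parse import urlencode

ENDPOINT = "https://qlever.cs.uni-freiburg.de/api/wikidata"  # assumed URL

def sparql_get_url(endpoint: str, query: str) -> str:
    """Encode a query as the standard `query` form parameter."""
    return endpoint + "?" + urlencode({"query": query})

query = "SELECT ?s WHERE { ?s ?p ?o } LIMIT 10"
url = sparql_get_url(ENDPOINT, query)

# Actually sending the request needs network access, e.g.:
# import urllib.request, json
# req = urllib.request.Request(
#     url, headers={"Accept": "application/sparql-results+json"})
# results = json.load(urllib.request.urlopen(req))
```

The `query` parameter and the `application/sparql-results+json` result format come from the SPARQL Protocol spec, so the same sketch works against any conforming endpoint.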
-
An OSM user pointed out that QLever currently relies on TTL files that can be a couple of weeks old. By comparison, as long as it's healthy, Sophox typically keeps up with OSM's minutely diffs. For example, it's only 2 minutes behind as I'm writing this comment. Do you think this is an inherent tradeoff to being able to add the (very useful) GeoSPARQL triples, or could QLever eventually get closer to querying OSM and Wikidata in real time?
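To make "GeoSPARQL triples" concrete, here is a sketch of the kind of query they enable, written as a Python string. The `geo:` vocabulary (`geo:hasGeometry`, `geo:asWKT`) is from the OGC GeoSPARQL standard; the `osmkey:` prefix and the exact shape of the OSM data are assumptions for illustration, so check the endpoint's documentation for the real IRIs.

```python
# Sketch of a GeoSPARQL-flavoured query over an OSM SPARQL endpoint.
# geo: is the standard OGC GeoSPARQL vocabulary; osmkey: is an assumed
# prefix for OSM tag predicates and may differ on a real endpoint.
GEO_QUERY = """
PREFIX geo: <http://www.opengis.net/ont/geosparql#>
PREFIX osmkey: <https://www.openstreetmap.org/wiki/Key:>  # assumed prefix

SELECT ?osm_id ?wkt WHERE {
  ?osm_id osmkey:amenity "drinking_water" .
  ?osm_id geo:hasGeometry/geo:asWKT ?wkt .
}
LIMIT 100
"""
```

A query like this returns WKT geometries alongside the matched objects, which is what makes direct map visualization of results possible.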
-
@1ec5 Good point. We currently rebuild the index weekly, but could also do it daily. So far, we download the PBF of the whole data. We haven't looked at the diffs yet. Good to hear that Sophox does, because that means it's possible. Our goal is certainly to be able to query both datasets in real time, though our experience is that most users do not care if the dataset is a day or even a few days old.
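For context on the diffs mentioned above: OSM publishes minutely change files under a well-known replication layout, where the sequence number is zero-padded to nine digits and split into three directory components. A small sketch of that mapping:

```python
def minute_diff_path(seq: int) -> str:
    """Map an OSM replication sequence number to its change-file path
    under https://planet.openstreetmap.org/replication/minute/."""
    s = f"{seq:09d}"  # zero-pad to nine digits, e.g. 4392871 -> "004392871"
    return f"{s[0:3]}/{s[3:6]}/{s[6:9]}.osc.gz"

print(minute_diff_path(4392871))  # 004/392/871.osc.gz
```

A consumer reads the current sequence number from the `state.txt` file next to that directory tree and fetches every change file it hasn't applied yet, which is how tools stay minutes rather than weeks behind.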
-
@westnordost What would be the advantage for such expert users in using a SPARQL engine vs. using the Overpass API? One could argue that both approaches have their place: the SPARQL engine for convenient querying and visualization (for experts as well as non-experts) and the Overpass API for editing stuff.
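To make the comparison concrete, here is the Overpass side of the same kind of toy question (drinking-water fountains in a bounding box), as an Overpass QL query held in a Python string. The query is an illustrative sketch and has not been run against a live Overpass instance.

```python
# Sketch of an Overpass QL query for the same kind of question one might
# pose in SPARQL: drinking-water nodes inside a bounding box
# (south, west, north, east). Illustrative only, not run against a server.
OVERPASS_QL = """
[out:json][timeout:25];
node["amenity"="drinking_water"](47.99,7.82,48.01,7.86);
out;
"""
```

The contrast is visible even in a toy example: Overpass QL is purpose-built around OSM's node/way/relation model and bounding boxes, while SPARQL expresses the same filter as generic graph patterns that can also join against other RDF datasets such as Wikidata.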
-
To me, Sophox seemed to be kind of unmaintained / dead during the last years because the demo instance was kind of broken, but it just started working again some days ago, so I was wrong. Now that there are two projects which (from my point of view) seem to do a very similar thing, I do wonder what the difference between the two is.
In particular, it would be helpful if you described how these two projects differ (in the project's readme?). I mean, not necessarily as far as currently supported features are concerned, but concerning different visions, different technology, different goals. (If you knew that Sophox existed before, that is.) Are there any common components from which both projects could profit, in case, due to different visions and goals, both projects should be developed further independently? In other words, any possibility for cooperation?