Optimize Cypher Query Fired From Python
Solution 1:
Add appropriate indexes or uniqueness constraints so that your generated queries can look up their starting nodes directly instead of scanning every node with the relevant label.
For example (based on your examples), you could add indexes to:
:subSubLocality(name_wr)
:subLocality(name_wr)
:locality(name_wr)
:city(name_wr)
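As a sketch, the indexes above could be created with a set of Cypher statements fired from Python. The label and property names come from the examples in the question; the connection URI and credentials are placeholders, and the `CREATE INDEX ON` syntax shown is the pre-4.0 form (Neo4j 4.x+ uses `CREATE INDEX FOR (n:Label) ON (n.prop)` instead):

```python
# Index-creation statements for the labels/properties from the question.
# Run these once, e.g. via the Neo4j browser or the official Python driver.
index_statements = [
    "CREATE INDEX ON :subSubLocality(name_wr)",
    "CREATE INDEX ON :subLocality(name_wr)",
    "CREATE INDEX ON :locality(name_wr)",
    "CREATE INDEX ON :city(name_wr)",
]

# With the official neo4j driver (hypothetical URI/credentials), they could
# be applied like this:
#
#   from neo4j import GraphDatabase
#   driver = GraphDatabase.driver("bolt://localhost:7687",
#                                 auth=("neo4j", "password"))
#   with driver.session() as session:
#       for stmt in index_statements:
#           session.run(stmt)

for stmt in index_statements:
    print(stmt)
```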
Solution 2:
I can't say for sure what the cause is, but I have a few questions that should help us get closer to an answer.
• Have you tried benchmarking these queries individually? At first glance they look simple enough that the queries themselves shouldn't be the bottleneck, but it wouldn't hurt to confirm whether they actually need optimizing.
• You mentioned it takes "2 seconds"; is that measured from the moment you hit 'enter' to execute your Python script (so things like initiating the connection to the Neo4j instance are included), or do the queries themselves specifically take 2.0 seconds to execute?
• The docs note that prior to Neo4j 3.2, the Cypher planner wasn't always making the most efficient choices. If you're on an earlier version, the docs suggest defaulting to the cost-based planner (e.g. by prefixing a query with `CYPHER planner=cost`).
• Is this a local Neo4j instance? If it's hosted, what are the hardware specs of the host machine? Might not hurt to bump up the specs if possible.
• If you haven't added any custom indexing on properties and your queries always look the same, I would recommend looking into that option.
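To separate connection setup from query execution time (the second bullet above), you can time just the query call in Python. Below is a minimal sketch: `time_query` benchmarks any callable, and in real use that callable would wrap something like `session.run(cypher).consume()` on an already-open driver session (a hypothetical usage; the demo workload here is a stand-in so the sketch runs without a Neo4j instance):

```python
import time


def time_query(run_query, repeats=5):
    """Time a query callable over several runs, excluding connection setup.

    Returns (best, average) wall-clock seconds. In practice `run_query`
    would be e.g. `lambda: session.run(cypher).consume()` with an
    already-established neo4j driver session.
    """
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        run_query()
        timings.append(time.perf_counter() - start)
    return min(timings), sum(timings) / len(timings)


# Demo with a stand-in workload instead of a live Neo4j query:
best, avg = time_query(lambda: sum(range(10_000)))
print(f"best={best:.6f}s avg={avg:.6f}s")
```

If the per-query numbers are small but the script still takes ~2 seconds end to end, the overhead is in connection setup or elsewhere in the script, not in Cypher.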