I am trying to make a search request using the new ElasticsearchClient 8.18 (C# .NET) with a high number of aggregations, and I get the error:
The maximum configured depth of 64 has been exceeded. Cannot read next JSON object
Is there a way to increase the max depth of the System.Text.Json.JsonSerializer above 64 so that my request can pass?
You can configure the JsonSerializerOptions for the serializer that is used for your documents:
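Something along these lines should work as a sketch. The `sourceSerializer` constructor parameter and the `DefaultSourceSerializer` options-callback constructor are assumed from Elastic.Clients.Elasticsearch 8.x; verify the exact signatures against the version you are using:

```csharp
using Elastic.Clients.Elasticsearch;
using Elastic.Clients.Elasticsearch.Serialization;
using Elastic.Transport;

var nodePool = new SingleNodePool(new Uri("https://localhost:9200"));

var settings = new ElasticsearchClientSettings(
    nodePool,
    // Wrap the built-in source serializer and raise the depth limit
    // for (de-)serializing your own documents.
    sourceSerializer: (defaultSerializer, clientSettings) =>
        new DefaultSourceSerializer(clientSettings, options => options.MaxDepth = 256));

var client = new ElasticsearchClient(settings);
```

Note that this only affects the serializer used for your *documents* (source), not the one used for Elasticsearch request/response types.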
There currently is no way to modify the internal serializer that is used for Elasticsearch specific types.
A depth of 64 is very very unusual. Would you mind sharing the JSON response (e.g. by sending the same search request using CURL / Kibana Dev Tools Console, etc.)?
Thanks for replying. I tried to configure the JsonSerializerOptions by setting a MaxDepth of 256, but I keep getting the error that the depth exceeds 64. I can even see in my Client object that the MaxDepth is correctly set to 256.
Other than that, I believe I am getting the error because of my aggregations JSON, which I believe is close to the limit. I can post it if you want. The problem is that the aggregations are created dynamically, and the depth could go much higher than 64...
I would like to confirm that the JsonSerializerOptions override does not work, and to find a workaround...
By the way, if I make the call with the low-level Elasticsearch client and push the query as a raw string, the request is successful, meaning that the problem is on the query-serialization end: the serializer finds the query to exceed the 64-depth limit, which cannot be overridden.
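For reference, the raw-string workaround looks roughly like this. It bypasses the typed request serializer by going through the underlying transport; the `Transport.RequestAsync` call shape and `StringResponse` are assumed from Elastic.Transport, and the index name and body are illustrative:

```csharp
using Elastic.Transport;

// `client` is an already-configured ElasticsearchClient.
// The JSON body is never touched by the typed serializer, so its
// nesting depth is not checked client-side.
var json = """{ "aggs": { "level1": { "filter": { "match_all": {} } } } }""";

var response = await client.Transport.RequestAsync<StringResponse>(
    HttpMethod.POST,
    "/my-index/_search",
    PostData.String(json));

Console.WriteLine(response.Body);
```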
This is quite an uncommon case (and I expect performance not to be great, but that's a different story).
You are right that we would have to modify the allowed depth for the internal serializer. We currently don't allow users to tamper with this instance, since our custom (de-)serialization code makes many assumptions that would no longer hold if the internal serializer could be modified.
I could probably increase the limit to 128 or some other sufficiently high value, if that helps. Otherwise, you could use reflection to modify the internal serializer instance.
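A reflection-based patch could be sketched like this. It is fragile by design: it assumes the internal serializer keeps its JsonSerializerOptions in a private instance field somewhere in its type hierarchy, which is an implementation detail that can change between releases. It must also run before the first request, because JsonSerializerOptions become read-only after their first use:

```csharp
using System.Reflection;
using System.Text.Json;
using Elastic.Clients.Elasticsearch;

static void PatchInternalSerializerDepth(ElasticsearchClient client, int maxDepth)
{
    // Assumption: the client settings expose the internal request/response serializer.
    var serializer = client.ElasticsearchClientSettings.RequestResponseSerializer;

    // Walk the type hierarchy and raise MaxDepth on any private
    // JsonSerializerOptions field we can find.
    for (var type = serializer.GetType(); type != null; type = type.BaseType)
    {
        foreach (var field in type.GetFields(BindingFlags.Instance | BindingFlags.NonPublic))
        {
            if (field.FieldType == typeof(JsonSerializerOptions) &&
                field.GetValue(serializer) is JsonSerializerOptions options)
            {
                // Throws if the options have already been used for (de-)serialization.
                options.MaxDepth = maxDepth;
            }
        }
    }
}
```

Treat this strictly as a stopgap until a release with a higher built-in limit ships.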
If you could increase it to at least 256, that would be great.
Also, if you could share a way to increase it manually via reflection, that would help as well...
I have created a software product that depends solely on Elasticsearch and has 500+ customers using it. I am working on a workaround that uses sibling aggregations where possible, but there is a chance that the user can select combinations that go past the 64 MaxDepth...
We have discussed this internally, and I'll increase the depth to 256 or even 512. This change will be included in the next release (probably next week).