Aggregating negative decimal values produces incorrect results.
Spark SQL query:

```sql
SELECT itemchargingtype, itemdescription, count(*) AS `count`, sum(itemamount) AS `sum`
FROM `transaction_items`
WHERE itemdescription = 'WHT'
GROUP BY itemdescription, itemchargingtype
```
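The intended semantics of this `GROUP BY` can be sketched in plain Python; the row amounts below are hypothetical (the real table has 14 matching rows), only the count/sum logic over negative decimals matters:

```python
from collections import defaultdict
from decimal import Decimal

# Hypothetical rows: (itemchargingtype, itemdescription, itemamount)
rows = [
    ("tax", "WHT", Decimal("-1000.00")),
    ("tax", "WHT", Decimal("-3040.00")),
]

# Reference implementation of the query's aggregation:
# filter on itemdescription, group by (description, charging type),
# then count rows and sum the (possibly negative) amounts per group.
groups = defaultdict(lambda: {"count": 0, "sum": Decimal("0")})
for charging_type, description, amount in rows:
    if description == "WHT":
        g = groups[(description, charging_type)]
        g["count"] += 1
        g["sum"] += amount

for (description, charging_type), agg in groups.items():
    print(charging_type, description, agg["count"], agg["sum"])
# prints: tax WHT 2 -4040.00
```

The sum must preserve the sign of the negative `DECIMAL` amounts; the bug reported here is that the optimized hash aggregate does not.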
Result (incorrect):

| charge_type | charge_item_name | count | sum |
| --- | --- | --- | --- |
| tax | WHT | 14 | 1,202.88 |
After disabling the following properties:

```
snappydata.sql.hashAggregateSize=-1
snappydata.sql.useOptimizedHashAggregateForSingleKey=false
```

the query produces correct values.
Result (correct):

| charge_type | charge_item_name | count | sum |
| --- | --- | --- | --- |
| tax | WHT | 14 | -4040.00 |
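For reference, the two properties above can be applied per session from the SQL shell before running the query (a sketch; the `set` syntax is assumed to mirror the workaround suggested in the comment below, and the property names are as given above):

```sql
-- Disable the optimized hash aggregate paths for this session
set snappydata.sql.hashAggregateSize=-1;
set snappydata.sql.useOptimizedHashAggregateForSingleKey=false;
```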
Tested this and it looks to be a bug in the new BufferHashMap-based implementation that reduces memory overhead for large aggregates and DISTINCT. For now you can switch to the older implementation with `set snappydata.sql.optimizedHashAggregate=false`, which is as fast as (and in many cases faster than) the newer one, though it may fail for very large aggregation/DISTINCT results. If this works for your use cases, it is much better than turning hash aggregation off completely.
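The suggested workaround, applied per session (exact statement as quoted in the comment above):

```sql
-- Fall back to the older hash aggregate implementation,
-- avoiding the BufferHashMap-based path entirely
set snappydata.sql.optimizedHashAggregate=false;
```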