Two pitfalls of HBase delete operations encountered recently

Recently, during testing and in production, I ran into two pitfalls of HBase's delete operation.

1. If another write follows a delete within the same millisecond, data can be lost

http://hbase.apache.org/book.html#versions

Deletes mask puts, even puts that happened after the delete was entered. Remember that a delete writes a tombstone, which only disappears after the next major compaction has run. Suppose you do a delete of everything <= T. After this you do a new put with a timestamp <= T. This put, even if it happened after the delete, will be masked by the delete tombstone. Performing the put will not fail, but when you do a get you will notice the put had no effect. It will start working again after the major compaction has run. These issues should not be a problem if you use always-increasing versions for new puts to a row. But they can occur even if you do not care about time: just do a delete and a put immediately after each other, and there is some chance they happen within the same millisecond.
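The following is a minimal sketch of this trap (the table name "t", family "f", qualifier "q", and row key are made up for illustration; the HBase 1.x+ client API is assumed). Both the delete and the put may be assigned the same server timestamp, so the put is silently masked until the next major compaction.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class DeleteThenPut {
    public static void main(String[] args) throws Exception {
        byte[] row = Bytes.toBytes("row1");
        byte[] f = Bytes.toBytes("f");
        byte[] q = Bytes.toBytes("q");

        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("t"))) {

            // Delete the whole row; the tombstone gets the current server time T.
            table.delete(new Delete(row));

            // Put issued right after the delete. If it is assigned a timestamp <= T
            // (likely when both land within the same millisecond), it is masked by
            // the tombstone even though it happened after the delete.
            table.put(new Put(row).addColumn(f, q, Bytes.toBytes("new value")));

            // The get may return nothing until the next major compaction removes the tombstone.
            Result r = table.get(new Get(row));
            System.out.println("value after delete+put: "
                    + (r.isEmpty() ? "<masked>" : Bytes.toString(r.getValue(f, q))));
        }
    }
}

As the reference guide suggests, using always-increasing, explicitly assigned timestamps for puts avoids the mask; relying on server-assigned timestamps is what makes the 1 ms window dangerous.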

2. A delete followed by an increment produces an incorrect value

https://issues.apache.org/jira/browse/HBASE-3725
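Below is a minimal sketch of the delete-then-increment pattern behind HBASE-3725 (table "t", family "f", qualifier "cnt" are hypothetical; the HBase 1.x+ client API is assumed). On versions affected by that issue, the increment after the delete can resume from the pre-delete counter value instead of starting from zero.

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;

public class DeleteThenIncrement {
    public static void main(String[] args) throws Exception {
        byte[] row = Bytes.toBytes("counter-row");
        byte[] f = Bytes.toBytes("f");
        byte[] q = Bytes.toBytes("cnt");

        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("t"))) {

            // Build up a counter value.
            table.increment(new Increment(row).addColumn(f, q, 5L));

            // Delete the row, expecting the next increment to start from 0.
            table.delete(new Delete(row));

            // Increment again right after the delete. On affected versions this may
            // read the old value (5) and return 6 instead of the expected 1.
            Result r = table.increment(new Increment(row).addColumn(f, q, 1L));
            System.out.println("counter after delete+increment: "
                    + Bytes.toLong(r.getValue(f, q)));
        }
    }
}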
