Conference Rebuttal

Original source: https://www.cnblogs.com/baidut/p/6375371.html

https://www.st.cs.uni-saarland.de/zeller/onresearch/rebuttal-patterns.php3

http://hyunyoungsong.wordpress.com/2010/12/18/how-to-write-a-acm-sigchi-rebuttal/

Academic Conference Rebuttal Templates

Note that the author rebuttal is optional, and serves to provide you with an opportunity to rebut factual errors in the reviews, or to supply additional information requested by the reviewers.

The rebuttal is limited to 4000 characters. Please be concise and polite. Comments that are not to the point or offensive will make rejection of your paper more likely. Make sure to preserve anonymity in your rebuttal. Links to websites that reveal the authors’ identities are not allowed and will be considered a violation of the double-blind policy. Links to websites with new figures, tables, videos or other materials are not allowed.
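Since the limit is enforced on raw character count, it can help to check a draft locally before pasting it into the submission form. The sketch below is not part of the original post; the file name rebuttal.txt and the 4000-character limit are assumptions taken from the guideline quoted above, so adjust them to the venue's actual instructions.

# Minimal sketch: check a draft rebuttal against an assumed character limit
# before pasting it into the submission form. "rebuttal.txt" and the 4000-
# character limit are placeholders; use your venue's actual requirements.

LIMIT = 4000  # assumed limit, taken from the guideline quoted above

def check_rebuttal_length(path: str, limit: int = LIMIT) -> None:
    """Print whether the draft at `path` fits within `limit` characters."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    n = len(text)
    if n > limit:
        print(f"Too long: {n} characters ({n - limit} over the limit).")
    else:
        print(f"OK: {n} characters ({limit - n} to spare).")

if __name__ == "__main__":
    check_rebuttal_length("rebuttal.txt")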

Writing Principles

  • Be concise and to the point (cut the filler, focus on the key issues, and use as few words as possible to change the mind of a negative or neutral reviewer)
  • Clear structure, clean grammar (layout and organization)
  • State the facts objectively
  • Never commit a "political" error, such as:
    • Being impolite
    • Nitpicking the reviewers' wording
    • Violating the double-blind policy, e.g. by adding links or other information that could reveal your identity

Before reading on, keep the following in mind:

  • The material comes from rebuttals made public by the NIPS conference and applies to academic conferences in computer science [NIPS posts the rebuttals of papers accepted in previous years; see http://papers.nips.cc/ (only the comments of reviewers with positive opinions are given): NIPS2013, NIPS2014, NIPS2015].
  • A rebuttal only makes a difference when your paper is on the borderline of acceptance; if the reviewers' opinions are uniformly harsh, its effect is essentially negligible. Conversely, if the reviews are uniformly positive and raise no questions, you may also skip the rebuttal.

  • The rebuttal is read by the reviewers and the area chair. The "Confidential comment to Area Chair" field is generally used to report a reviewer (it is visible only to the area chair) and can be left blank.

Organizing the Content

Note: the parts marked with an asterisk (*) are essential; the rest are optional.

Opening sentence: express thanks*

Even a reviewer who opposes your paper has spent a great deal of valuable time reviewing it, so as an author you should show gratitude.

  • We thank the reviewers for acknowledging the strong performance of this work and the quality of the presentation. We address the comments as follows.
  • Thanking positive reviews: We thank the reviewers for their positive and constructive feedback.
  • We thank all the reviewers for helpful comments.
  • Thanks for all your feedback and suggestions. We will carefully incorporate them into our paper.
  • Thanks for the helpful comments!
  • Thank you for the feedback and suggestions; we will add clarification where needed and include suggestions as space permits.
  • We thank the reviewers for their careful consideration. We greatly appreciate the positive comments and address major concerns below.
  • A framing template: We thank all the reviewers for their efforts. We start this rebuttal by reiterating our contributions, and then address specific concerns, especially those from AR6, where there has clearly been some misunderstanding leading to a serious error in his/her review. We kindly ask that AR6 revisit his/her review in light of our clarifications below.
  • We thank all the reviewers, and we apologize for the typos, grammar mistakes, unclear notation and missing citations. They will be corrected so that the overall writing meets NIPS standards. The clarifications below will be added to the paper or supplement, either as text or as figures.
  • Thanks to all the reviewers for their time and feedback. We provide some specific responses and clarifications.
  • We'd like to thank the reviewers for their careful reading and valuable comments. We believe the constructive feedback will improve the paper and increase its potential impact on the community.

Second sentence: restate the contributions

  • First, we'd like to emphasize the contributions:
  • We would like to emphasize that the novelty of the method, which addresses how to efficiently learn the dependency between latent variables without explicit knowledge of the model, has been accepted as valid and legitimate by the reviewers. We are confident this is a useful contribution for making generic inference viable in practice. Omitted comments will be fixed in revision if accepted.
  • We thank the Reviewers for their constructive comments. We believe the model proposed is very powerful and theoretically deep. We agree with the reviewers that the exposition and experiments should be improved and will address this in the revision.

Address common concerns

If there are a few sentences you want every reviewer to see, they must go at the very top. The reason is simple: no one is guaranteed to read your entire rebuttal, but everyone will glance at the first few lines.

We will re-structure the paper to improve clarity. We will also add more details (and add an example, space permitting) and clarify our contributions (Section 4.4) for better understanding. We will also fix minor typos.

Address individual concerns*

Then we address major concerns below.

Thanking supportive reviewers

We thank the reviewer for the encouraging comments.

Rebutting negative reviewers*

In general, the review with the lowest score naturally catches the area chair's eye and therefore carries more weight (area chairs have said as much themselves), so the rebuttal should spend its limited words on rebutting those reviewers.

Minor flaws can be admitted

For criticism of the structure and writing, simply concede

R2: "The structure and writing is a concern"

We agree. This has been addressed in the arXiv version, which has much cleaner structure and writing, including an improved section on related work.

Responding to comments about unclear exposition

B. Apologies for being unclear in these parts of our paper; we address the individual points below, and will be more explicit on all of these in the full version.

Minor errors

4. A list of minor typos.
Corrected.

Key issues must be rebutted*

For Section 2.3, there may be some misunderstanding. The reviewer is correct that a simple alternative to our approach would be to run MAP on the latent variables, and then, holding the latent variables S fixed, apply the Bayesian melding method to the model variables. However, this is computationally expensive and does not scale to high dimensions, as the previous Bayesian melding method requires performing density estimation for the distribution tau. Instead we propose an approximate joint prior in Section 3, which allows us to infer the latent variables and model parameters jointly. Thus our algorithm scales better than the original Bayesian melding algorithm.

Responding to missing citations

R4: Missing citations

Note that we do cite the work by Parks et al. as [17] and discuss it in L090-100. We further clarify the distinctions below. We will include the work by Demirkus et al. in the next version and discuss head pose estimation below.

Thank you for the references, which are now included in the current draft.

However, this did not win the vote back; clearly this reviewer (Reviewer_7) had strong objections. (nips28/reviews/1985.html)

Thanks for pointing out some valuable related work. The first two works do not consider any features and instead consider the noise that occurs in observations. The third work is more application-oriented, using metric learning. Although we also demonstrate our model on a similar application, semi-supervised clustering, our work aims to provide a more general treatment of noisy features in matrix completion. In addition, their "uncertain side information" in fact corresponds to similar/dissimilar information between items in semi-supervised clustering, which means the uncertainty they consider is also on observations, while the noise we consider is on features. We are happy to include these related works in our final submission.

To Reviewer 8

1. This paper lacks the references for some related recent works (e.g., [1, Nesterov 2015] and [2, Lan 2014]).

We have included Lan's conditional gradient sliding paper in the reference [14], which we believe is more relevant than [2].

We will include [1] in the final version.

3. We thank the reviewer for mentioning the papers of Burer and Monteiro (2005) and Lee and Bresler (2009). Both papers are certainly relevant related work and should be discussed. The Burer and Monteiro (B&M) paper (with which we were previously familiar but neglected to cite) is important, and gives a helpful traceback of the factorization and nonconvex optimization idea in the optimization literature. While related, our algorithm and analysis are substantially different from these works. Essentially, B&M target the general semidefinite programming problem and have a more complex set of first-order techniques for nonconvex optimization (BFGS and augmented Lagrangian techniques, etc.). It would not be easy to do a direct numerical comparison, but we would expect our methods to perform comparably. In contrast, our method is clean and simple, targets a more limited class of problems, and correspondingly allows us to obtain a strong theoretical convergence analysis (the pesky extra factor of r notwithstanding). As stated by Burer and Monteiro (2003), "Although we are able to derive some amount of theoretical justification for [global convergence], our belief that the method is not strongly affected by the inherent nonconvexity [of the objective function] is largely experimental." We hope that our work will contribute to and help spur the further development of this important class of techniques.

Closing sentence templates

  • We apologize for not clarifying all questions given the limited space and many reviews. We will fix all typos and add missing references in the next revision.

  • We will address all remaining minor suggestions in the final revision.

  • We will thoroughly check and fix grammatical errors in the final submission.

Complete Examples

See Also

The following are the materials the author consulted when writing this article; interested readers may explore them further.

 
