Special Track on
Academic Globalization and Inter-Cultural Communication: AGIC 2024©
in the context of
The 15th International Multi-Conference on Complexity, Informatics and Cybernetics: IMCIC 2024©
 
March 26 - 29, 2024  ~  Virtual Conference
Organized by IIIS
in Orlando, Florida, USA.



Acceptance Policy for Papers to Be Presented at Conferences Organized by IIIS

The acceptance policy usually applied to submissions made to IMCIC, to the symposia organized in its context, to the collocated conferences, and to other conferences organized by the International Institute of Informatics and Systemics (IIIS) is guided by the following rules (a short sketch follows the list):

  1. The majority rule, when the reviewers of a given submission do not agree on its acceptance or non-acceptance.
  2. Non-acceptance of a submission when its reviewers agree on not accepting it.
  3. Acceptance of the paper when in doubt (for example, when the reviewers' opinions are tied).
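
As a purely illustrative sketch (the function name decide and the "accept"/"reject" vote labels are our own choices for this example, not part of any IIIS procedure), the three rules can be combined into a single decision procedure in a few lines of Python:

    # Minimal sketch of the three acceptance rules listed above.
    # Each review is assumed to be reduced to an "accept" or "reject" vote.
    def decide(reviews):
        accepts = sum(1 for r in reviews if r == "accept")
        rejects = sum(1 for r in reviews if r == "reject")
        if accepts == 0 and rejects > 0:
            return "reject"    # rule 2: reviewers agree on non-acceptance
        if accepts > rejects:
            return "accept"    # rule 1: majority favors acceptance
        if rejects > accepts:
            return "reject"    # rule 1: majority favors non-acceptance
        return "accept"        # rule 3: when in doubt (a tie), accept

    # A 1-1 split is a doubtful case, so rule 3 applies and the paper is accepted:
    print(decide(["accept", "reject"]))   # -> accept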

The reasoning supporting this acceptance policy is based on well-established facts:

  • There is usually a low level of agreement among reviewers.
  • There is a significant probability of refusing high-quality papers when the acceptance policy is to accept only those papers whose acceptance no reviewer disputes.
  • Non-accepted papers may be plagiarized by an unethical reviewer.

Let us briefly discuss these facts and point to some of the sources that report them.


Some Weaknesses of Peer Reviewing


Low level of agreement among reviewers

David Lazarus, Editor-in-Chief in 1982 for the American Physical Society, which publishes The Physical Review, Physical Review Letters, and Reviews of Modern Physics, asserted that “In only about 10-15% of cases do two referees agree on acceptance or rejection the first time around”. In the special case of organizing a conference and its proceedings, there is usually no time for re-submission, so we can infer, with a significant level of certainty, that in conference reviewing there will be only about 10-15% agreement among the reviewers of a given submission (Lazarus, 1982).

Lindsay (1988) arrived at a similar conclusion, stating that “after reviewing the literature on interjudge reliability in the manuscript reviewing process…[he] concludes that researchers agree that reliability is quite low” (in Speck, 1993, p. 113).

Michael Mahoney, who conducted several studies on peer reviewing processes, “criticizes the journal publication system, because it is unreliable and prejudicial” (Speck, 1993, p. 127). For example, he said that referees’ comments “are so divergent that one wonders whether they [referees] were actually reading the same manuscript” (Mahoney, 1976, p. 90). “To reform the journal publishing systems, Mahoney [1990] recommends eliminating referees or using graduate students as referees” (Speck, 1993).

Ernst and colleagues sent the same manuscript to 45 experts for review. Each of the experts held an editorial board appointment with a journal publishing articles in areas similar to that of the submitted paper. 20% rated the manuscript as excellent and recommended its acceptance, 12% found its statistics unacceptable, 10% recommended its rejection, and the rest classified the manuscript as good or fair (Ernst et al., 1993). Furthermore, the experts were asked to evaluate the paper against eight measures of quality. Almost every measure received both the best and the worst evaluation from the reviewers. Ernst and colleagues concluded that “the absence of reliability…seems unacceptable for anyone aspiring to publish in peer-reviewed journals” (p. 296).

If peer reviewing is so unreliable and “philosophically faulty at its core” (as Horrobin, 1982, affirmed) for journals and research funding, then it will be even less reliable in conference organization. This is why, in our opinion, more and more conference reviewing is being done on abstracts or extended abstracts rather than on full papers. Some conferences stress that any submission exceeding a given word limit will not be considered.

Weller (2002) summarized 40 studies of reviewing reliability in 32 journals and concluded that, according to all these studies, “An average of 44.9 percent of the reviewers agree when they make a rejection recommendation while an average of 22.0 percent agree when they make an acceptance recommendation” (p. 193). This means that “reviewers are twice as likely to agree on rejection than on acceptance” (p. 193). This fact strongly supports our acceptance policy, which relies on the reviewers’ agreement regarding the non-acceptance of a submission rather than on their agreement regarding its acceptance.

Other authors reached similar conclusions. Franz Ingelfinger (1974), former editor of the New England Journal of Medicine, affirmed that “outstandingly poor papers… are recognized with reasonable consistency” (p. 342; cited in Weller, 2002, p. 193). This fact gives strong support to an acceptance policy that relies on the reviewers’ agreement on non-acceptance.

Weller (2002) found that journal editors seek more reviews when reviewers disagree. She affirms that “between 30 percent and 40 percent of medical journal editors opted for more review when reviewers disagreed; the rest resolved the disagreement by themselves, sought input from an associate editor or discussed the next steps at an additional meeting” (p. 196). Wilkes and Kravitz (1995) had similar results after examining editorial policies at 221 leading medical journals. They found that “43 percent of responding editors sent manuscripts with opposing recommendations from reviewers out for more reviews” (cited in Weller, 2002, p. 196).

Furthermore, sending the manuscript to more reviewers does not necessarily solve the disagreement problem faced by the journal editor. The high level of disagreement that Ernst and colleagues found arose in a study where a manuscript was sent to 45 experts (Ernst et al., 1993, p. 296). Additionally, in conference reviewing processes, the inherent time restrictions make it unfeasible to send manuscripts to more reviewers when the original reviewers disagree. Consequently, when the reviewers of a given submission disagree (and this happens most of the time), a decision must be made by the organizers or the Selection Committee. If this decision is not to accept the paper, high-quality papers might be left out, as we explain below, and reviewers with low ethical standards might find the opportunity to plagiarize ideas from the non-accepted paper. We will also give details on this issue below.

If we take the facts mentioned so far into account, two basic acceptance policies remain for selecting the papers to be presented at a conference:

  1. To accept only those papers whose acceptance the reviewers have agreed on.
  2. To refuse, or not accept, only those papers whose refusal the reviewers have agreed on.

In the first case the conference will have a very low acceptance rate and a higher probability of not accepting very good papers, with no guarantee of improving the average quality of the papers accepted for presentation. Let us briefly explain this statement.


Probability of Refusing High Quality Papers and How to Diminish It

Campanario (1995) affirms that eight authors won the Nobel Prize after their prize-winning ideas were initially rejected by reviewers and editors. He also found that about 11 percent of the most-cited articles were initially refused, and that the three most-cited articles, in a set of 205, were initially rejected and eventually accepted by another journal’s editor (Campanario, 1996, p. 302). Rejection of innovative ideas is one of the most frequently reported weaknesses of peer reviewing, and an increasing number of authors perceive this kind of reviewer bias. In a survey by the National Cancer Institute in which “active, resilient, generally successful scientist researchers” were interviewed, only 17.7 percent disagreed with the statement “reviewers are reluctant to support unorthodox or high-risk research”; 60.8 percent agreed, and 21.4 percent were neutral. Federal agencies have tried to counterbalance reviewers’ bias against new ideas by providing grants without reviewing support. Chubin and Hackett (1990) affirm that an example of this kind of “strategy is the recent [1990] decision by NSF [National Science Foundation] allowing each program to set aside up to 5 percent of its budget for one-time grants of no more than $50,000 to be awarded, without external review, in support of risky, innovative proposals” (p. 201). This is one of the reasons why, at IMCIC conferences, we accepted non-reviewed papers in the past, taking on the intrinsic risks of this kind of acceptance. Deception was a risk that was not perceived when the risks of this acceptance policy were first examined.

So, it is evident that acceptance policies based on the positive agreement of reviewers increase the probability of refusing good papers. The higher the level of agreement among reviewers required to accept a paper, the higher the probability of refusing a very good paper; although it is also true that the higher the required level of agreement, the lower the probability of accepting a low-quality paper. Consequently, it is a matter of a trade-off: increasing the certainty of refusing poor papers has the cost of increasing the probability of refusing good papers. How this trade-off is struck depends on the journal’s or the conference’s quality objectives: whether the aim is to refuse low-quality papers at the cost of risking the refusal of good ones, or to increase the average quality of what is accepted. In the first case, the selection criterion would be oriented to acceptance agreement among the reviewers; in the second, it is better tied to agreement among the reviewers who recommend not accepting the paper. IMCIC conferences (as well as their collocated conferences and others organized by IIIS) have mostly been based on agreement among reviewers recommending refusal, or non-acceptance. Papers whose reviewers disagree have usually been accepted, based mostly on a majority rule. This policy may be improved by the two-tier reviewing being applied for the 2024 conferences, where double-blind reviewing is complemented by non-blind, or open, reviewing. The simulation sketched below makes the trade-off concrete.
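
The short Monte Carlo sketch below is our own illustration: the share of good papers (30 percent), the per-reviewer error rate (30 percent), and the use of two reviewers per paper are assumed values chosen only for the example, not figures taken from the studies cited here. It compares a policy that accepts only on unanimous acceptance with a policy that refuses only on unanimous refusal:

    # Illustrative simulation of the two acceptance policies.
    # P_GOOD and P_ERROR are assumptions for this sketch, not measured data.
    import random

    random.seed(0)
    P_GOOD, P_ERROR, TRIALS = 0.30, 0.30, 100000

    good_total = lost_strict = lost_lenient = 0
    for _ in range(TRIALS):
        good = random.random() < P_GOOD
        # Each of two reviewers votes "accept" correctly with probability 1 - P_ERROR.
        votes = [random.random() < ((1 - P_ERROR) if good else P_ERROR)
                 for _ in range(2)]
        if good:
            good_total += 1
            if not all(votes):    # policy 1: accept only on unanimous acceptance
                lost_strict += 1
            if not any(votes):    # policy 2: refuse only on unanimous refusal
                lost_lenient += 1

    print("good papers lost, unanimity-to-accept: %.1f%%" % (100.0 * lost_strict / good_total))
    print("good papers lost, unanimity-to-refuse: %.1f%%" % (100.0 * lost_lenient / good_total))

Under these assumptions the result can also be computed directly: the unanimity-to-accept policy loses 1 - (0.7 × 0.7) = 51 percent of the good papers, while the unanimity-to-refuse policy loses only 0.3 × 0.3 = 9 percent, at the cost of admitting more weak papers; the simulation prints values close to these.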

Furthermore, no study has related low acceptance rates, or high refusal rates, to high quality. Moravcsik (1982) asserts that “the rejection rate in the best physics journals is more like 20-30% and not 80%” (p. 228). Weller (2002) examined about 65 studies of the consequences of rejection rates and concluded that “the relationship between rejection rates and the importance of a journal has not been established. What has been established is merely that the more selective the criteria for including a journal in a study, the higher the rejection rate is for that journal. Almost every study discussed in this chapter [Weller emphasizes] has supported this finding, regardless of discipline. Each discipline has a set of journals with both high and low rejection rates; how these are translated into journal quality needs to be further investigated” (p. 71).

Consequently, selecting the first of these acceptance policies for a conference has no proven quality benefit (related to its high refusal rate) and one proven quality risk, namely refusing good papers because of reviewers’ bias against new ideas or new paradigms. Therefore, it seems evident that the second of the two options stated above will have, in conference organization, a better cost/benefit ratio with respect to average quality than the first. This is especially true if we take into account that “reviewers are twice as likely to agree on rejection than on acceptance,” as well as the time and other restrictions inherent in conference reviewing processes.

The low reliability of peer reviewing and the low level of agreement among reviewers of the same manuscript are among the weaknesses that have contributed to skepticism regarding its real value, effectiveness, and usefulness. Some authors and editors went as far as to compare peer reviewing to chance. Let us show a sample of such statements. Lindsay (1979), for example, said that “interrater agreement is just a little better than what would be expected if manuscripts were selected by chance” (cited in Speck, 1993, p. 115). Nine years later, Lindsay was even more emphatic, titling his paper “Assessing Precision in the Manuscript Review Process: A Little Better than a Dice Roll” (Lindsay, 1988; cited in Speck, 1993, p. 113).


Possibilities of Plagiarism and Fraud Generated by the Reviewing Process and How to Reduce Them

One of the explicitly stated functions of a conference and its proceedings is to be “a place to claim priority” (Walker and Hurt, 1990, p. 79). This may counterbalance the plagiarism reported in journal peer reviewing, especially if we take into account that another explicitly stated function of conferences and their proceedings is to serve as the informal publication that may precede the formal publication of the respective research in a journal. These two complementary functions are taken seriously, and will continue to be, in the organization of IMCIC conferences.

Among the conclusions Weller (2002) reached in her book, after examining more than 200 studies on peer reviewing in more than 300 journals, is the following: “Asking someone to volunteer personal time evaluating the work of another, possibly a competitor, by its very nature invites a host of potential problems, anywhere from holding a manuscript and not reviewing it to a careless review to fraudulent behavior” (p. 306).

Chubin and Hackett (1990) point to the same kinds of situations, in which a competitor’s manuscript is blocked or delayed, or its results or arguments are stolen.

A notorious case in which peer reviewing resulted in plagiarism, with results and arguments stolen, is what has become known as the Yale Scandal. In a two-part article in Science entitled “Imbroglio at Yale: Emergence of a Fraud,” William J. Broad (1980) thoroughly described this fraud and plagiarism. Moran (1998) summarized it in the following terms: “A junior researcher at NIH [National Institutes of Health], Helena Wachslicht-Roadbard, submitted an article to the New England Journal of Medicine (NEJOM). Her supervisor, Jesse Roth, was coauthor. An anonymous reviewer for NEJOM, Professor Philip Felig of Yale [‘a distinguished researcher with more than 200 publications who held an endowed chair at Yale and was vice chairman of the department of Medicine’ (Broad, 1980, p. 38)], recommended rejection. Before returning his negative recommendation to NEJOM, Felig and his associate, Vijay Soman, read and commented on it. Soman made a photocopy of the manuscript, which he used for an article of his own in the same area of research. Soman sent his manuscript to the American Journal of Medicine, where Soman’s boss, Philip Felig, was an associate editor. Felig was also coauthor of the article. The manuscript was sent out for peer review to Roth, who had his assistant, Roadbard, read it. She read it and spotted plagiarism, ‘complete with verbatim passages’ (Broad, 1980, p. 39)…Roadbard sent a letter to NEJOM editor Arnold Relman, along with a photocopy of the Soman-Felig article. Relman was quoted as saying the plagiarism was ‘trivial’, that it was ‘bad judgment’ for Soman to have copied some of Roadbard’s work, and that it was a ‘conflict of interest’ for Soman and Felig to referee Roadbard’s paper (Broad, 1980, p. 39). Relman then called Felig, who said, according to Broad (1980), that the peer-review judgment was based on the low quality of Roadbard’s paper, and that the work on the Soman-Felig paper had been completed before Felig received the Roadbard manuscript (Broad stated that this last statement by Felig was incorrect)…Relman published the Roadbard paper, in revised form. Roth called Felig (a long-time friend from school days) and they met to discuss the two papers, for which they were either coauthors or reviewers. Broad (1980) stated that prior to the meeting ‘Felig had not compared the Soman manuscript to the Roadbard manuscript’ (p. 39), even though Felig was coauthor of one article and referee for the other! When he returned to Yale, Felig questioned Soman, who admitted he had used the Roadbard manuscript to write the Soman-Felig paper…Broad (1980) reported that Roadbard and Roth began to express disagreement about the extent of the plagiarism involved. Roadbard wrote to the Dean of Yale’s School of Medicine, Robert Berliner, who did not believe all that she wrote. He was quoted as writing back to her, ‘I hope you will consider the matter closed’ (p. 38). NIH apparently put off an investigation (by dragging its feet or by stonewalling). A subsequent audit of the records revealed, according to Broad, a ‘gross misrepresentation’ (p. 41). Soman admitted that he had falsified [data], but claimed it was not ‘significantly different from what went on elsewhere’ (p. 41). After further investigations, at least 11 papers were retracted. Soman was asked to resign from Yale University, which he did.
Felig became Chairman of Medicine at the Columbia College of Physicians and Surgeons” (Moran, 1998, p. 69). “After two months [in this position], Philip was forced to resign…At issue was a scandal that rocked the laboratory of one of Felig’s associates and coauthors back at Yale Medical School, where Felig previously worked” (Broad, 1980, p. 38). Helena Wachslicht-Roadbard spent a year and a half writing letters, making phone calls, threatening to denounce Soman and Felig at national meetings, and threatening to quit her job. She wanted an investigation, and she got it (Broad, 1980, p. 38).

Several cases like the Soman-Felig scandal have been reported, but, as Moran (1998) affirms, it is “impossible to tell precisely how many attempts at plagiarism by means of peer review secrecy have been successful” (p. 118). It is plausible that this kind of plagiarism is more frequent when manuscripts come from what is called the Third World to be reviewed by reviewers of the First World. Verbal reports abound on this issue.

If one of the functions of a conference is to be “a place to claim priority”, then conference organizers should take adequate measures to prevent their reviewing process from generating opportunities for plagiarism by some of its reviewers. One way to achieve this objective is to have a policy of “when in doubt, accept the paper,” as opposed to “when in doubt, refuse the paper.” Arnold Relman, editor of the New England Journal of Medicine, had another reviewer who suggested accepting Helena Wachslicht-Roadbard’s paper. Had he accepted it, there would have been no opportunity for the plagiarism committed by means of his reviewing process. This reinforces what we stated above with regard to IMCIC’s acceptance policy, based mostly on agreement among reviewers recommending refusal, or non-acceptance. Papers whose reviewers disagree have usually been accepted, based mostly on a majority rule. This policy might be improved by adding Gordon’s optional published refereeing; accordingly, as we said above, when the reviews of a paper are inconclusive, the paper may be accepted on the condition that its presentation at the Conference, and its publication in the proceedings, be accompanied by the respective reviewers’ comments.


Conclusions with Regard to Our Acceptance Policy

The acceptance policy we described has its quality benefits and its quality costs. The costs may be: 1) an increase in the number of low-quality papers accepted (which, as we argued above, is counterbalanced by an increase in the probability of accepting good papers that would otherwise have been refused); and 2) an increase in the probability of effective deceptions, i.e., the acceptance of bogus papers. The benefits may be: 1) an increase in the average quality of the papers (due to the increased probability of accepting high-quality, paradigm-shifting papers that would otherwise have been refused); and 2) a decrease in the probability of plagiarism through some of the Conference’s reviewers.


Regarding Coauthors of the Work That Could Be Accepted

In the event that there are multiple coauthors, it is the responsibility of the corresponding author, or of whoever is in communication with the Organizing Committee or its Secretariat, to ensure that 1) the other coauthors are informed of our policy and the acceptance process, and 2) the coauthors know the regulations and standards of their university or organization concerning who can and cannot be a coauthor, as well as the academic ethical code required by their university or organization. The Conference Organizing Committee cannot have information regarding this, let alone interpret the content of such regulations.



Professor Nagib Callaos

IIIS’ President


References

Broad, W. J., 1980, Imbroglio at Yale: Emergence of a Fraud, Science, 210(4465), October, pp. 38-41.

Campanario, J. M., 1995, On influential books and journal articles initially rejected because of negative referees’ evaluations, Science Communication, 16(3), March, pp. 304-325.

Campanario, J. M., 1996, Have referees rejected some of the most-cited articles of all times?, Journal of the American Society for Information Science, 47(4), April, pp. 302-310.

Chubin, D. R. and Hackett, E. J., 1990, Peerless Science: Peer Review and U.S. Science Policy, New York: State University of New York Press.

Ernst, E., Saradeth, T. and Resch, K. L., 1993, Drawbacks of peer review, Nature, 363, May, p. 296.

Horrobin, D. F., 1982, Peer Review: A Philosophically Faulty Concept which is Proving Disastrous for Science, The Behavioral and Brain Sciences, 5(2), June, pp. 217-218.

Ingelfinger, F. J., 1974, Peer review in biomedical publications, American Journal of Medicine, 56(5), May, pp. 686-692.

Lazarus, D., 1982, Interreferee agreement and acceptance rates in physics, The Behavioral and Brain Sciences, 5(2), June.

Lindsay, D., 1979, Imprecision in the Manuscript Review Process. In Proceedings of the 1979 Society for Scholarly Publishing, pp. 63-66. Washington: Society for Scholarly Publishing, 1980.

Lindsay, D., 1988, Assessing Precision in the Manuscript Review Process: A Little Better than a Dice Roll, Scientometrics, 14(1-2).

Mahoney, M. J., 1977, Publication prejudices: an experimental study of confirmatory bias in the peer review system, Cognitive Therapy and Research, 1(2), pp. 161-175. Cited by Speck, R. L., 1993, Publication Peer Review: An Annotated Bibliography, Westport, Connecticut: Greenwood Press, p. 127.

Mahoney, M. J., 1990, Bias, Controversy, and Abuse in the Study of the Scientific Publication Systems, Science, Technology and Human Values, 15(1), pp. 50-55. Cited by Speck, R. L., 1993, Publication Peer Review: An Annotated Bibliography, Westport, Connecticut: Greenwood Press, p. 127.

Moran, G., 1998, Silencing Scientists and Scholars in Other Fields: Power, Paradigm Controls, Peer Review, and Scholarly Communications. London, England: Ablex Publishing Corporation.

Moravcsik, M., 1982, Rejecting published work: It couldn’t happen in Physics! (or could it?), The Behavioral and Brain Sciences, 5(2), June, p. 229.

Speck, R. L., 1993, Publication Peer Review: An Annotated Bibliography, Westport, Connecticut, Greenwood Press.

Walker R. D. and Hurt C. D., 1990, Scientific and Technical Literature, Chicago: American Library Association.

Weller, A. C., 2002, Editorial Peer Review: Its Strengths and Weaknesses, Medford, New Jersey: Information Today.

Wilkes, M. S. and Kravitz, R. L., 1995, Policies, practices, and attitudes of North American medical journal editors, Journal of General Internal Medicine, 10(8), pp. 443-450.












