Functions of Conferences’ Proceedings


Our purpose in this brief article is to describe the functions of conferences’ proceedings and to derive from them the acceptance policy for papers submitted to WMSCI Conferences, to their collocated symposia, and to other conferences organized by the International Institute of Informatics and Systemics (IIIS).

We will base our description on facts established by scholars and journal editors, as well as on the conclusions they reached through research on the subject or through their prolonged editorial experience.

Accordingly, we will show 1) that the basic function of the proceedings is to relieve authors of the burden of distributing their presentations, and that, consequently, authors are responsible for the content of their respective papers, as well as for their proofreading and copyediting; and 2) some weaknesses of peer reviewing and how they might be lessened in the context of the functions that conferences have as informal means of academic, professional and managerial communication. Based on these weaknesses and the possible ways to lessen them in the context of conference presentations and proceedings, we will derive an acceptance policy for the submissions made to conference organizers.

Briefly stated, this policy is to apply the majority rule among a submission’s reviewers and, when in doubt, to accept the submission. We will show why this acceptance policy diminishes both the probability of refusing good papers and the possibilities of plagiarism and fraud generated by the reviewing process itself, while complying with one of the explicitly stated objectives of conference presentations and proceedings: to claim credit for the ideas presented before submitting the respective paper to the lengthier reviewing process of a journal.


Proceedings as a Means of Relieving Authors of the Burden of Distribution of their Papers

Richard D. Walker, professor in the School of Library and Information Studies at the University of Wisconsin-Madison, and C. D. Hurt, Director of the Graduate Library School at the University of Arizona, assert that “the ‘real’ value of conferences and other meetings lies in the informal communication that takes place during, between and after the formal presentation of prepared conference papers” (Walker and Hurt, 1990, p. 82).

Walker and Hurt (1990) also affirm that “Often all the contributed papers to be presented at the meeting…are provided to those attending the meeting as it convenes or even beforehand. Such pre-publication has many advantages and disadvantages. On the plus side it provides a simple way of ensuring the integration of the meeting into a communication process; it also relieves the authors of the burden of distribution of their presentations. Additionally, it ensures earlier and wider distribution - to more than those assembled for the meeting. However, wider and earlier distribution of material of untested quality may not be desirable. To wait until the presented material meets the test of public exposure and discussion may be preferred” (p. 86).

Since an essential function and value of the proceedings provided to those attending the conference as it convenes, i.e., as pre-publication, is to relieve authors of the burden of distributing copies of their presentations, it is evident that the authors bear complete responsibility for the contents of the papers included in the proceedings, just as if they had distributed their own copies of their presentations or of the papers on which those presentations are based. This complete responsibility of the authors for the papers included in the proceedings is stated clearly and explicitly in an increasing number of proceedings. An example is the following text, included in the proceedings of several conferences organized by the IEEE:

“The papers of this book comprise the proceedings of the meeting mentioned on the cover and title page. They reflect the authors’ opinions and, in the interest of timely dissemination, are published as presented and without change. Their inclusion in this publication does not necessarily constitute endorsement by the editors, the IEEE Computer Society, or the Institute of Electrical and Electronics Engineers, Inc.”

Walker and Hurt (1990) emphasize this, saying “don’t confuse the purpose of a proceedings with that of a Journal” (p. 94), and they add (quoting UNESCO’s Bulletin for Libraries 24:82-97, 1970) that “Valuable oral exchange does not usually become valuable publication simply by printing it”. They insist that “An important element in the process of transmission of scientific information and knowledge is oral communication” (p. 95). If this applies to scientific conferences, there is all the more reason to apply it to conferences designed to be a forum for the exchange of ideas and oral interaction among scientists, engineers, practicing professionals, consultants and managers.

This kind of communication, not just among scientists but also between them and their socio-economic context, is stressed by several authors. Walker and Hurt (1990) affirm that “Communication is not only necessary among colleagues and peers…but is a necessary part of the interplay among several disciplines or branches of science, between the scholarly community and industry, between the scholarly community and the government, between the scholarly community and the lay public, and among all these segments of society” (p. XVI). Conferences whose main purpose is to serve as a forum for interaction among scholars, practicing engineers, consultants, managers and users of science and technology contribute to this kind of communication, cited by Walker and Hurt. Such conferences are more informal than scientific conferences, which are, according to Walker and Hurt, at a middle point between the formality of the journal and the informality of other means of scientific communication. These conferences cannot follow the middle-level formality of scientific conferences without risking their main purpose, which is to relate scientists of different disciplines to engineers, consultants, practicing professionals and managers. A conference that announces, as its main purpose, that it is a forum for scientists and non-scientists cannot be valued according to the standards of scientific conferences, if any such standards exist.

Conferences of this kind would not, for example, be peer reviewed or peer refereed, and in such a case their proceedings should not be announced as such. Conferences of this kind may accept submissions of very short abstracts, and if so, there is no reviewing process beyond checking whether the topic of the presentation fits some area of the conference. Very important and prestigious conferences have repeatedly and explicitly stated that the papers included in their proceedings cannot be considered peer refereed. For example, the Thirty-Seventh Annual ACM Symposium on Theory of Computing (STOC 2005) stated clearly that:

“The submissions were not refereed, and many of these represent reports of continuing research. It is expected that most of them will appear in a more polished and complete form in scientific journals.”

Ronald Fagin (IBM Almaden Research Center),
Foreword, STOC 2005 Program Chair.

Another example can be found in the proceedings of the 46th Annual Symposium on Foundations of Computer Science (FOCS ’05), in which the foreword, written by Éva Tardos, FOCS ’05 Program Committee Chair, included the text shown above in exactly the same words.

The proceedings of FOCS 2004 provide another example. The following text, similar to the one shown above, was included in its foreword:

“The submissions were reviewed as carefully as time permitted, but they were not formally refereed, it is expected that most of them will appear in a more polished and complete form in scientific journals.”

Eli Upfal,
Foreword, FOCS 2004 Program Committee Chair.

If highly scientific, focused and prestigious conferences do not carry out peer review, and explicitly say so, even less should it be expected of interdisciplinary conferences whose stated objective is to serve as a forum for the interchange of ideas and oral interaction not just among scientists but also with engineers, practicing professionals, consultants and managers.

WMSCI Conferences, and others organized in their context and/or by the International Institute of Informatics and Systemics (IIIS), have been accepting submissions of abstracts and draft papers. Abstracts are reviewed on the basis of their topic and the intent of the author. Draft papers are reviewed in order to identify the best 25%-30% of the submissions, whose authors are then invited to publish them in the related journals. All papers presented at the Conference go through another selection process: each session chair selects the best paper of his or her session, and the journal reviewers select the best 30% of these sessions’ best papers, in order to invite their authors to publish a journal version of them. In these Conferences, research papers are only part of the submissions accepted; position papers, case studies, tutorials, reports, panel presentations, etc., are also accepted. Consequently, formal peer review of scientific papers cannot be applied, at least not for the proceedings, although it may be applied later for the selection of the best papers for publication in the journal related to the conference or in other journals. Consequently, in these Conferences there is no target refusal ratio for papers, although there is such a target for journal publications.


Acceptance policy for papers to be presented at the Conference and, hence, to be included in the Conferences’ Proceedings

The acceptance policy usually applied to submissions made to WMSCI, to the symposia organized in its context, to the collocated Conferences, and to other conferences organized by the International Institute of Informatics and Systemics (IIIS) is guided by:

  1. The majority rule, when there is no agreement among the reviewers regarding the acceptance or non-acceptance of a given submission.
  2. Non-acceptance of a submission when there is agreement among its reviewers not to accept it.
  3. Acceptance of the paper when in doubt (a draw or tie among the opinions of the reviewers, for example).
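
To make the combined effect of these three rules explicit, the following minimal Python sketch (our illustration only, not software actually used by IIIS) expresses the decision procedure for a list of reviewer recommendations:

    # Illustrative sketch of the three-rule acceptance policy.
    # Each recommendation is either "accept" or "reject".
    def decide(reviews):
        accepts = sum(1 for r in reviews if r == "accept")
        rejects = sum(1 for r in reviews if r == "reject")
        if rejects > 0 and accepts == 0:
            return "reject"  # rule 2: reviewers agree on non-acceptance
        if accepts != rejects:
            return "accept" if accepts > rejects else "reject"  # rule 1: majority
        return "accept"  # rule 3: when in doubt (a tie), accept

    print(decide(["accept", "reject", "accept"]))  # accept, by majority
    print(decide(["reject", "reject"]))            # reject, by agreement on refusal
    print(decide(["accept", "reject"]))            # accept, tie resolved by rule 3

Under this procedure, a submission is refused only when the recommendations against it either outnumber or unanimously exclude those in its favor.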

The reasoning supporting this acceptance policy is based on well-established facts:

  • The usually low level of agreement among reviewers.
  • The significant probability of refusing high-quality papers when the acceptance policy accepts only those papers whose acceptance meets no disagreement.
  • The possibility of plagiarism (by some non-ethical reviewer) of the content of non-accepted papers.

Let us briefly discuss these facts and provide information about some of the sources that report them.


Some Weaknesses of Peer Reviewing


Low level of agreement among reviewers

David Lazarus, Editor-in-Chief in 1982 for the American Physical Society, which publishes The Physical Review, Physical Review Letters and Reviews of Modern Physics, asserted that “In only about 10-15% of cases do two referees agree on acceptance or rejection the first time around”. In the special case of the organization of a conference and its respective proceedings, there is usually no time for a re-submission, so we can infer with a significant level of certainty that, in conference reviewing, the reviewers of a given submission will agree in only about 10-15% of cases (Lazarus, 1982).

Lindsay (1988) arrived at a similar conclusion: after reviewing the literature on interjudge reliability in the manuscript reviewing process, he “concludes that researchers agree that reliability is quite low” (in Speck, 1993, p. 113).

Michael Mahoney, who conducted several studies of peer reviewing processes, “criticizes the journal publication system, because it is unreliable and prejudicial” (Speck, 1993, p. 127). For example, he said that referees’ comments “are so divergent that one wonders whether they [referees] were actually reading the same manuscript” (Mahoney, 1976, p. 90). “To reform the journal publishing systems, Mahoney [1990] recommends eliminating referees or using graduate students as referees” (Speck, 1993).

Ernst and colleagues sent the same manuscript to 45 experts for review. Each of the experts held editorial board appointments with journals that publish articles in areas similar to that of the submitted paper. 20% rated the manuscript as excellent and recommended its acceptance, 12% found the statistics of the manuscript unacceptable, 10% recommended its rejection, and the rest classified the manuscript as good or fair (Ernst et al., 1993). Furthermore, they asked the experts to evaluate the paper against eight measures of quality. Almost every measure received both the best and the worst evaluation from the reviewers. Ernst and colleagues concluded that “the absence of reliability…seems unacceptable for anyone aspiring to publish in peer-reviewed journals” (p. 296).

If peer reviewing is so unreliable and “philosophically faulty at its core” (as Horrobin, 1982, affirmed) for journals and research funding, then it will be even less reliable in conference organization. This is why, in our opinion, more and more conference reviewing is being done on abstracts or extended abstracts, rather than on full papers. Some conferences stress the fact that any submission exceeding a given word limit will not be considered.

Weller (2002) summarized 40 studies of reviewing reliability in 32 journals and concluded that, according to all these studies, “An average of 44.9 percent of the reviewers agree when they make a rejection recommendation while an average of 22.0 percent agree when they make an acceptance recommendation” (p. 193). This means that “reviewers are twice as likely to agree on rejection than on acceptance” (p. 193). This fact strongly supports our acceptance policy, which relies on the reviewers’ agreement on the non-acceptance of a submission rather than on their agreement regarding its acceptance.

Other authors reached similar conclusions. Franz Ingelfinger (1974), former editor of the New England Journal of Medicine, affirmed that “outstandingly poor papers… are recognized with reasonable consistency” (p. 342; from Weller, 2002, p. 193). This fact lends further support to a policy based on reviewers’ agreement on rejection.

Weller (2002) found that journal editors seek more reviews when reviewers disagree. She affirms that “between 30 percent and 40 percent of medical journal editors opted for more review when reviewers disagreed; the rest resolved the disagreement by themselves, sought an input from an associate editor or discussed the next steps at an additional meeting” (p. 196). Wilkes and Kravitz (1995) obtained similar results after examining editorial policies at 221 leading medical journals. They found that “43 percent of responding editors sent manuscripts with opposing recommendations from reviewers out for more reviews” (cited in Weller, 2002, p. 196).

Furthermore, sending the manuscript to more reviewers does not necessarily solve the disagreement problem faced by the journal editor: the high level of disagreement that Ernst and colleagues found was based on a study in which a manuscript was sent to 45 experts (Ernst et al., 1993, p. 296). Additionally, in conference reviewing processes, the inherent time restrictions make it unfeasible to send manuscripts to more reviewers when the original reviewers disagree. Consequently, when the reviewers of a given submission disagree (and this happens most of the time), a decision must be made by the organizers or the Selection Committee. If this decision is to not accept the paper, high-quality papers might be left out, as we will explain below, and reviewers of low ethical standards might find the opportunity to plagiarize ideas from the non-accepted paper. We will also give some details below on this issue.

If we take the facts mentioned so far into account, two basic acceptance policies remain for the selection of papers to be presented at a conference:

  1. To accept only those papers whose acceptance the reviewers have agreed on.
  2. To refuse, or not accept, only those papers whose refusal the reviewers have agreed on.

In the first case the conference will have a very low acceptance rate and a higher probability of rejecting very good papers, with no guarantee of improving the average quality of the papers accepted for presentation. Let us briefly explain this statement.

Probability of Refusing High Quality Papers and How to Diminish It

Campanario (1995) affirms that eight authors won the Nobel Prize after their prize-winning ideas were initially rejected by reviewers and editors. He also found that about 11 percent of the most-cited articles were first refused, and that the three most-cited articles, of a set of 205, were initially rejected and eventually accepted by another journal editor (Campanario, 1996, p. 302). Rejection of innovative ideas is one of the most frequently reported weaknesses of peer reviewing, and an increasing number of authors perceive this kind of reviewer bias. In a survey conducted by the National Cancer Institute in which “active, resilient, generally successful scientist researchers” were interviewed, only 17.7 percent of them disagreed with the statement “reviewers are reluctant to support unorthodox or high-risk research”; 60.8 percent agreed, and 21.4 percent were neutral. Federal agencies have tried to counterbalance the reviewers’ bias against new ideas by providing grants without reviewing support. Chubin and Hackett (1990) affirm that an example of this kind of “strategy is the recent [1990] decision by NSF [National Science Foundation] allowing each program to set aside up to 5 percent of its budget for one-time grants of no more than $50,000 to be awarded, without external review, in support of risky, innovative proposals” (p. 201). This is one of the reasons why, in WMSCI Conferences, we accepted non-reviewed papers in the past, taking on the intrinsic risks of this kind of acceptance. Deception was a risk that was not perceived at the time of examining the risks of this acceptance policy.

So, it is evident that acceptance policies based on the positive agreement of reviewers will increase the probability of refusing good papers. The larger the level of agreement required among reviewers in order to accept a paper, the higher the probability of refusing a very good paper; although it is also true that the larger the required level of agreement, the lower the probability of accepting a low-quality paper. Consequently, it is a matter of a trade-off: increasing the certainty of refusing poor papers has the cost of increasing the probability of refusing good papers. How this trade-off is resolved will depend on the journal’s or the conference’s quality objectives: whether they are oriented toward refusing low-quality papers at the cost of risking the refusal of good papers, or toward increasing the average quality of what is accepted. In the first case, the selection criterion would be the reviewers’ agreement on acceptance; in the second, it is better related to the reviewers’ agreement on recommending non-acceptance. WMSCI Conferences (as well as their collocated Conferences and others organized by IIIS) have mostly based acceptance on agreement among reviewers recommending refusal, or non-acceptance. Papers with disagreement among the reviewers have usually been accepted, based mostly on a majority rule. This policy may be improved by the two-tier reviewing being applied for the 2025 Conferences, in which double-blind reviewing is complemented by non-blind, or open, reviewing.
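
The trade-off just described can be illustrated numerically. The following Monte Carlo sketch is purely hypothetical: every parameter (the share of high-quality submissions, the reviewers’ accuracy, the number of reviewers) is an assumption chosen for illustration, not data from any actual conference. It compares the probability of refusing a good paper under the two policies listed above:

    import random

    # Hypothetical simulation; all parameters below are assumptions.
    random.seed(1)
    N_PAPERS, N_REVIEWERS = 100_000, 2
    P_GOOD = 0.3             # assumed share of high-quality submissions
    P_ACCEPT_IF_GOOD = 0.6   # a noisy reviewer votes "accept" on a good
    P_ACCEPT_IF_POOR = 0.3   # paper 60% of the time, on a poor one 30%

    refused_good_1 = refused_good_2 = n_good = 0
    for _ in range(N_PAPERS):
        good = random.random() < P_GOOD
        n_good += good
        p = P_ACCEPT_IF_GOOD if good else P_ACCEPT_IF_POOR
        votes = [random.random() < p for _ in range(N_REVIEWERS)]
        if good and not all(votes):  # policy 1: accept only on full agreement
            refused_good_1 += 1
        if good and not any(votes):  # policy 2: refuse only on full agreement
            refused_good_2 += 1

    print(f"good papers refused under policy 1: {refused_good_1 / n_good:.1%}")
    print(f"good papers refused under policy 2: {refused_good_2 / n_good:.1%}")
    # Expected values: about 64% (1 - 0.6**2) under policy 1,
    # and about 16% (0.4**2) under policy 2.

Under these assumed numbers, requiring agreement on acceptance refuses roughly four times as many good papers as requiring agreement on refusal; this is precisely the cost that the acceptance policy described above is designed to avoid.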

Furthermore, no study has related low acceptance rates, or high refusal rates, to high quality. Moravcsik (1982) asserts that “the rejection rate in the best physics journals is more like 20-30% and not 80%” (p. 228). Weller (2002) examined about 65 studies related to the consequences of rejection rates and concluded that “the relationship between rejection rates and the importance of a journal has not been established. What has been established is merely that the more selective the criteria for including a journal in a study, the higher the rejection rate is for that journal. Almost every study discussed in this chapter –Weller emphasizes– has supported this finding, regardless of discipline. Each discipline has a set of journals with both high and low rejection rates; how these are translated into journal quality needs to be further investigated” (p. 71).

Consequently, selecting the first option as an acceptance policy for a conference has no proven quality benefit (related to its high refusal rate) and one proven quality risk, namely refusing good papers because of the reviewers’ bias against new ideas or new paradigms. Therefore, it seems evident that the second of the two options stated above will have, in conference organization, a better cost/benefit ratio regarding average quality than the first option. This is especially true if we take into account that “reviewers are twice as likely to agree on rejection than on acceptance”, as well as the time and other inherent restrictions of conference reviewing processes.

The low reliability of peer reviewing and the low level of agreement among reviewers of the same manuscript are among the weaknesses that have contributed to skepticism regarding its real value, effectiveness and usefulness. Some authors and editors went as far as to compare peer reviewing to chance. Let us show a sample of such statements. Lindsay (1979), for example, said that “interrater agreement is just a little better than what would be expected if manuscripts were selected by chance” (cited in Speck, 1993, p. 115). Nine years later, Lindsay was even more emphatic, titling his paper “Assessing Precision in the Manuscript Review Process: A Little Better than a Dice Roll” (Lindsay, 1988; cited in Speck, 1993, p. 113).

Possibilities of Plagiarism and Fraud Generated by the Reviewing Process and How to Reduce Them

One of the explicitly stated functions of a conference and its proceedings is to be “a place to claim priority” (Walker and Hurt, 1990, p. 79). This may counterbalance the plagiarism reported in journal peer reviewing, especially if we take into account that another explicitly stated function of conferences and their proceedings is the informal publication that may precede the formal publication of the respective research in a journal. These two complementary functions have been, and will continue to be, seriously taken into account in the organization of WMSCI Conferences.

Among the conclusions Weller (2002) reached in her book, after examining more than 200 studies on peer reviewing in more than 300 journals, she affirmed that “Asking someone to volunteer personal time evaluating the work of another, possibly a competitor, by its very nature invites a host of potential problems, anywhere from holding a manuscript and not reviewing it to a careless review to fraudulent behavior” (p. 306).

Chubin and Hackett (1990) describe the same kinds of situations, in which a competitor’s manuscript is blocked or delayed, or its results or arguments are stolen.

An epitome of peer reviewing resulting in plagiarism, in which results or arguments were stolen, is what has become known as the Yale Scandal. In a two-part article in Science entitled “Imbroglio at Yale: Emergence of a Fraud,” William J. Broad (1980) thoroughly described such a fraud or plagiarism. Moran (1998) summarized it in the following terms: “A junior researcher at NIH [National Institutes of Health], Helena Wachslicht-Roadbard, submitted an article to the New England Journal of Medicine (NEJOM). Her supervisor, Jesse Roth, was coauthor. An anonymous reviewer for NEJOM, Professor Philip Felig of Yale [‘a distinguished researcher with more than 200 publications who held an endowed chair at Yale and was vice chairman of the department of Medicine’ (Broad, 1980, p. 38)], recommended rejection. Before returning its negative recommendation to NEJOM, Felig and his associate, Vijay Soman, read and commented on it. Soman made a photocopy of the manuscript, which he used for an article of his own in the same area of research. Soman sent his manuscript to the American Journal of Medicine, where Soman’s boss, Philip Felig, was an associate editor. Felig was also coauthor of the article. The manuscript was sent out for peer review to Roth, who had his assistant, Roadbard, read it. She read it and spotted plagiarism, ‘complete with verbatim passages.’ (Broad, 1980, p. 39)…Roadbard sent a letter to NEJOM editor Arnold Relman, along with a photocopy of the Soman-Felig article. Relman was quoted as saying the plagiarism was ‘trivial’, that it was ‘bad judgment’ for Soman to have copied some of Roadbard’s work, and that it was a ‘conflict of interest’ for Soman and Felig to referee Roadbard’s paper (Broad, 1980, p. 39). Relman then called Felig, who said, according to Broad (1980), that peer-review judgment was based on the low quality of Roadbard’s paper, and that the work on the Soman-Felig paper had been completed before Felig received the Roadbard manuscript (Broad stated that this last statement by Felig was incorrect)…Relman published the Roadbard paper, in revised form. Roth called Felig (a long-time friend from school days) and they met to discuss the two papers, for which they were either coauthors or reviewers. Broad (1980) stated that prior to the meeting ‘Felig had not compared the Soman manuscript to the Roadbard manuscript’ (p. 39), even though Felig was coauthor of one article and referee for the other! When he returned to Yale, Felig questioned Soman, who admitted he used the Roadbard manuscript to write the Soman-Felig paper…Broad (1980) reported that Roadbard and Roth began to express disagreement about the extent of plagiarism involved. Roadbard wrote to the Dean of Yale’s School of Medicine, Robert Berliner, who did not believe all that she wrote. He was quoted as writing back to her, ‘I hope you will consider the matter closed’ (p. 38). NIH apparently put off (by dragging their feet or by stonewalling) an investigation. A subsequent audit of the records revealed, according to Broad, a ‘gross misrepresentation’ (p. 41). Soman admitted that he falsified, but claimed it was not ‘significantly different from what went on elsewhere’ (p. 41). After further investigations, at least 11 papers were retracted. Soman was asked to resign from Yale University, which he did.
Felig became Chairman of Medicine at the Columbia College of Physicians and Surgeons.” (Moran, p. 69) “After two months [in this position], Philip was forced to resign…At issue was a scandal that rocked the laboratory of one of Felig’s associates and coauthors back at Yale Medical School, where Felig previously worked.” (Broad, 1980, p. 38) Helena Wachslicht-Roadbard spent a year and a half writing letters, making phone calls, threatening to denounce Soman and Felig at national meetings, and threatening to quit her job. She wanted an investigation and she got it (Broad, 1980, p. 38).

Several cases like the Soman-Felig scandal have been reported, but, as Moran (1998) affirms, it is “impossible to tell precisely how many attempts at plagiarism by means of peer review secrecy have been successful” (p. 118). It is plausible that this kind of plagiarism is more frequent when manuscripts coming from what is called the Third World are reviewed by reviewers from the First World. Verbal reports abound on this issue.

If one of the functions of a conference is to be “a place to claim priority”, then conference organizers should take adequate measures to prevent their reviewing process from generating opportunities for plagiarism by some of its reviewers. One way to achieve this objective is to have a policy of “when in doubt, accept the paper” as opposed to a policy of “when in doubt, refuse the paper”. Arnold Relman, editor of the New England Journal of Medicine, had another reviewer who suggested accepting Helena Wachslicht-Roadbard’s paper. Had he accepted it, there would have been no opportunity for the plagiarism made possible by his reviewing process. This reinforces what we stated above with regard to WMSCI’s acceptance policy, mostly based on agreement among reviewers recommending refusal, or non-acceptance. Papers with disagreement among the reviewers have usually been accepted, based mostly on a majority rule. This policy might be improved by adding to it Gordon’s optional published refereeing. Accordingly, as we said above: when the reviews of a paper are inconclusive, the paper may be accepted under the condition that its presentation at the Conference, and its publication in the proceedings, be accompanied by the respective reviewers’ comments.

Conclusions with Regard to Our Acceptance Policy

The acceptance policy we have described has its quality benefits and its quality costs. The costs may be: 1) an increase in the number of low-quality papers accepted (which, as we argued above, is counterbalanced by an increase in the probability of accepting good papers that would otherwise have been refused); and 2) an increase in the probability of effective deceptions, or the acceptance of bogus papers. The benefits may be: 1) an increase in the average quality of the papers (due to the increased probability of accepting high-quality, paradigm-shifting papers that would otherwise have been refused); and 2) a decrease in the probability of plagiarism by some of the Conference’s reviewers.


Professor Nagib Callaos

IIIS’ President

References

Broad, W. J., 1980, Imbroglio at Yale: Emergence of a fraud, Science, 210(4465), October, pp. 38-41.

Campanario, J. M., 1995, On influential books and journal articles initially rejected because of negative referee evaluations, Science Communication, 16(3), March, pp. 304-325.

Campanario, J. M., 1996, Have referees rejected some of the most-cited articles of all times?, Journal of the American Society for Information Science, 47(4), April, pp. 302-310.

Chubin, D. E. and Hackett, E. J., 1990, Peerless Science: Peer Review and U.S. Science Policy, New York: State University of New York Press.

Ernst, E., Saradeth, T. and Resch, K. L., 1993, Drawbacks of peer review, Nature, 363, p. 296, May.

Horrobin, D. F., 1982, Peer Review: A Philosophically Faulty Concept which is Proving Disastrous for Science, The Behavioral and Brain Sciences, 5(2), June, pp. 217-218.

Lazarus, D., 1982, Interreferee agreement and acceptance rates in physics, The Behavioral and Brain Sciences, 5(2), June.

Lindsay, D., 1979, Imprecision in the Manuscript Review Process, in Proceedings 1979 S[ociety for] S[cholarly] P[ublishing], pp. 63-66, Washington: Society for Scholarly Publishing, 1980.

Lindsay, D., 1988, Assessing Precision in the Manuscript Review Process: A Little Better than a Dice Roll, Scientometrics, 14(1-2).

Mahoney, M. J., 1977, Publication prejudices: an experimental study of confirmatory bias in the peer review system, Cognitive Therapy and Research, 1(2), pp. 161-175. Cited by Speck, 1993, Publication Peer Review: An Annotated Bibliography, Westport, Connecticut: Greenwood Press, p. 127.

Mahoney, M. J., 1990, Bias, Controversy, and Abuse in the Study of the Scientific Publication System, Science, Technology and Human Values, 15(1), pp. 50-55. Cited by Speck, 1993, Publication Peer Review: An Annotated Bibliography, Westport, Connecticut: Greenwood Press, p. 127.

Moran, G., 1998, Silencing Scientists and Scholars in Other Fields: Power, Paradigm Controls, Peer Review, and Scholarly Communications, London, England: Ablex Publishing Corporation.

Moravcsik, M., 1982, Rejecting published work: It couldn’t happen in Physics! (or could it?), The Behavioral and Brain Sciences, 5(2), June, p. 229.

Speck, B. W., 1993, Publication Peer Review: An Annotated Bibliography, Westport, Connecticut: Greenwood Press.

Walker, R. D. and Hurt, C. D., 1990, Scientific and Technical Literature, Chicago: American Library Association.

Weller, A. C., 2002, Editorial Peer Review: Its Strengths and Weaknesses, Medford, New Jersey: Information Today.

Wilkes, M. S. and Kravitz, R. L., 1995, Policies, practices, and attitudes of North American medical journal editors, Journal of General Internal Medicine, 10(8), pp. 443-450.



