Fixing Peer Review

July 10th – August 10th

At Project Aiur we are building tools for fact-checking scientific knowledge, with the goal of fixing the incredibly misaligned incentives of academic publishing. As discussed in our recent blog post, one of the major issues in the current system is a peer review process that lacks openness and accountability.

Peer review is supposed to be a process where academic peers provide feedback on research through expert analysis. It is of massive importance, not only to the researchers whose careers depend on the reviews, but also to the entire system of academic communication. In order to improve transparency and openness in peer review, we’re inviting you to join us in ideating ways to fix the system!


Share your ideas on how the peer review system should work & receive a token airdrop!

We invite you to join the Project Aiur community and ideate ways to tackle the challenges around peer review through this airdrop campaign. An airdrop is a way to award tokens free of charge to the campaign participants. As part of this airdrop we give 5,000 AIUR tokens (worth an estimated €20,000 at the intended token sale price) as a reward to researchers and students who submit ideas and vote for the best ones.

The ideation ran in these steps:

  • In the ideation phase we gathered your ideas on how to fix the problems of the current peer review system. Ideas were tweeted with the tag @ProjectAiur and the hashtag #MyLifeForScience.

  • In the voting phase the community voted for the top 3 ideas.


View the results

The results of the voting can be found in the table below. Thank you for participating!

* We at team Aiur will let you know via email how to cast your votes after the idea submission phase is completed. Please note that we reserve the right to discard submissions that are clearly not ideas.

The AIUR tokens will be distributed as follows:

  • 2 AIUR tokens for each submitted idea

  • 1 AIUR token for voting for the top 3 ideas

  • Prize for the providers of the top 3 ideas:
    20 AIUR tokens for the best idea
    12 AIUR tokens for the second best idea
    6 AIUR tokens for the third best idea


  • The peer review ideation Aiur Airdrop is only available for researchers and students. To join you need to have a valid university email address or a link to a research paper that you’ve authored.

  • If you participated in the first rounds of Aiur Airdrops, you are invited to join unless you have already surpassed the limit of 60 AIUR tokens.
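The distribution rules above can be sketched as a short calculation. This is an illustrative sketch only; the function and variable names are ours, not part of the campaign, but the figures (2 tokens per idea, 1 for voting, the TOP 3 prizes, and the 60-token cap) come from the rules above:

```python
# Hypothetical sketch of the airdrop reward rules described above.
# Figures come from the campaign text; the 60-token cap applies across
# all Aiur Airdrop rounds and the Bounty program combined.

TOP_PRIZES = {1: 20, 2: 12, 3: 6}  # bonus for the TOP 3 ideas
CAP = 60  # lifetime limit per participant

def airdrop_reward(ideas_submitted, voted, top_rank=None, already_earned=0):
    reward = 2 * ideas_submitted           # 2 AIUR per submitted idea
    reward += 1 if voted else 0            # 1 AIUR for voting for the top 3
    reward += TOP_PRIZES.get(top_rank, 0)  # prize if the idea ranked TOP 3
    # Never exceed the 60-token overall limit.
    return min(reward, CAP - already_earned)

print(airdrop_reward(3, True, top_rank=1))  # 3 ideas + vote + best idea → 27
```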

Top voted ideas

See the best ideas selected by our community

Idea description (TOP 3 rankings shown in parentheses)
Reward authors of papers with coins if, down the road, their paper turns out to be right. For example, if I share an idea in a paper today, and four years later it turns out to be a major breakthrough, I would be given coins. If what I wrote turned out to be wrong or not very important, nothing would happen. I suggest this as a way to encourage people to work on solving problems vs. gaining citations. (TOP 1)
Consider creating a real-time collaborative system among the lead researcher, other students or researchers, and companies or organizations who might use the research. For example, I might write about an experiment that I want to do growing food in zero gravity, and I could write up the experiment and invite NASA, a major food company, and a farm labor union to run the experiment with me and share their results simultaneously. This would be to speed up the R&D process by running lots of simultaneous experiments. As the originator of the experiment I would receive a certain amount of coin and the participating entities would also receive a certain amount of coin. There would also be a system for rewarding the originator and participants if one of the participants commercializes the experiment (and this reward could potentially replace the patent system or create a new hybrid patent system of payments and shared IP.) 
Peer review can be difficult, especially when reading papers from different journals. All seem to use different styles. Personally, I do not care too much about formal editing, but the paper must be readable and organized. Often this is not the case, which could prevent good results from being published. It is therefore important for new open access journals to standardize reporting of scientific data and results. Get rid of formal language and editing. Use a type of generic reference handling. Use wizards to guide researchers to fill in all parts, including references to other work. Demand at least one reference to a fresh review paper which the authors think is relevant and which can put their paper into context. Do everything possible to standardize the way papers are organized and presented. In this way, we get used to a certain look of the papers, will navigate the paper easily, and quickly make up an opinion about the novelty and value of the paper. The authors should be forced to reference “ONE RECENT REVIEW PAPER” best covering the scientific field, giving a minimum reading for the reviewers for comparing results and valuing the new paper. The authors would be advised to choose this review paper carefully. (TOP 3)
We need to get rid of the rubbish and extract the gold. For understanding the experiments, the reviewer must have access to as many details as possible. Some journals try to hide the experimental part (supplementary etc) which is bad, since this is the most important part of the paper, in my opinion. There should be standardized reporting of experimental conditions, similar to questionnaires that are used to sign up for scientific magazines (“what instruments do you use” etc). Instruments are not that different, so all details could easily be filled in. Also, it is possible to submit the instrument experimental log file in many cases. Reporting of results should be done this way. Figures should be accompanied by a hyperlink to the raw data files, even proprietary ones that need instrument software to be read. Don’t underestimate the reviewers, they often have the same software and can check validity of results. Get reviewers that do have the software to evaluate raw data files. 
I was recently given a published paper by a big publisher, who thought the paper could be of interest to me. I found a fundamental error in the results, and told them the paper was clearly based on a wrong identification (computer algorithm threshold error) and should be retracted. A typical example of an incompetent or too busy reviewer. Reviewers should be forced to compete with each other in deciding whether the paper should be published or not. I do not know how this should be done exactly, but the process could be a type of escalating judgment of the comments and recommendations given by previous reviewers. Hence, more and more reviewers will look at the paper + comments by fellow reviewers, submitting new comments, and this way we could free ourselves of poor / lazy reviews. All participating reviewers could then vote for the final decision. A quick look at a paper by an expert is of more value than a full reading by an incompetent reviewer.
Distributed reproducibility: reviewers often struggle to reproduce computational claims made in scientific papers (this covers various domains, e.g. life sciences or digital history). Hence they cannot always access all aspects of a publication necessary to review the quality. By embedding the computational reproduction of figures, scripts and computational claims into a decentralized knowledge repository, reviewers are given new opportunities and the fairness of the process can be raised. In a blockchain environment where every user can participate and even earn credit by conducting the necessary computations, failed as well as successful reproducibility of scientific papers becomes measurable and begins to work as an incentive for researchers, until eventually the additional efforts in documentation and preparation of the code underlying a paper will be valued in the scientific community. (TOP 1)
Sustainability of the publication system: the use of scientific papers to measure and indicate the scientific quality of a researcher or institution can be exploited by excessively seeding new publications and creating citation cartels. On the other hand, the work of reviewing many papers takes away valuable research time of the reviewers themselves. Opening up this situation by regarding both the publication process and the reviewing process as public transactions could enable the scientific community to initiate anti-inflation arrangements by economic means and thus bring back some of the sustainability that was characteristic of science in the past. This would require an in-depth analysis of the status quo as well as identifying or defining the granular steps of the dissemination of knowledge, e.g. normalizing review instructions across communities.
Distinguish between methodological reviewer and thematic reviewer with respect to the object of research. The methodological review can be performed by any professional with research experience. On the other hand, the thematic review will be conducted by a professional with expertise in the subject matter of the research, as well as in the context in which the research problem develops.
It would not even be necessary for researchers to do the thematic review, it could be done by practitioners with broad thematic expertise. If this is the case, the availability of reviewers would increase, solving a common problem, which is the lack of reviewers. In addition, research would have a practical approach, which is often minimized or ignored by academic researchers.
An understanding of the cultural context in which the research problem develops is also important. It is undeniable that more academic research is being conducted in developed countries. In addition, the most important journals are located in these countries. So papers developed in emerging countries have a disadvantage at the time of peer review, because the reviewers do not know the cultural context in which the paper was written. The comments or observations do not consider this socio-cultural factor, so reviewers should preferably come from socio-cultural areas similar to the context in which the research was developed.
The review times are long. Journal intermediation between researchers and reviewers could be eliminated if smart contracts and blockchain technology are used. This way the papers would reach the reviewers in real time. In addition, the observations and comments of the reviewers would be delivered in real time to the researchers. Once the papers are approved by the reviewers, the researchers will have a paper ready for publication. Thus, journals using the same technology would compete to publish the paper, where less time to publication could be a variable to be considered.
To get high quality peer reviews I think a good idea would be to put the actual peer review in an escrow service (as a middle station) where it gets an evaluation from other sources. This evaluation answers to criteria like “research field/knowledge”, “competitor bias”, “too vague or too detailed”, “methodology” etc. If the peer review scores high from at least X other sources, the reward is also higher. The incentive thus becomes higher to write an unbiased and balanced peer review.
To get more people engaged in writing peer reviews the threshold sometimes must be lowered. One such example would be to make it easier to find projects to do peer reviews on. Imagine building a database of existing projects much like the Kickstarter model, where you can find relevant projects by using a search and filter model, and where incentives to participate are clearly visible. (TOP 2)
Using blockchain technology, reviewers can earn tokens for delivering reviews accurately and on time. The token distribution can be tied to a smart contract for transparency, to avoid bias and fraud. Said tokens can then be used to publish your own work, buy articles or exchange for cash. For instance, 3 punctual reviews delivered can earn 1 self-publication. Citations will be stored, updated and searchable using AI. Scientific journals that provide and update their database of reviewers will be provided with token interests scalable to the “quality” of their database.
Build an academic platform on the network for a direct peer-to-peer based earning/voting system. 
Adopting Agile and Scrum methodologies, the review system should have public “demos” where different stakeholders (academics, citizens, policymakers, etc) could provide feedback on “MVP-like” findings and manuscripts. This means that the incremental development of a research project (especially in relation to outputs) would have an early-feedback phase from different perspectives. 
Authors’ and reviewers’ discussions when accepting and re-writing an accepted paper could be open for scrutiny on a dedicated page, where readers could check the level of discussion and agreement about suggestions, improvements, bias, etc.
A collaborative writing platform that could track all changes properly, for automatically generating a report of the different authors’ real level of contribution based on text provided, comments or edits at different levels. For example, giving a simple chart of the % really written by each author.
The current peer review system generates revenue which is directed towards the publisher/journal, to sustain an infrastructure which is no longer optimal. More efficient publishing is possible with modern electronic infrastructure, and should be used to lower the cost of publishing peer reviewed articles. Revenue could still be generated, and should be directed more towards the reviewers because quality review is the best way to ensure overall value of the system. 
The costs and benefits of interactions with the peer review system are really important, and not trivial. At least the following actions must be considered; please consider the values listed as suggestions or initial values:
1) Author submitting paper: cost should be low (for all nationalities) but not nil
2) Author of accepted paper: submit cost paid back, possible bonus if single round review
3) Reviewer: benefit of access to full text articles, bonus for reviewers whose first review is aligned with the final review result.
4) Individual user access: cost should be fair
5) Organization user access: providing mirror/repository or other way of supporting distributed (cloud) infrastructure for the system should be the way for organizations to earn the benefit of access to full text articles for all their users.
A mostly automated system of picking reviewers should be implemented. The system should pick reviewers that have relevant expert knowledge on the topic of the submitted paper, but also prefer reviewers that have a consistent record of getting it right in the first round of review. That is, if several rounds of review are needed for an article, give credit to reviewers who are aligned earlier with the final review result. It is better for all that the review process takes several rounds only when that is really needed. Also, the system should, if possible, use machine learning to ensure a fair, unbiased review and follow good scientific principles and ethics. This would also influence the preference of reviewers.
Reward Quality – Give incentive for Quality in reviewing and editorial work and not only on Quantity. Editors / PC Chairs involved in taking the final decision on a paper should evaluate reviews in a few simple criteria and at least 50% of the tokens/counts/pay for doing a review should be based on the quality rather than on the fact that one did the review. Simple, shared and “standardized” quality models should be used to judge this so that it is not simply subjective. 
Reward also higher-ups – It is important that people doing work “higher up the food chain”, such as editors in chief and editorial board members, also get rewarded for their work. Some % of the tokens/coins/pay handed out to reviewers could go to these people. This connects to the Reward Quality idea above, in that getting this % could be tied to also grading the reviews that go into a decision. In this way it is clear that the work of the higher-up is valued, but that something needs to be provided in order to get it. I’m an Editor-in-Chief myself and I know that we have to do a lot of work in the back that is often invisible to others but very important for the system as a whole to work.
Reward Open Science practices – Since the quality of peer review is higher the more precise the data, analysis scripts etc. that are provided, authors should be encouraged to use Open Science practices and share their data and scripts, but reviewers should also get something extra for the often extra work involved in reviewing this material.
Make peer review open and collaborative, so that the openness encourages a higher level of mentoring instead of gatekeeping, which authors will better learn from and engage with.
Allow for synchronous reviewing of submissions by editorial board members/reviewers so that reviewing can be more of a dialog where each reviewer can capitalize on their strengths and also get it done more quickly during an hour-ish virtual meeting. 
We need more training of editors to guide the peer review system and welcome newcomers to disciplines. That training should include leadership, management, conflict resolution, diversity and equity, and (foremost) writing-based pedagogy. 
Obvious idea (but foundational to the rest of my ideas): Complete the shift to web-native/online scientific articles and eliminate our over-reliance on PDF files. Make articles web pages. Whatever improvements we want to make to the peer review system, we’ll likely need to embrace the web more to do it.
Advantages of web-native publishing for peer review:
1) Unlimited article length and content types. Attaching data, R code, interactive figures, methods videos, and measures is impractical and expensive with paper journals, but easy to do with web journals.
2) Unlimited number of articles can be published at once. No ‘waiting on the next issue’ backlog. No cutting good articles because they’re not novel enough. Less novelty bias.
3) Enables all-new peer review approaches like crowd-sourced peer-review and post-publication peer review.
4) Allows articles to be updated after publication.
5) And you can still print off a PDF of a web article, but why not just email a link?
No more judging articles pass/fail. Publish every submitted article and rate it on continuous metrics (e.g., ‘1 to 5 stars’) for methods, theoretical contribution, novelty, practical value, etc. Display these rating metrics alongside every paper, and let ratings change/mature over time as studies are replicated and new methods come out.
Advantages over our current ‘unreliable gatekeeper’ model of peer review:
1) Every paper that contributes the slightest insight is published and added to our collective knowledge, leaving no insight on the cutting room floor.
2) It’s better to publish bad papers and indicate how they are bad, than to let bad articles slip into lower-end journals, then list them right alongside good articles in search results.
3) Null-result and replication studies can all be published and celebrated (and rated 5 stars for ‘methods’ and 5 stars for ‘contribution’).
4) Allows sorting/filtering by metric!
Assuming crowd-sourced, online peer review with distinct rating metrics for novelty, methods, practical value, etc. displayed alongside every paper (as pictured in idea #25).
Give reviewers the option of reviewing only one aspect of the paper. Let methods people critique methods, theory people critique the theoretical arguments, and practitioners critique practical value. Weight these ratings by the ratings that the reviewers have received for the same metrics on their own papers.
If the readers could rate the work that was done, e.g. with stars, this would aid in the search for “good articles”. This could be done for the entire article or for different sections, in particular the methods and results sections. In general I think that the impact factor of the journal gives a very wrong picture of each article, and in this way the public can rate the work that was done. Currently the “quality” of an article is indicated by the number of citations, but this can often be misleading, as older articles often have more citations than more recent articles.
I also believe that there is a lot of good research out there which was never published, due both to peer review processes and to the time and costs involved in article writing. I also think that a lot of articles overvalue their own results (part of the peer review process: these findings are so great etc.). Thus short communications, e.g. presentation of results and methods together with a short main conclusion, may be one way to get more research out to the community. Further, the community should have the possibility to discuss and comment on the results.
In my work I would have appreciated if it was easier to find and link relevant articles to each other, regardless of where they were published. Maybe this could be done by computer learning creating linkage maps of some sort. 
How to fix the current peer review system:
Reddit for science: A community based online forum for discussion of topics, papers and reviews. Instead of separate subtopics (“subreddits”), there will be circles for each field of research that connect to each other in a network. One circle/field can be connected to another, and there can be research topics across these fields and they may have a smaller circle that overlaps or connects to the others.
– The members of the circles are the peers. Only scientists with a verified identity can join the circle.
– Each circle has a knowledge repository that consists of all openly available research information and knowledge about the topic/field.
– The knowledge repository is open for the public / everyone. The discussions and reviews are not. They are only for circle members, peers.
– When someone wants to add new information to the knowledge repository, the circle members will review the paper and approve it to be a part of the knowledge repository.
There are some things that need to be figured out for this type of thing.
It needs to have community guidelines and someone to monitor and make sure that everyone is following the rules and guidelines (moderators, administrators etc., like any online forum).
It is probably best to require verification with your full name and the institute you belong to, so that your work and reputation are on the line. Then I hope that people will behave.
A part of the idea is that people can post research questions and the scientists can discuss among themselves. With discussion and talking about problems, maybe more of them can be solved because people with different knowledge will talk together! As long as the discussion climate is good, so that nobody is bullied and no persons are attacked (ref; moderators).
How to submit and review the papers in a fair way is also something that needs to be figured out. An open discussion about a paper might not be the best way, as open critique may not always be the best. People are not always honest (too nice), or if someone is too harsh, it can be worse knowing that all your colleagues are seeing the harsh critique. Maybe there can be some form of submission system, where the members of a circle take turns to review the papers (anonymously?).
This community/forum system can work well together with the token system of a blockchain. Tokens will be rewarded for reviewing and validating other people’s work, and perhaps you could gain tokens for high quality research – something like the karma system on Reddit.
The reddit for science would make the need for journals disappear. The papers would be judged based on the quality of the paper and not on the quality of the journal. There will be a reward for active community members that participate, and science could move forward faster because the scientists have a place to meet and discuss their research in a safe environment.
Change the format of the scientific papers into a two-sided process:
1) First it is submitted to be validated. Someone checks that all the science is correct, so that the results can be trusted. These validators should have access to the raw data.
2) Then it is reviewed and rated based on the quality of the research. The science can be valid but of good or bad quality. And even if the result is “no significance”, it is still a result that should be added to the knowledge repository.
The important thing in my opinion is that the science is valid and that you can trust the results. You could have one paper / group of files in a repository that goes more in depth on the science and how things were done, for the validation process. This might not need to be openly available to the public, but available to peers on request. In addition, one short paper that presents the work to the public with emphasis on the results and discussion, written in a less “scientific” language, so that it is easier to understand.
For this to work, there needs to be a knowledge base with repositories for each field, where one can have different levels of access to the base: public access, research access and maybe an on-demand need-to-know access for validations (for industry raw data that would otherwise not be available to anyone due to competition, patents etc.)
Peer review via open source by qualified academic personnel on a free platform, where authors can submit papers. Paper review by an EU association – such an association can be created with academic staff from multiple universities. A DLT/blockchain-based platform where anyone can participate in the review of the paper.
There should be a two-tier system: papers that do not provide an open-source implementation during the review process on one track and papers that DO provide an open-source implementation during the review process on another track. Over time, the prestige of the open-source track will overshadow that of the other. 
The current peer-review system allows the author of a submitted paper to suggest potential reviewers and potential non-preferred reviewers. In many cases, one might argue that this selection/de-selection of reviewers introduces a publication bias in the review process. Information about this is currently unavailable to anyone but the journal. In an improved review system, any information given by authors about non-preferred/preferred reviewers should be available to the public.
Funding of scientific research is highly related to the amount of publications/publication points. It is important that high quality research groups and highly productive researchers participate in an «open peer review» process. Incentives to participate are therefore necessary, and I suggest that reviewers should be rewarded with “review-points” for their effort in this process, similar to publishing points in the current system.
A possible incentive for participation in an «open peer review» process could be “review-points” for the reviewers’ effort. A “review-point” system should reflect the amount of work/time spent during the review, with more points earned for larger and more extended publications.
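Several of the ideas above propose tying review rewards to a smart contract. As a rough illustration of the accounting in one submission (the “3 punctual reviews delivered can earn 1 self-publication” rule), here is a minimal sketch in Python; the class and method names are hypothetical, and a real implementation would live on-chain:

```python
# Illustrative accounting for the "3 punctual reviews earn 1 self-publication"
# rule suggested in one of the ideas above. Names here are hypothetical; in
# practice this logic would be encoded in a smart contract for transparency.
class ReviewerLedger:
    REVIEWS_PER_PUBLICATION = 3

    def __init__(self):
        self.punctual_reviews = 0
        self.publication_credits = 0

    def record_review(self, on_time: bool):
        if on_time:
            self.punctual_reviews += 1
            # Every third punctual review earns one publication credit.
            if self.punctual_reviews % self.REVIEWS_PER_PUBLICATION == 0:
                self.publication_credits += 1

ledger = ReviewerLedger()
for on_time in [True, True, False, True]:  # one late review does not count
    ledger.record_review(on_time)
print(ledger.publication_credits)  # 1 (three punctual reviews completed)
```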


What is an airdrop?

An airdrop means distributing tokens for free to a preferred audience. In the case of Aiur, we’re giving tokens to students and researchers. In order to participate you need to have an Ethereum wallet (and if you don’t, no worries, we’ll help you to set it up).

This is not your average airdrop campaign, where people around the world with crypto wallets get spammed. Quite the opposite: we will only airdrop tokens to validated researchers who request them after completing simple tasks.

What is project Aiur about?

Project Aiur aims to democratize science through blockchain-enabled disintermediation.

There are a number of problems in the world of science today hampering global progress. In an almost monopolized industry with incentive misalignments, a radical change is needed. The only way to change this is with a grassroots movement – of researchers and scientists, librarians, scientific societies, R&D departments, universities, students, and innovators – coming together. We need to remove the powerful intermediaries, create new incentive structures, build commonly owned tools to validate all research, and build a common Validated Repository of human knowledge. A combination of blockchain and artificial intelligence provides the technology framework, but as with all research, the scientist herself needs to be at the center. We hope you’ll join us.

Read more about Aiur

Why distribute tokens for free?

Simply because we believe in building a solid foundation for our community. The peer review ideation Aiur Airdrop is an invitation for students and researchers to join the project community.

How many tokens will be distributed as part of the airdrop campaign?

Through the third round of our airdrop campaign 5,000 tokens (worth up to €20,000) will be distributed free of charge to the participants as an invitation to join the community and to spread the message about Project Aiur together with us. Each participant can get up to 60 AIUR tokens through all the three Airdrop campaigns and the Aiur Bounty program combined. Detailed information on the token distribution of the Aiur Airdrop Phase 3 is available above.

When will I see the tokens in my wallet?

In order to maintain a healthy and balanced ecosystem, we will distribute the airdrop tokens to our community after our public sale has been successfully completed, when the token can be exchanged within our ecosystem.

Who is the airdrop campaign for?

The Aiur Airdrop is for students and researchers who can prove their university affiliation by either sending us their university email address or a link to a research paper that they have authored.

Is there going to be a token sale?

Yes, the token sale is scheduled to begin in September. Learn more on our website:

What can I do with the tokens?

That’s a great question. Read more about Project Aiur here:

How can I create an Ethereum wallet?

To participate in the Aiur Airdrop, you need to set up an Ethereum ERC20-compatible wallet for which you hold the private keys. Your private keys are necessary for interacting with smart contracts to transfer and receive tokens.

There are a number of wallets that support Ethereum ERC20 tokens. One of the most popular ones is MyEtherWallet. Here’s how to set it up:

  1. Go to
  2. Create a password. Use a combination of letters, numbers and symbols to make it as strong as possible. Then, click “Create New Wallet.”
  3. Download and store your keystore file. Store your keystore file in a secure location and click continue.
  4. Save your private key. It may look just like a string of symbols, but this is your private key and its safety is of critical importance. Note that there is no way to retrieve a forgotten or lost private key or password, so take all the necessary safety measures suggested by the page.
  5. Use your private key or keystore file to open your wallet.
  6. Congratulations, you’ve just opened your fully functional Ethereum wallet. Next time you want to access it, go to, click “View Wallet Info” in the top right corner and authorize yourself again.
  7. Send us your public Ethereum wallet ID to claim your tokens. To receive Aiur tokens, send your public Ethereum wallet address (the “0x45…” number) to us. We will send you the tokens upon successful completion of the main sale. By checking “View Wallet Info” you’ll be able to see your balance.
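Before sending your address in step 7, it can help to sanity-check its format. This is a minimal, illustrative check only: it validates the shape of an address (0x followed by 40 hexadecimal characters) and does not verify the EIP-55 checksum or that the wallet actually exists:

```python
import re

# Minimal format check for the public Ethereum wallet address from step 7.
# Only the shape is validated: "0x" followed by exactly 40 hex characters.
def looks_like_eth_address(addr: str) -> bool:
    return re.fullmatch(r"0x[0-9a-fA-F]{40}", addr) is not None

print(looks_like_eth_address("0x45" + "ab" * 19))  # True  (42 chars total)
print(looks_like_eth_address("0x45"))              # False (too short)
```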

Terms and conditions for the Aiur Airdrop

  1. The Aiur Airdrop provides all its supporters an opportunity to get free Aiur Tokens.
  2. In order to participate in the Aiur Airdrop campaign, the participant needs to have a valid Ethereum Wallet.
  3. may stop running the Aiur Airdrop campaign at any time for any reason.
  4. Tokens will be distributed after the Aiur public token sale has been successfully completed.
  5. The value of Aiur tokens depends on several factors. We currently expect the price of AIUR tokens to be initially set at ETH 0.01.
  6. All participants of the token sale are responsible for providing all the necessary information to distribute the tokens.
  7. Due to regulatory uncertainty with regards to the treatment of utility token offerings, AIUR tokens will not be advertised or sold to some countries and jurisdictions. Please check our AML Policy.
  8. There’s a limit of 60 AIUR tokens that one participant can gain from all the three Aiur Airdrop campaigns and the Aiur Bounty program combined.
  9. More details on how we handle data are available in our Privacy Policy, and the general terms for Project Aiur are available in our Terms of Service.