State of the Map/Planning/Scoring session proposals
This step covers the scoring of session proposals. The final selection and the development of a program are covered elsewhere.
The main steps are:
- Decide whether to use a selection committee alone, or a committee plus public scoring.
- Establish a broad selection committee.
- Prepare the proposals for scoring (prepare the spreadsheet/survey).
- Gather scores.
Before you start, you should familiarise yourself with the information below. The State of the Map working group has copies from previous years that it can share on request. You may also want to review the guidance of other events such as FOSS4G.
Committee vs Public
Traditionally, the international State of the Map has used a small selection committee to score the proposals and make the final selections. An alternative approach is to add a public mechanism for scoring the talks. This brings some advantages and disadvantages:
Advantages
- You get a lot of people involved, quickly.
- It is a good way to build community engagement.
- It enables you to keep the selection committee to a small number of people (which will keep discussions brief).
Disadvantages
- The people who participate might not be the people who attend SotM.
- You don't know whether it represents a balance of views (see the vision).
- Technically more challenging (requires specialist survey tools).
- Could give the impression of a binding vote.
If a public survey is used, it should always be in addition to the selection committee; the results should be treated as guidance, not as a binding vote.
Selection committee
Forming a committee
A selection committee needs to be formed in order to assess the session proposals and select the successful talks and workshops for inclusion in the program. Depending upon the number of proposals received, the selection committee should be between 5 and 10 people.
The committee should include members from the permanent (remote) State of the Map working group, in addition to members from the local host community. It may also include people from outside the State of the Map working group in order to bring in certain expertise. The aim is to have a group that represents a wide range of people and interests and covers the areas set out in the vision.
The selection committee should be formed as early as possible (during or before the call for proposals).
Scoring process
These are the key process steps. They are broadly in line with the scholarship scoring process.
- The application form is closed to new applicants.
- Prepare the spreadsheet:
- Sort the data by proposal type (Talk, Lightning talk, Workshop). This helps compare proposals from the same session type.
- A "ref" (reference) column. This makes it easier to refer to specific proposals. You may want separate references for Talks (T1, T2, ...), Lightning Talks (L1, L2,...) and Workshops (W1, W2,...).
- Make a copy of the responses (e.g. by duplicating the worksheet) and protect the original to prevent changes (e.g. using the "protect range" setting within the spreadsheet). The copy becomes the new working sheet. You may want to separate Talks, Lightning Talks and Workshops onto separate sheets at this stage.
- On the working sheet(s), add extra columns where they help present the data (e.g. to split multiple checkbox selections into individual columns).
- Add an extra column for each member of the selection committee to record their scores.
- Score the written responses:
- Each member reads and scores the proposals. Scoring is a simple "yes", "no" or "maybe" (scored as +1, -1 and 0 respectively); a sketch for tallying these scores follows this list.
- If there are too many proposals, the workload can be reduced: for example, one subgroup of the selection committee can score the first half of the proposals whilst the other subgroup scores the remainder. Ensure that at least 4 people score each proposal.
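As a minimal sketch of the preparation step, the snippet below assigns reference IDs and splits the proposals by session type. It assumes the form responses have been exported as a CSV file named proposals.csv with a "Type" column; the file and column names are illustrative, not part of any particular form tool's export format.

```python
import csv
from collections import defaultdict

# Hypothetical input: form responses exported as proposals.csv with a "Type" column.
PREFIXES = {"Talk": "T", "Lightning talk": "L", "Workshop": "W"}

by_type = defaultdict(list)
with open("proposals.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        by_type[row["Type"]].append(row)

# Assign references (T1, T2, ... / L1, L2, ... / W1, W2, ...) and
# write one working file per session type.
for session_type, rows in by_type.items():
    prefix = PREFIXES.get(session_type, "X")
    for i, row in enumerate(rows, start=1):
        row["Ref"] = f"{prefix}{i}"
    fieldnames = ["Ref"] + [k for k in rows[0] if k != "Ref"]
    with open(f"proposals_{prefix}.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)
```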
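Once the committee has scored, the yes/no/maybe responses can be tallied per proposal. This sketch continues the assumptions above, with one score column per committee member; the member names are placeholders. It also flags proposals scored by fewer than 4 people.

```python
import csv

SCORE = {"yes": 1, "no": -1, "maybe": 0}
MEMBERS = ["Alice", "Bob", "Carol", "Dan", "Eve"]  # placeholder column names

# Hypothetical input: proposals_T.csv with a "Ref" column plus one
# yes/no/maybe column per committee member.
with open("proposals_T.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        votes = [row[m].strip().lower() for m in MEMBERS if row.get(m, "").strip()]
        total = sum(SCORE.get(v, 0) for v in votes)
        note = "" if len(votes) >= 4 else "  (fewer than 4 scorers)"
        print(f"{row['Ref']}: {total:+d} from {len(votes)} scorers{note}")
```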
Public scoring
For State of the Map 2016 we trialled a public scoring mechanism (in addition to the selection committee). The OSM community was asked to score the talks (i.e. excluding workshops), which appeared in a randomised order and were anonymised. People could score as many or as few as they had time for, and scoring used three levels ("Not Interested", "Slightly Interested" and "Very Interested"). Around 190 people participated and, on average, each talk was scored by 47 people.
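If public scores are collected, they need to be aggregated per talk before the committee can use them. The sketch below shows one possible aggregation, assuming the survey tool exports one row per response as public_scores.csv with "Ref" and "Interest" columns; both names are assumptions, not the export format of any specific tool.

```python
import csv
from collections import defaultdict

LEVELS = {"Not Interested": 0, "Slightly Interested": 1, "Very Interested": 2}

# Hypothetical input: public_scores.csv with one row per response,
# holding the talk reference and the chosen interest level.
responses = defaultdict(list)
with open("public_scores.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        level = LEVELS.get(row["Interest"])
        if level is not None:
            responses[row["Ref"]].append(level)

# Report mean interest and respondent count per talk. The output is
# guidance for the selection committee, not a binding vote.
for ref, scores in sorted(responses.items()):
    print(f"{ref}: mean interest {sum(scores) / len(scores):.2f} ({len(scores)} responses)")
```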
The results of this were not used directly, but did help to guide the selection committee. If this approach is to be repeated then we should review the comments here (restricted access) and seek feedback from others who run a similar process (e.g. FOSS4G).