
Feature Brief

Help the team understand the context behind why we are developing this feature.

This feature will help users understand the grade distribution of courses and, ultimately, help them decide which courses and professors to choose.

I am a user who relies on past data to help me plan my courses. I am trying to choose a professor and a course for my degree plan, but it's tough to decide because there are so many courses and professors to choose from, which leaves me feeling tired and frustrated.

  • To organize course/professor data in a visually appealing manner.

  • To help the user decide on a professor/course using appropriate metrics that are easy to understand (see the sketch after this list for one possible metric).

  • To help the user decide quickly, i.e., in as little time as possible.
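
As a rough illustration of the metrics mentioned above, the sketch below aggregates grade records into a per-professor grade distribution and a single average-GPA number. This is a minimal sketch under assumed data: the record shape, course/professor names, and grade-point mapping are placeholders, not the real data schema, and the final metric is still an open question.

from collections import Counter

# Placeholder grade-point mapping and records; the real data source and schema are TBD.
GRADE_POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

records = [
    {"course": "CSCE 121", "prof": "Smith", "grade": "A"},
    {"course": "CSCE 121", "prof": "Smith", "grade": "B"},
    {"course": "CSCE 121", "prof": "Jones", "grade": "C"},
    {"course": "CSCE 121", "prof": "Jones", "grade": "A"},
]

def grade_distribution(records, course, prof):
    """Count how many of each letter grade a professor gave in a course."""
    grades = [r["grade"] for r in records if r["course"] == course and r["prof"] == prof]
    return Counter(grades)

def average_gpa(distribution):
    """Collapse a grade distribution into a single, easy-to-compare metric."""
    total = sum(distribution.values())
    if total == 0:
        return None
    return sum(GRADE_POINTS[g] * n for g, n in distribution.items()) / total

for prof in ("Smith", "Jones"):
    dist = grade_distribution(records, "CSCE 121", prof)
    print(prof, dict(dist), round(average_gpa(dist), 2))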

If we <achieve/enable X>, then <user behavior Y changes in this way> leading to positive metrics <Z>. Include guesses for the size of the win on specific metrics, using past launches as a baseline.

Tell your use cases in story format, starting before the user encounters your feature and including their thoughts and motivations. Show how the feature fits into the users' lives and has a significant impact.

  • At a high level, what’s included in V1 vs. later versions?

  • How big of a project is this?

  • What’s the rollout/testing plan?

  • Were any alternatives considered?

Include some mocks or a prototype to illustrate the concept. (Add links)

Review Feature Brief before continuing


Feature Proposal

Detailed mocks & feature requirements. You can start by expanding on the scoping section from the brief. Work with your engineers & designer to ensure you’ve gone into enough detail and covered all the cases.

Brainstorm things that could go wrong with your team and partner teams. For each risk, plan appropriate mitigations.

Gather open questions here while the spec is in progress.

  • Useful research such as competitive analysis, metrics, or surveys

  • User testing

  • Customer interviews
