Audience: Internal
Displayed Description:
Page Type: Article
For CSM
- Explain to the customer how parent/child works, and make sure it matches their workflow.
- Get them to provide the exact name of the rejection reason they’ll use to denote “moved to child req” (they’ll likely have to create this) - the engineer needs this name in order to turn the feature on.
- Make sure they know that this rejection reason can be applied to no more than ONE parent app per candidate (any other parent apps must be closed with a different rejection reason, otherwise parent/child won’t work). I.e., you don’t need a different rejection reason for every parent job; you need every candidate to have no more than one app that’s treated as the parent req. If they’re repurposing an existing rejection reason for “moved to child”, they may need to clean up historical data - see the query sketch at the end of this list.
- As of 08/23, all ATSs support parent/child.
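- If historical cleanup is needed, a query along these lines can surface candidates with more than one app closed via the “moved to child” reason. This is only a sketch: the greenhouse_application table and its candidate_id / rejection_reason_id columns are assumptions for illustration (the rejection-reason joins mirror the Eng query below), so verify the real schema before running.
select a.candidate_id, count(*) as parent_apps
from greenhouse_application a
join greenhouse_rejection_reason r on a.rejection_reason_id = r.greenhouse_id
join greenhouse_data_version v on r.version = v.id
where r.team_id = <team_id> and r.name ilike '<name>' and v.is_current
group by a.candidate_id
having count(*) > 1;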
To check if a team has parent_child enabled
- Find the team id (e.g. via the support dashboard)
- Query like so in numeracy
select team_name, use_parent_child_pa_queries from teams where id=<team_id>;
For Eng
- Use this query to look up the ATS ID for the special rejection reason
- GH:
select name, greenhouse_id from greenhouse_rejection_reason r
join greenhouse_data_version v on r.version=v.id
where r.team_id=<team_id> and r.name ilike '<name>' and v.is_current;
- Other ATS:
- (adapt from the GH query - a Lever sketch follows below)
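- For example, a Lever version, assuming the Lever tables follow the same naming convention as the GH ones (a lever_rejection_reason table with a lever_id column, plus lever_data_version) - verify the actual table and column names before running:
select name, lever_id from lever_rejection_reason r
join lever_data_version v on r.version=v.id
where r.team_id=<team_id> and r.name ilike '<name>' and v.is_current;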
- Set Team.ats_app_grouping_rejection_reason_ats_ids to an array of the ATS IDs for the rejection reasons that should be used to indicate a parent app that was closed as “moved to child”
- Usually teams don’t know the ID, but they can give us the name of the rej reason and we can query for it in our ATS DB tables to get the ATS ID.
- Typically this would just be one ID, but if for historical reasons teams want to include more than one, we support that.
- It’s very important that, per candidate, no more than one “parent” application is closed out with this rejection reason; otherwise the data may end up incorrect.
- Set Team.use_parent_child_pa_queries = True
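- A sketch of both settings as a direct SQL update, assuming these are columns on the teams table and that the IDs column is an array type (verify the column name and element type first, and prefer whatever console or admin tooling you normally use for editing Team):
update teams
set ats_app_grouping_rejection_reason_ats_ids = array['<ats_id>'],
    use_parent_child_pa_queries = true
where id = <team_id>;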
- Rebuild all PMAs:
- Let the CSM know you’re doing this, as it will take PA down while it runs
- Run the command below (replace <team_id> with the team’s ID):
heroku run:detached -a zensourcer --size=performance-m -- "yes | python scripts/action/rebuild_pipeline_stats_data.py --team-ids <team_id> --sleep-between-teams --objects apps"
- Make sure things are running fine by tailing the logs (the Heroku CLI will print a run number):
heroku logs --app zensourcer --dyno <run_number> -t
- Follow up on your Slack message with the command you ran so teammates can tail the same logs
- MAKE SURE THE RUN COMPLETES SUCCESSFULLY - otherwise PA will stay down for the team.
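- To double-check that the detached run has finished, you can also list the app’s dynos; once the run dyno completes it will no longer appear:
heroku ps -a zensourcer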