---
tags: [meeting-notes]
---
# conda-forge core meeting 2024-01-24

Add new agenda items under the `Your __new__() agenda items` heading

- [Zoom link](https://zoom.us/j/9138593505?pwd=SWh3dE1IK05LV01Qa0FJZ1ZpMzJLZz09)
- [What time is the meeting in my time zone](https://dateful.com/convert/utc?t=5pm)
- [Last week's meeting](https://hackmd.io/#REPLACE_ME#)

## Attendees

| Name | Initials | GitHub ID | Affiliation |
| ----------------------- | -------- | --------------- | --------------------------- |
| Jaime Rodríguez-Guerra | JRG | jaimergp | Quansight/cf |
| Michael Sarahan | MCS | msarahan | NVIDIA/CF |
| Marius van Niekerk | MvN | mariusvniekerk | Voltron Data/CF |
| Cheng H. Lee | CHL | chenghlee | Anaconda/CF |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |

12 people total

### Standing items

- [ ]

### From previous meeting(s)

- [ ]

### Active votes

- [ ]

### Your `__new__()` agenda items

- [x] (HV) How do we introduce `{{ stdlib("c") }}`?
  - Design [objections](https://github.com/conda/conda-build/issues/5053) were withdrawn, and the functionality is already released in conda-build 3.28.
  - The main question is how to roll out this change broadly. I suggest keeping it tightly scoped to just the C stdlib for now (though linking it to a dedicated [migration](https://github.com/conda-forge/conda-forge.github.io/issues/2015) for `error_overlinking: true` remains an option if we want). See the recipe sketch after this list for what the change looks like in a feedstock.
  - (IF) Thinks we should have a piggyback mini-migrator so we don't have to rebuild all C/C++ packages; only rebuild when we really have to.
  - (MvN) Agrees that we should take a lazy approach to the migration; maybe keep a list of some packages to migrate eagerly.
  - (MvN/IF) We can safely assume that the "world is flat", i.e. dependent packages won't need to be migrated as well.
- [x] (HV) [Migrate](https://github.com/conda-forge/conda-forge-pinning-feedstock/pull/5191) boost 1.84?
  - Recently we've only migrated every fourth boost release; previously it happened more often (every second release). A core member (Uwe) suggested migrating for 1.84; I think it'd be worth doing.
  - After the big refactor with 1.82 (splitting off the header-only library, adding a run export), migrating should be easier nowadays.
  - (IF) We should collect/review data on how long a boost migration takes and use that to judge how often we should update, e.g. if it takes 3 months, then maybe once a year is sensible; if it takes 1 month, then every six months?
- [x] (KE) Can we create a shared sccache store to reduce build redundancy? (See the configuration sketch after this list.)
  - (MvN) The big question is: where do we put the cache?
  - (MCS) Do we have contacts at MSFT or other cloud providers we can talk with?
  - (MvN) `conda-build` behavior complicates caching; e.g., the timestamps in build environment names can leak into cache keys if we're not careful.
  - (IF) When do we actually need sccache? E.g., does building different build numbers vs. [upstream] versions benefit from the cache?
  - (MCS/KE) The opinion at NVIDIA is that RAPIDS can't get onto conda-forge because it would take too long to build; exploring whether sccache would make a conda-forge distribution feasible.
  - (IF) Using a Quansight-hosted builder may be an option.
  - (KK) Building all of RAPIDS is likely a bigger job than PyTorch or TensorFlow. We may also want to consider splitting it into per-device-architecture builds.
  - (MCS) We need a clear motivation to distribute RAPIDS via c-f; we don't want to overload c-f infrastructure otherwise.
  - (KK) It's currently not possible to [easily] depend on cuDF or cuML.
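
A minimal sketch of what the `{{ stdlib("c") }}` change would look like in a compiled feedstock's `meta.yaml`, assuming the tightly scoped rollout discussed above (the concrete stdlib package and version are set in the central pinnings, not in the recipe; the migrator wiring is still to be decided):

```yaml
requirements:
  build:
    # existing compiler activation
    - {{ compiler("c") }}
    - {{ compiler("cxx") }}
    # new in conda-build 3.28: declares the C standard library explicitly
    # (e.g. sysroot/glibc on Linux, the macOS SDK on osx); the actual
    # package and version come from the centrally managed pinnings
    - {{ stdlib("c") }}
```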

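A rough sketch of how a shared sccache store could be wired into a CMake-based build, assuming an S3-compatible bucket as the backend; the bucket name and the CI env-block placement are illustrative only, not an existing conda-forge setting:

```yaml
# Hypothetical CI environment block (all names are placeholders):
env:
  SCCACHE_BUCKET: conda-forge-sccache   # where the shared cache would live (the open question above)
  SCCACHE_REGION: us-east-1
  # Route C/C++ compiler invocations through sccache; CMake (>= 3.17)
  # picks these up as compiler launchers.
  CMAKE_C_COMPILER_LAUNCHER: sccache
  CMAKE_CXX_COMPILER_LAUNCHER: sccache
```

For hits to carry across feedstock rebuilds, the timestamped build-environment prefixes mentioned above would also have to be kept out of the hashed compiler arguments (path normalization or equivalent).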
### Pushed to next meeting

- [ ]

### CFEPs

- [ ]
