Dataset preview. Column schema (dtype, and observed range or number of distinct values):

| Column | Dtype | Range / values |
|---|---|---|
| Unnamed: 0 | int64 | 0 to 832k |
| id | float64 | 2.49B to 32.1B |
| type | string | 1 class (`IssuesEvent`) |
| created_at | string | length 19 |
| repo | string | lengths 7 to 112 |
| repo_url | string | lengths 36 to 141 |
| action | string | 3 classes (`opened`, `closed`, `reopened`) |
| title | string | lengths 1 to 900 |
| labels | string | lengths 4 to 522 |
| body | string | lengths 5 to 218k |
| index | string | 6 classes |
| text_combine | string | lengths 96 to 219k |
| label | string | 2 classes (`architecture`, `non_architecture`) |
| text | string | lengths 96 to 102k |
| binary_label | int64 | 0 to 1 |
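For orientation, here is a minimal sketch of how one might load and inspect an export with this schema using pandas; the file name is hypothetical, and the relationship between `label` and `binary_label` is inferred from the sample rows below:

```python
import pandas as pd

# Hypothetical export path; the preview does not name a file.
df = pd.read_csv("issues_dataset.csv")

# In the sample rows, `binary_label` mirrors `label`:
# architecture -> 1, non_architecture -> 0.
print(df["label"].value_counts())
print(pd.crosstab(df["label"], df["binary_label"]))

# Inspect one labeled record end to end.
sample = df[df["binary_label"] == 1].iloc[0]
print(sample["title"], sample["labels"], sample["body"][:200], sep="\n")
```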
**Row 318,683** · id 23,734,541,985 · IssuesEvent · closed · 2022-08-31 06:50:08
**Repo:** codeing999/CLIPs-backend (https://api.github.com/repos/codeing999/CLIPs-backend)
**Title:** Changing the git collaboration workflow
**Labels:** documentation
**Body:**
Existing branches:
main: the branch that will ultimately be deployed
submain: the branch each of us pushes to regularly whenever work in our own branch is finished without errors
dev/<nickname>: each member's own working branch.
New branches:
main: the deployment branch
develop: the branch each feature is merged into as it is completed during development
feature/<feature-name>, fix/<error>: create a new branch per unit of work, merge it into develop on completion, then delete the branch.
If everyone agrees, I will update the docs to reflect this change as well.
**index:** 1.0 · **label:** non_architecture · **binary_label:** 0
**Row 11,270** · id 14,060,158,658 · IssuesEvent · opened · 2020-11-03 05:16:59
**Repo:** gfx-rs/naga (https://api.github.com/repos/gfx-rs/naga)
**Title:** Typifier -> Classifier
**Labels:** area: processing help wanted kind: feature kind: question
**Body:**
Currently, we have the typifier module that assigns expression types. We need more information derived from the expressions, though. One such bit is the variability of an expression: global, uniform, or local.
We could go at least two ways from here:
1. Introduce an expression visitor, which will be used by the typifier as well as other things, potentially in user space as well.
2. Rename the typifier to "classifier" and make it derive the variability together with the type.
Option (2) seems more straightforward to me, although I do wonder about a way to let users easily process our IR modules. Maybe it's just too early for that, and we'll need to pick the easier solution.
**index:** 1.0 · **label:** non_architecture · **binary_label:** 0
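As a rough illustration of option (2) above: a classifier can return variability alongside the type in a single pass. The sketch below is toy Python (naga itself is Rust), and the expression encoding and variability rules are invented for illustration:

```python
from dataclasses import dataclass
from enum import Enum

class Variability(Enum):
    GLOBAL = "global"
    UNIFORM = "uniform"
    LOCAL = "local"

@dataclass
class Classification:
    type_name: str
    variability: Variability

def classify(expr: dict) -> Classification:
    # Toy rules: constants are uniform, loads of module-level variables
    # are global, and everything else is treated as local.
    if expr["kind"] == "constant":
        return Classification(expr["type"], Variability.UNIFORM)
    if expr["kind"] == "load_global":
        return Classification(expr["type"], Variability.GLOBAL)
    return Classification(expr["type"], Variability.LOCAL)

print(classify({"kind": "constant", "type": "f32"}))
```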
**Row 1,423** · id 5,892,275,610 · IssuesEvent · closed · 2017-05-17 19:04:08
**Repo:** gctools-outilsgc/gcconnex (https://api.github.com/repos/gctools-outilsgc/gcconnex)
**Title:** design prompt for auto suggesting tags / audience on new content creation
**Labels:** enhancement high-level design Information Architecture - Controlled Vocabulary
**Body:**
design mock up for user prompt suggesting tags / audiences for content.
**index:** 1.0 · **label:** architecture · **binary_label:** 1
**Row 6,669** · id 15,014,277,621 · IssuesEvent · closed · 2021-02-01 06:18:02
**Repo:** burespe1/FRAME (https://api.github.com/repos/burespe1/FRAME)
**Title:** Functional View to Physical View
**Labels:** EA Development architecture methodologies automation on hold physical view
**Body:**
Following are the steps to build a physical view.
1)

2)

3)

can we automate this process within EA?
**index:** 1.0 · **label:** architecture · **binary_label:** 1
**Row 7,842** · id 19,649,865,557 · IssuesEvent · closed · 2022-01-10 04:56:42
**Repo:** Vector35/binaryninja-api (https://api.github.com/repos/Vector35/binaryninja-api)
**Title:** arm64 `fcmp` doesn't work with many condition codes
**Labels:** enhancement architecture ARM64 Effort: Low Impact: Medium
**Body:**
**Version and Platform (required):**
- Binary Ninja Version: Version 2.5.3140-dev (Build ID 532595b6)
- OS: macOS
- OS Version: 11.6
**Bug Description:**
[This binary](https://github.com/Vector35/binaryninja-api/files/7790320/fcmp.zip) demonstrates using all 14 ARM condition codes with `fcmp`. Among these:
- `eq`, `ne`, `cs`, and `cc` look good.
- `ge`, `lt`, `gt`, and `le` are decompiled using `unimplemented`, e.g.:
```
000000f0 int64_t test_ge(int32_t arg1 @ v0, int32_t arg2 @ v1)
000000f0 int64_t x0 = 0
000000f4 arg1 f- arg2
000000f4 bool v = unimplemented {fcmp s0, s1}
000000f4 bool n = unimplemented {fcmp s0, s1}
000000f8 if (n == v)
00000100 x0 = 1
000000fc return x0
```
This is unfortunate, since those condition codes are very common.
- `ls` and its inverse `hi` are decompiled correctly but suboptimally. For `ls`:
```
000000d8 int64_t test_ls(float arg1 @ v0, float arg2 @ v1)
000000d8 int64_t x0 = 0
000000dc arg1 - arg2
000000e0 if (arg1 == arg2 || arg1 < arg2)
000000e8 x0 = 1
000000e4 return x0
```
This could be `arg1 <= arg2`.
- `pl` is decompiled incorrectly:
```
00000078 int64_t test_pl(int32_t arg1 @ v0, int32_t arg2 @ v1)
00000078 int64_t x0 = 0
00000080 if (arg1 f- arg2 s>= 0)
00000088 x0 = 1
00000084 return x0
```
In reality it should be `not(arg1 < arg2)` (see the table below), which is not the same as `arg1 - arg2 >= 0`. For example, if one or both arguments is NaN, `not(arg1 < arg2)` is true, but `arg1 - arg2 >= 0` is false.
- `mi` is similarly oddly decompiled as `arg1 f- arg2 s< 0` when it should be `arg1 < arg2`. I can't think of any cases within standard IEEE floating point where these expressions aren't equivalent, but under IEEE floating point with subnormals disabled (common in games), [there are pairs of floats](https://stackoverflow.com/a/54532647) `a`, `b` such that `a < b` but `a - b == 0.0`.
Here is the relevant table from the ARM manual:
<img width="819" alt="image" src="https://user-images.githubusercontent.com/47517/147696978-bd8616b1-e459-4cbd-bdaf-7db3e19bffda.png">
**index:** 1.0 · **label:** architecture · **binary_label:** 1
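The NaN argument in the issue above is easy to check concretely. A short Python verification (illustrative only, unrelated to Binary Ninja's IL) of why `not(arg1 < arg2)` and `arg1 - arg2 >= 0` diverge:

```python
import math

for a, b in [(1.0, 2.0), (2.0, 1.0), (math.nan, 1.0), (math.nan, math.nan)]:
    pl_correct = not (a < b)      # what `pl` should decompile to
    pl_emitted = (a - b) >= 0     # what the decompiler currently emits
    print(f"a={a}, b={b}: not(a<b)={pl_correct}, (a-b)>=0={pl_emitted}")

# With a NaN operand, not(a < b) is True while (a - b) >= 0 is False,
# so the two expressions are not equivalent.
```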
**Row 4,840** · id 11,757,762,280 · IssuesEvent · closed · 2020-03-13 14:16:01
**Repo:** kubernetes/kubernetes (https://api.github.com/repos/kubernetes/kubernetes)
**Title:** Investigate Transitive Deps from docker/libnetwork
**Labels:** area/code-organization area/dependency kind/feature lifecycle/rotten sig/architecture sig/network
**Body:**
**What would you like to be added**:
github.com/docker/libnetwork has a lot of transitive deps, but we only use its (relatively small) `ipvs` package for kube-proxy. Luckily, because it doesn't use go modules yet, updating it doesn't actually update its transitive deps, but if/when it does, managing its transitive deps will be a pain because they overlap with those of our other dependencies.
From doing a quick search of the Kubernetes repo, we only use the `ipvs` package from github.com/docker/libnetwork. Some options going forward would be:
* fork that repo with only the `ipvs` package
* copy the `ipvs` package to k8s.io/kubernetes
* ask the docker maintainers to put the ipvs package into a separate repo
* ???
Open to other options I haven't considered yet.
**Why is this needed**:
Will significantly improve the maintainability of our dependencies once github.com/docker/libnetwork uses go modules.
**index:** 1.0 · **label:** architecture · **binary_label:** 1
**Row 134,670** · id 30,113,591,798 · IssuesEvent · opened · 2023-06-30 09:40:04
**Repo:** FerretDB/FerretDB (https://api.github.com/repos/FerretDB/FerretDB)
**Title:** Implement `$tsSecond` timestamp expression operator
**Labels:** code/feature not ready area/aggregations
**Body:**
### What should be done?
It should be supported in all pipeline stages that support raw expressions and other pipelines that allow the `$expr` operator.
* https://www.mongodb.com/docs/manual/reference/operator/aggregation/tsSecond/#mongodb-expression-exp.-tsSecond
**index:** 1.0 · **label:** non_architecture · **binary_label:** 0
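For reference, `$tsSecond` returns the seconds portion of a BSON timestamp (see the MongoDB docs linked above). A minimal pymongo aggregation sketch; the connection string, database, collection, and field names are all hypothetical:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
events = client["test"]["events"]  # hypothetical collection

# Project the seconds component of the BSON timestamp field `ts`.
pipeline = [{"$project": {"tsSeconds": {"$tsSecond": "$ts"}}}]
for doc in events.aggregate(pipeline):
    print(doc)
```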
**Row 10,997** · id 27,734,774,944 · IssuesEvent · opened · 2023-03-15 10:28:09
**Repo:** OasisLMF/OasisPlatform (https://api.github.com/repos/OasisLMF/OasisPlatform)
**Title:** Fix Helm customization Readme
**Labels:** bug Documentation scalable architecture
**Body:**
## Issue Description
> **carlfischerjba:** Helm appears to be combining the default values files from `OasisPlatform/kubernetes/charts/oasis-models/values.yaml` with the file I specify on the command line.
>
> Apparently, the `workers` (a mapping) get merged so we have the default `piwind-demo` as well as the New workers I've defined, but the `modelVolumes` (a sequence) are overwritten so we only have the New volumes and not `piwind-model-data-pv`, this leads to the error.
>
> This means [the methods recommended in the readme](https://github.com/OasisLMF/OasisPlatform/blob/6dd90eb3ced94e48464de158af88417de3b49b9a/kubernetes/charts/README.md?plain=1#L242) don't work. I guess it's not been spotted until now because everyone has kept the PiWind model in place. Trouble starts once you decide you don't need it. Merging values vs maps vs lists is a problem with docker-compose.yml and other types of config files too, including json. It's surprising there's no way to tell Helm to ignore the defaults. Without such an option, the ways to keep everything working are not very satisfactory:
> * edit the values files in place
> * rename or delete the default values files
> * or copy the template as suggested but only add to it, never remove anything
>
> Another alternative would be for you to rename models/values.yaml to models/values_sample.yaml so it's ignored by Helm and update the instructions in the readme accordingly.
>
> The same could occur with the platform and monitoring charts but I guess that's less likely because in general a few values are modified and override the defaults instead of adding extra items to mappings or sequences (are those the correct YAML terms?).
At a minimum the documentation instructions should be updated to note/fix this problem
**index:** 1.0 · **label:** architecture · **binary_label:** 1
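The map-merge versus list-overwrite behaviour described in that issue can be modelled in a few lines. A simplified Python sketch of Helm-style value merging (not Helm's actual code; the chart values are taken from the issue):

```python
def deep_merge(base, override):
    # Mappings merge key by key; every other value type, including
    # sequences, is replaced wholesale by the override.
    if isinstance(base, dict) and isinstance(override, dict):
        merged = dict(base)
        for key, value in override.items():
            merged[key] = deep_merge(merged[key], value) if key in merged else value
        return merged
    return override

defaults = {"workers": {"piwind-demo": {}}, "modelVolumes": ["piwind-model-data-pv"]}
user = {"workers": {"my-model": {}}, "modelVolumes": ["my-volume"]}
print(deep_merge(defaults, user))
# workers keeps piwind-demo AND my-model, but modelVolumes becomes
# just ["my-volume"]: the default volume silently disappears.
```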
**Row 10,129** · id 26,364,651,544 · IssuesEvent · opened · 2023-01-11 15:43:36
**Repo:** mehab/DTKafkaPOC (https://api.github.com/repos/mehab/DTKafkaPOC)
**Title:** Add architecture diagrams
**Labels:** documentation 📃 architecture 🔮
**Body:**
In order for us (and others) to better understand what we're building here, we should have architecture diagrams.
Preferably there should be multiple "resolutions" from a high-level overview to individual services. The topology diagrams we can generate using Kafka Streams will be helpful for the latter.
**index:** 1.0 · **label:** architecture · **binary_label:** 1
**Row 445,871** · id 12,837,462,604 · IssuesEvent · closed · 2020-07-07 15:49:31
**Repo:** code-ready/crc (https://api.github.com/repos/code-ready/crc)
**Title:** Add 'Experimental' messages when `podman-env` command is used.
**Labels:** priority/critical status/stale
**Body:**
We need to add output to the `podman-env` command, as for the time being no changes will happen to this functionality
Note: adding an `echo`, so when `podman-env` is `eval`-ed it would still show
**index:** 1.0 · **label:** non_architecture · **binary_label:** 0
**Row 4,114** · id 10,584,831,316 · IssuesEvent · closed · 2019-10-08 16:11:07
**Repo:** fga-eps-mds/2019.2-Over26 (https://api.github.com/repos/fga-eps-mds/2019.2-Over26)
**Title:** Draft the Quality Plan
**Labels:** Architecture Documentation EPS
**Body:**
## Change Description *
<!--- Provide a general summary of the _issue_ -->
Create the first version of the quality plan for the project.
## Checklist *
<!-- This checklist helps ensure a well-formed issue -->
<!-- If the issue concerns a user story, its name should be "USXX - Story name" -->
<!-- If the issue concerns a bug, its name should be "BF - Short bug name" -->
<!-- If the issue concerns another task, the name should be a simple description of the task -->
- [x] This issue has a meaningful name.
- [x] The issue name follows the standard.
- [x] This issue has an easy-to-understand description.
- [x] This issue has well-defined acceptance criteria.
- [x] This issue has associated labels.
- [ ] This issue is associated with a milestone.
- [ ] This issue has an estimated score.
## Tasks *
<!-- Add here the tasks needed to complete the issue -->
- [ ] Create the quality plan
## Acceptance Criteria *
<!-- List here the aspects required to consider the activity complete -->
<!-- Items will be added by the Product Owner -->
- [ ] The first version of the quality plan must be drafted
**index:** 1.0 · **label:** architecture · **binary_label:** 1
**Row 1,694** · id 6,553,962,218 · IssuesEvent · opened · 2017-09-06 02:15:13
**Repo:** City-Bureau/documenters-aggregator (https://api.github.com/repos/City-Bureau/documenters-aggregator)
**Title:** What geocoder service should we use?
**Labels:** architecture: spiders priority: high (must have)
**Body:**
See https://github.com/City-Bureau/documenters-aggregator/pull/85#issuecomment-327325384
Leaning towards Mapbox for now and will use to close #85.
**index:** 1.0 · **label:** architecture · **binary_label:** 1
**Row 2,097** · id 7,276,508,762 · IssuesEvent · closed · 2018-02-21 16:34:13
**Repo:** AnalyticalGraphicsInc/cesium (https://api.github.com/repos/AnalyticalGraphicsInc/cesium)
**Title:** CesiumMath vs Math naming ambiguity
**Labels:** category - architecture / api category - doc
**Body:**
It's not clear that the `CesiumMath` class is included in the namespace as `Cesium.Math`. This is also inconsistent with other classes that have the Cesium prefix, like `Cesium3DTileset`. If this is not something we want to change in the API, this should be made clear in the documentation.
Relevant forum thread: https://groups.google.com/forum/#!topic/cesium-dev/icpMxc_bea8
**index:** 1.0 · **label:** architecture · **binary_label:** 1
**Row 179,282** · id 21,557,595,193 · IssuesEvent · closed · 2022-04-30 17:37:48
**Repo:** NixOS/nixpkgs (https://api.github.com/repos/NixOS/nixpkgs)
**Title:** Vulnerability roundup 113: ffmpeg-5.0.1: 1 advisory [7.5]
**Labels:** 1.severity: security
**Body:**
[search](https://search.nix.gsc.io/?q=ffmpeg&i=fosho&repos=NixOS-nixpkgs), [files](https://github.com/NixOS/nixpkgs/search?utf8=%E2%9C%93&q=ffmpeg+in%3Apath&type=Code)
* [ ] [CVE-2021-38291](https://nvd.nist.gov/vuln/detail/CVE-2021-38291) CVSSv3=7.5 (nixos-unstable)
## CVE details
### CVE-2021-38291
FFmpeg version (git commit de8e6e67e7523e48bb27ac224a0b446df05e1640) suffers from an assertion failure at src/libavutil/mathematics.c.
-----
Scanned versions: nixos-unstable: ff9efb0724d.
Cc @codyopel
**index:** True · **label:** non_architecture · **binary_label:** 0
**Row 4,072** · id 10,552,476,500 · IssuesEvent · closed · 2019-10-03 15:14:11
**Repo:** dotnet/docs (https://api.github.com/repos/dotnet/docs)
**Title:** Multiple IHostedService registration
**Labels:** :book: guide - .NET Microservices :books: Area - .NET Architecture Guide Source - Docs.ms
**Body:**
If I try to register two or more services, only one works properly.
For example:
```
services.AddSingleton<IHostedService, ServiceA>();
services.AddSingleton<IHostedService, ServiceB>();
```
The implementations are as simple as possible:
```
public class ServiceA: IHostedService
{
public Task StartAsync(CancellationToken cancellationToken)
{
DoWork();
return Task.CompletedTask;
}
public Task StopAsync(CancellationToken cancellationToken)
{
return Task.CompletedTask;
}
private void DoWork()
{
while (true)
{
Console.WriteLine("ServiceA");
Thread.Sleep(2000);
}
}
}
```
and
```
public class ServiceB: IHostedService
{
public Task StartAsync(CancellationToken cancellationToken)
{
DoWork();
return Task.CompletedTask;
}
public Task StopAsync(CancellationToken cancellationToken)
{
return Task.CompletedTask;
}
private void DoWork()
{
while (true)
{
Console.WriteLine("ServiceB");
Thread.Sleep(1000);
}
}
}
```
In output getting messages only from ServiceA
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d49a03c0-a844-26eb-48a5-33a612dd3ead
* Version Independent ID: 0707865f-9db7-0d71-42a5-bc1a1e89680a
* Content: [Implement background tasks in microservices with IHostedService and the BackgroundService class](https://docs.microsoft.com/en-us/dotnet/architecture/microservices/multi-container-microservice-net-applications/background-tasks-with-ihostedservice#feedback)
* Content Source: [docs/architecture/microservices/multi-container-microservice-net-applications/background-tasks-with-ihostedservice.md](https://github.com/dotnet/docs/blob/master/docs/architecture/microservices/multi-container-microservice-net-applications/background-tasks-with-ihostedservice.md)
* Product: **dotnet**
* Technology: **dotnet-ebooks**
* GitHub Login: @nishanil
* Microsoft Alias: **nanil**
**index:** 1.0 · **label:** architecture · **binary_label:** 1
**Row 255,598** · id 21,939,757,136 · IssuesEvent · closed · 2022-05-23 16:49:11
**Repo:** ooni/probe (https://api.github.com/repos/ooni/probe)
**Title:** oonimkall: setRunType: cannot find symbol
**Labels:** bug testing ooni/probe-mobile priority/high platform/android platform/ios ooni/probe-engine
**Body:**
Linking with the oonimkall.aar engine fails with this error:
```
> Task :engine:compileExperimentalReleaseJavaWithJavac FAILED
$monorepo/repo/probe-android/engine/src/main/java/org/openobservatory/engine/OONICheckInConfig.java:67: error: cannot find symbol
c.setRunType(runType);
^
symbol: method setRunType(String)
location: variable c of type CheckInConfig
1 error
FAILURE: Build failed with an exception.
```
We need to fix this error in order to release 3.15.0.
(cc: @hellais @aanorbel)
**index:** 1.0 · **label:** non_architecture · **binary_label:** 0
**Row 1,559** · id 6,335,238,819 · IssuesEvent · reopened · 2017-07-26 18:24:54
**Repo:** LearnersGuild/echo (https://api.github.com/repos/LearnersGuild/echo)
**Title:** Move changefeed listeners to web service
**Labels:** architecture chore
**Body:**
Currently, the workers set up listening to db changefeeds and, effectively, queue items for their _own_ work queues. This is problematic because if changes to the database occur that should be handled by a worker while the worker is not running, that change event is lost and the job is never processed. Instead of connecting the changefeed listeners in the worker process, we should do it in the standing web service. This also makes it easier to switch to alternative mechanisms for background task execution (instead of having always-running worker dynos).
**index:** 1.0 · **label:** architecture · **binary_label:** 1
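A bare-bones Python sketch of the pattern proposed above: the always-on service reacts to change events only by enqueuing jobs, and workers drain the queue independently. The in-memory `queue.Queue` stands in for a persistent work queue, and all names are invented:

```python
import queue
import threading

job_queue: "queue.Queue[dict]" = queue.Queue()  # stand-in for a persistent queue

def on_change(change: dict) -> None:
    # Runs in the standing web service: enqueue, never process inline,
    # so a stopped worker cannot lose the change event.
    job_queue.put({"type": "process_change", "payload": change})

def worker_loop() -> None:
    while True:
        job = job_queue.get()
        print("processing", job["payload"])
        job_queue.task_done()

threading.Thread(target=worker_loop, daemon=True).start()
on_change({"table": "surveys", "id": 42})
job_queue.join()
```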
**Row 796,666** · id 28,123,191,979 · IssuesEvent · closed · 2023-03-31 15:32:50
**Repo:** thetrevorharmon/thetrevorharmon.com (https://api.github.com/repos/thetrevorharmon/thetrevorharmon.com)
**Title:** Add landing page for mailing list
**Labels:** enhancement low priority
**Body:**
Now that there is a mailing list on the site, it would be good to have a landing page for the signup. Something like `/signup` with a simple form and a nice explanation of what the signup gets them.
**index:** 1.0 · **label:** non_architecture · **binary_label:** 0
**Row 5,105** · id 12,098,281,050 · IssuesEvent · closed · 2020-04-20 10:02:29
**Repo:** stsrki/Blazorise (https://api.github.com/repos/stsrki/Blazorise)
**Title:** Unit testing of components
**Labels:** Status: Investigate Type: Architecture
**Body:**
Investigate more about the new unit testing made by Steve Sanderson, after the release of Blazor preview 9.
Sources:
http://blog.stevensanderson.com/2019/08/29/blazor-unit-testing-prototype/
https://github.com/SteveSandersonMS/BlazorUnitTestingPrototype
**index:** 1.0 · **label:** architecture · **binary_label:** 1
**Row 357,018** · id 10,600,740,255 · IssuesEvent · opened · 2019-10-10 10:43:48
**Repo:** robotology/whole-body-controllers (https://api.github.com/repos/robotology/whole-body-controllers)
**Title:** Investigate if it makes sense to port matlab-multi-body-sim in wbc
**Labels:** feature priority: normal
**Body:**
I would like to port `matlab-multi-body-sim` into wbc, but first I need to understand the ratio between effort and benefit.
**index:** 1.0 · **label:** non_architecture · **binary_label:** 0
**Row 6,035** · id 13,541,185,291 · IssuesEvent · closed · 2020-09-16 15:33:34
**Repo:** MicrosoftDocs/architecture-center (https://api.github.com/repos/MicrosoftDocs/architecture-center)
**Title:** Naming conventions of Icons
**Labels:** Pri2 architecture-center/svc assigned-to-author doc-enhancement triaged
**Body:**
We are doing some work with ARM templates and attempting to use the latest icon sets.
The problem we are seeing is that there is no consistent way in which the icons are named that maps back to templates or schemas.
Please could this be considered.
E.g. template schemas refer to virtual machine scale sets as 'virtualMachineScaleSets', but the icon is buried in the Compute directory called "10034-icon-service-VM-Scalte-Sets".
How are we meant to quickly / automatically map between the two? Previously the naming convention did link back to the resource types.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 6624d44c-f9f6-02f8-2c67-0f0637fed62e
* Version Independent ID: 9346a8ff-bab5-be2c-9dc1-d546fda1efa9
* Content: [Azure Icons - Azure Architecture Center](https://docs.microsoft.com/en-us/azure/architecture/icons/)
* Content Source: [docs/icons/index.md](https://github.com/microsoftdocs/architecture-center/blob/master/docs/icons/index.md)
* Service: **architecture-center**
* GitHub Login: @doodlemania2
* Microsoft Alias: **pnp**
**index:** 1.0 · **label:** architecture · **binary_label:** 1
**Row 42,411** · id 5,444,055,594 · IssuesEvent · closed · 2017-03-07 01:15:24
**Repo:** dotnet/roslyn (https://api.github.com/repos/dotnet/roslyn)
**Title:** Alternative approach to introduction of variables into scope.
**Labels:** Area-Language Design Discussion
**Body:**
There have been a few raised voices about the current and suggested approaches to how the new language features introduce variables into scope. Here is my considered approach.
---
**Default to nearest outer scope**
The variable being introduced should go to the **nearest outer scope** by default.
``` c#
if( !(o is int i) )
{
/* i is in scope and definitely not assigned */
throw new ArgumentException("Not an int", nameof(o));
}
/* i is in scope and definitely assigned. */
```
It simplifies one of the initial use cases, and I think the most likely usage.
``` c#
int value;
if( int.TryParse( text , out value )
```
into
``` c#
if( int.TryParse( text, out int value )
```
---
It is the more eccentric usage of localizing to the nearest inner scope (of this particular code block), e.g. the `then` and `else` blocks, that is the cause of tension for the community, especially around where it is introduced in "patterns". I propose that in these cases the change from the default (nearest outer) should be made explicit and require the coder to specify a change to the **nearest inner scope**.
For example a lightweight approach to this is by prefixing `~` on the variable identifier.
``` c#
if (o is int ~i)
{
/* i is scope and definitely assigned */
}
/* i is not in scope. /*
```
``` c#
if( !(o is int ~i) )
{
/* i is in scope and definitely not assigned */
}
else
{
/* I is scope and definitely assigned */
}
/* i is not in scope. /*
```
---
Then there are the case where the coder would like the variable introduced in one of nearest blocks and not the other. Use case: Guards.
``` c#
if( !int.TryParse( text, value ) )
{
/* value not in scope. */
}
else
{
/* value is in scope and assigned */
}
```
In pattern I propose we borrow from VB.net and use `IsNot` to indicate the negation of the pattern.
``` c#
if (o isnot int ~i)
{
/* i is scope not assigned a new value*/
/* also is an error as i is being reused for a variable declaration */
}
else
{
/* i is scope and definitely assigned a value.*/
/* also is an error as i is being reused for a variable declaration */
}
/* i is in scope. /*
```
This should be easier to read ( ie not missing the easily missed `!` at the start. ), and allow us to know that the usage of variable introduction could be different.
---
**When the variable identifier prior exists**
What about the case where the variable already exist prior the usage?
With the default ( nearest outer scope) the variable is reused.
``` c#
int i;
...
if( !(o is int i) )
{
/* i is in scope and definitely maybe assigned */
throw new ArgumentException("Not an int", nameof(o));
}
/* i is in scope and definitely assigned. */
```
And in the nearest inner scope the variable cannot be reused, and thus produces a compile-time error,
forcing the coder to make an explicit choice: either use the outer scope or use a different variable identifier.
``` c#
int i;
...
if (o is int ~i)
{
/* i is scope and definitely assigned */
/* also is and error as i is be reused for a variable declaration */
}
/* i is in scope. /*
```
``` c#
int i;
...
if (o isnot int ~i)
{
/* i is scope not assigned a new value*/
/* also is an error as i is being reused for a variable declaration */
}
else
{
/* i is scope and definitely assigned a value.*/
/* also is an error as i is being reused for a variable declaration */
}
/* i is in scope. /*
```
if the type of `i` is incompatible with the one in the pattern, it is an error in all cases.
**index:** 1.0 · **label:** non_architecture · **binary_label:** 0
**Row 9,595** · id 24,873,438,277 · IssuesEvent · opened · 2022-10-27 16:59:01
**Repo:** Azure/azure-sdk (https://api.github.com/repos/Azure/azure-sdk)
**Title:** Board Review: <client library name>
**Labels:** architecture board-review
**Body:**
Thank you for submitting this review request. Thorough review of your client library ensures that your APIs are consistent with the guidelines and the consumers of your client library have a consistently good experience when using Azure.
**The Architecture Board reviews [Track 2 libraries](https://azure.github.io/azure-sdk/general_introduction.html) only.** If your library does not meet this requirement, please reach out to [Architecture Board](adparch@microsoft.com) before creating the issue.
Please reference our [review process guidelines](https://azure.github.io/azure-sdk/policies_reviewprocess.html) to understand what is being asked for in the issue template.
To ensure consistency, all Tier-1 languages (C#, TypeScript, Java, Python) will generally be reviewed together. In expansive libraries, we will pair dynamic languages (Python, TypeScript) together, and strongly typed languages (C#, Java) together in separate meetings.
For Tier-2 languages (C, C++, Go, Android, iOS), the review will be on an as-needed basis.
**Before submitting, ensure you adjust the title of the issue appropriately.**
**Note that the required material must be included before a meeting can be scheduled.**
## Contacts and Timeline
* Responsible service team: Liftr Nginx
* Main contacts: @SpencerOfwiti @limingu
* Expected code complete date: Not Applicable
* Expected release date:
## About the Service
* Link to documentation introducing/describing the service: https://learn.microsoft.com/en-us/azure/partner-solutions/nginx/
* Link to the service REST APIs: https://github.com/Azure/azure-rest-api-specs/tree/main/specification/nginx/resource-manager/NGINX.NGINXPLUS/stable/2022-08-01
* Link to GitHub issue for previous review sessions, if applicable:
## About the client library
* Name of the client library:
* Languages for this review:
The SDKs are autogenerated from the swagger, this review is only for namespace approval.
## Thank you!
**index:** 1.0 · **label:** architecture · **binary_label:** 1
|
213,119
| 23,966,109,535
|
IssuesEvent
|
2022-09-13 01:12:33
|
DavidSpek/kubeflow
|
https://api.github.com/repos/DavidSpek/kubeflow
|
opened
|
CVE-2022-36083 (Medium) detected in jose-2.0.5.tgz
|
security vulnerability
|
## CVE-2022-36083 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jose-2.0.5.tgz</b></p></summary>
<p>JSON Web Almost Everything - JWA, JWS, JWE, JWK, JWT, JWKS for Node.js with minimal dependencies</p>
<p>Library home page: <a href="https://registry.npmjs.org/jose/-/jose-2.0.5.tgz">https://registry.npmjs.org/jose/-/jose-2.0.5.tgz</a></p>
<p>Path to dependency file: /components/crud-web-apps/volumes/frontend/package.json</p>
<p>Path to vulnerable library: /components/crud-web-apps/volumes/frontend/node_modules/jose/package.json,/components/crud-web-apps/jupyter/frontend/node_modules/jose/package.json,/components/crud-web-apps/common/frontend/kubeflow-common-lib/node_modules/jose/package.json,/components/crud-web-apps/tensorboards/frontend/node_modules/jose/package.json</p>
<p>
Dependency Hierarchy:
- client-node-0.12.2.tgz (Root Library)
- openid-client-4.2.2.tgz
- :x: **jose-2.0.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/DavidSpek/kubeflow/commit/00cbc9d11a3306fed1e979d79dff6ae36749d4bd">00cbc9d11a3306fed1e979d79dff6ae36749d4bd</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
JOSE is "JSON Web Almost Everything" - JWA, JWS, JWE, JWT, JWK, JWKS with no dependencies using runtime's native crypto in Node.js, Browser, Cloudflare Workers, Electron, and Deno. The PBKDF2-based JWE key management algorithms expect a JOSE Header Parameter named `p2c` PBES2 Count, which determines how many PBKDF2 iterations must be executed in order to derive a CEK wrapping key. The purpose of this parameter is to intentionally slow down the key derivation function in order to make password brute-force and dictionary attacks more expensive. This makes the PBES2 algorithms unsuitable for situations where the JWE is coming from an untrusted source: an adversary can intentionally pick an extremely high PBES2 Count value, that will initiate a CPU-bound computation that may take an unreasonable amount of time to finish. Under certain conditions, it is possible to have the user's environment consume unreasonable amount of CPU time. The impact is limited only to users utilizing the JWE decryption APIs with symmetric secrets to decrypt JWEs from untrusted parties who do not limit the accepted JWE Key Management Algorithms (`alg` Header Parameter) using the `keyManagementAlgorithms` (or `algorithms` in v1.x) decryption option or through other means. The `v1.28.2`, `v2.0.6`, `v3.20.4`, and `v4.9.2` releases limit the maximum PBKDF2 iteration count to `10000` by default. It is possible to adjust this limit with a newly introduced `maxPBES2Count` decryption option. If users are unable to upgrade their required library version, they have two options depending on whether they expect to receive JWEs using any of the three PBKDF2-based JWE key management algorithms. They can use the `keyManagementAlgorithms` decryption option to disable accepting PBKDF2 altogether, or they can inspect the JOSE Header prior to using the decryption API and limit the PBKDF2 iteration count (`p2c` Header Parameter).
<p>Publish Date: 2022-09-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-36083>CVE-2022-36083</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/panva/jose/security/advisories/GHSA-jv3g-j58f-9mq9">https://github.com/panva/jose/security/advisories/GHSA-jv3g-j58f-9mq9</a></p>
<p>Release Date: 2022-09-07</p>
<p>Fix Resolution (jose): 2.0.6</p>
<p>Direct dependency fix Resolution (@kubernetes/client-node): 0.12.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
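For teams that cannot take the upgrade immediately, the advisory's workaround is straightforward to apply at the call site. Below is a minimal TypeScript sketch against the current jose (v4.x) API rather than the 2.x line pinned here; both option names are quoted from the advisory above, and the token and secret values are placeholders.
```
// Decrypting a JWE that arrives from an untrusted party.
import { jwtDecrypt } from "jose";

// Placeholder 256-bit secret; real key material would come from configuration.
const secret = new TextEncoder().encode("0123456789abcdef0123456789abcdef");

export async function decryptUntrusted(token: string) {
  return jwtDecrypt(token, secret, {
    // Refuse the PBKDF2-based (PBES2) key management algorithms outright...
    keyManagementAlgorithms: ["dir", "A256GCMKW"],
    // ...and, on v4.9.2 or later, also cap the p2c iteration count.
    maxPBES2Count: 10_000,
  });
}
```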
|
True
|
CVE-2022-36083 (Medium) detected in jose-2.0.5.tgz - ## CVE-2022-36083 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jose-2.0.5.tgz</b></p></summary>
<p>JSON Web Almost Everything - JWA, JWS, JWE, JWK, JWT, JWKS for Node.js with minimal dependencies</p>
<p>Library home page: <a href="https://registry.npmjs.org/jose/-/jose-2.0.5.tgz">https://registry.npmjs.org/jose/-/jose-2.0.5.tgz</a></p>
<p>Path to dependency file: /components/crud-web-apps/volumes/frontend/package.json</p>
<p>Path to vulnerable library: /components/crud-web-apps/volumes/frontend/node_modules/jose/package.json,/components/crud-web-apps/jupyter/frontend/node_modules/jose/package.json,/components/crud-web-apps/common/frontend/kubeflow-common-lib/node_modules/jose/package.json,/components/crud-web-apps/tensorboards/frontend/node_modules/jose/package.json</p>
<p>
Dependency Hierarchy:
- client-node-0.12.2.tgz (Root Library)
- openid-client-4.2.2.tgz
- :x: **jose-2.0.5.tgz** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/DavidSpek/kubeflow/commit/00cbc9d11a3306fed1e979d79dff6ae36749d4bd">00cbc9d11a3306fed1e979d79dff6ae36749d4bd</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
JOSE is "JSON Web Almost Everything" - JWA, JWS, JWE, JWT, JWK, JWKS with no dependencies using runtime's native crypto in Node.js, Browser, Cloudflare Workers, Electron, and Deno. The PBKDF2-based JWE key management algorithms expect a JOSE Header Parameter named `p2c` PBES2 Count, which determines how many PBKDF2 iterations must be executed in order to derive a CEK wrapping key. The purpose of this parameter is to intentionally slow down the key derivation function in order to make password brute-force and dictionary attacks more expensive. This makes the PBES2 algorithms unsuitable for situations where the JWE is coming from an untrusted source: an adversary can intentionally pick an extremely high PBES2 Count value, that will initiate a CPU-bound computation that may take an unreasonable amount of time to finish. Under certain conditions, it is possible to have the user's environment consume unreasonable amount of CPU time. The impact is limited only to users utilizing the JWE decryption APIs with symmetric secrets to decrypt JWEs from untrusted parties who do not limit the accepted JWE Key Management Algorithms (`alg` Header Parameter) using the `keyManagementAlgorithms` (or `algorithms` in v1.x) decryption option or through other means. The `v1.28.2`, `v2.0.6`, `v3.20.4`, and `v4.9.2` releases limit the maximum PBKDF2 iteration count to `10000` by default. It is possible to adjust this limit with a newly introduced `maxPBES2Count` decryption option. If users are unable to upgrade their required library version, they have two options depending on whether they expect to receive JWEs using any of the three PBKDF2-based JWE key management algorithms. They can use the `keyManagementAlgorithms` decryption option to disable accepting PBKDF2 altogether, or they can inspect the JOSE Header prior to using the decryption API and limit the PBKDF2 iteration count (`p2c` Header Parameter).
<p>Publish Date: 2022-09-07
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-36083>CVE-2022-36083</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.3</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: Low
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://github.com/panva/jose/security/advisories/GHSA-jv3g-j58f-9mq9">https://github.com/panva/jose/security/advisories/GHSA-jv3g-j58f-9mq9</a></p>
<p>Release Date: 2022-09-07</p>
<p>Fix Resolution (jose): 2.0.6</p>
<p>Direct dependency fix Resolution (@kubernetes/client-node): 0.12.3</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_architecture
|
cve medium detected in jose tgz cve medium severity vulnerability vulnerable library jose tgz json web almost everything jwa jws jwe jwk jwt jwks for node js with minimal dependencies library home page a href path to dependency file components crud web apps volumes frontend package json path to vulnerable library components crud web apps volumes frontend node modules jose package json components crud web apps jupyter frontend node modules jose package json components crud web apps common frontend kubeflow common lib node modules jose package json components crud web apps tensorboards frontend node modules jose package json dependency hierarchy client node tgz root library openid client tgz x jose tgz vulnerable library found in head commit a href found in base branch master vulnerability details jose is json web almost everything jwa jws jwe jwt jwk jwks with no dependencies using runtime s native crypto in node js browser cloudflare workers electron and deno the based jwe key management algorithms expect a jose header parameter named count which determines how many iterations must be executed in order to derive a cek wrapping key the purpose of this parameter is to intentionally slow down the key derivation function in order to make password brute force and dictionary attacks more expensive this makes the algorithms unsuitable for situations where the jwe is coming from an untrusted source an adversary can intentionally pick an extremely high count value that will initiate a cpu bound computation that may take an unreasonable amount of time to finish under certain conditions it is possible to have the user s environment consume unreasonable amount of cpu time the impact is limited only to users utilizing the jwe decryption apis with symmetric secrets to decrypt jwes from untrusted parties who do not limit the accepted jwe key management algorithms alg header parameter using the keymanagementalgorithms or algorithms in x decryption option or through other means the and releases limit the maximum iteration count to by default it is possible to adjust this limit with a newly introduced decryption option if users are unable to upgrade their required library version they have two options depending on whether they expect to receive jwes using any of the three based jwe key management algorithms they can use the keymanagementalgorithms decryption option to disable accepting altogether or they can inspect the jose header prior to using the decryption api and limit the iteration count header parameter publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jose direct dependency fix resolution kubernetes client node step up your open source security game with mend
| 0
|
221,426
| 24,630,302,413
|
IssuesEvent
|
2022-10-17 01:00:10
|
MendDemo-josh/moby
|
https://api.github.com/repos/MendDemo-josh/moby
|
closed
|
libiberty9.1.0: 1 vulnerabilities (highest severity is: 7.5) - autoclosed
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>libiberty9.1.0</b></p></summary>
<p>
<p>Library home page: <a href=https://ftp.gnu.org/pub/gnu/gcc/gcc-9.1.0/?wsslib=libiberty>https://ftp.gnu.org/pub/gnu/gcc/gcc-9.1.0/?wsslib=libiberty</a></p>
</p>
</p></p>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
<p></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2022-2879](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-2879) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | libiberty9.1.0 | Direct | go1.18.7,go1.19.2 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-2879</summary>
### Vulnerable Library - <b>libiberty9.1.0</b></p>
<p>
<p>Library home page: <a href=https://ftp.gnu.org/pub/gnu/gcc/gcc-9.1.0/?wsslib=libiberty>https://ftp.gnu.org/pub/gnu/gcc/gcc-9.1.0/?wsslib=libiberty</a></p>
<p>Found in base branch: <b>master</b></p></p>
</p></p>
### Vulnerable Source Files (1)
<p></p>
<p>
</p>
<p></p>
</p>
<p></p>
### Vulnerability Details
<p>
Reader.Read does not set a limit on the maximum size of file headers. A maliciously crafted archive could cause Read to allocate unbounded amounts of memory, potentially causing resource exhaustion or panics. After fix, Reader.Read limits the maximum size of header blocks to 1 MiB.
<p>Publish Date: 2022-10-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-2879>CVE-2022-2879</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://pkg.go.dev/vuln/GO-2022-1037">https://pkg.go.dev/vuln/GO-2022-1037</a></p>
<p>Release Date: 2022-10-14</p>
<p>Fix Resolution: go1.18.7,go1.19.2</p>
</p>
<p></p>
</details>
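The defect pattern here, attacker-controlled header sizes driving unbounded allocation, generalizes beyond Go's archive/tar. As a hypothetical TypeScript sketch of the fixed behaviour, with `readBlock` and the constants assumed rather than taken from any real tar library:
```
// Illustrative only: the upstream Go fix bounds accumulated tar header
// bytes at 1 MiB. This mirrors that shape; it is not a real tar parser.
const MAX_HEADER_BYTES = 1 << 20; // 1 MiB, per the advisory
const BLOCK_SIZE = 512;           // tar I/O happens in 512-byte blocks

function collectExtendedHeader(readBlock: () => Uint8Array | null): Uint8Array {
  const blocks: Uint8Array[] = [];
  let total = 0;
  for (let block = readBlock(); block !== null; block = readBlock()) {
    total += block.length;
    if (total > MAX_HEADER_BYTES) {
      // Fail fast instead of letting a crafted archive drive unbounded allocation.
      throw new Error("tar: extended header exceeds 1 MiB limit");
    }
    blocks.push(block);
    if (block.length < BLOCK_SIZE) break; // a final short block ends the header
  }
  // Concatenate the now-bounded set of header blocks.
  const out = new Uint8Array(total);
  let offset = 0;
  for (const b of blocks) {
    out.set(b, offset);
    offset += b.length;
  }
  return out;
}
```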
|
True
|
libiberty9.1.0: 1 vulnerabilities (highest severity is: 7.5) - autoclosed - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>libiberty9.1.0</b></p></summary>
<p>
<p>Library home page: <a href=https://ftp.gnu.org/pub/gnu/gcc/gcc-9.1.0/?wsslib=libiberty>https://ftp.gnu.org/pub/gnu/gcc/gcc-9.1.0/?wsslib=libiberty</a></p>
</p>
</p></p>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
</p>
<p></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2022-2879](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-2879) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.5 | libiberty9.1.0 | Direct | go1.18.7,go1.19.2 | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2022-2879</summary>
### Vulnerable Library - <b>libiberty9.1.0</b></p>
<p>
<p>Library home page: <a href=https://ftp.gnu.org/pub/gnu/gcc/gcc-9.1.0/?wsslib=libiberty>https://ftp.gnu.org/pub/gnu/gcc/gcc-9.1.0/?wsslib=libiberty</a></p>
<p>Found in base branch: <b>master</b></p></p>
</p></p>
### Vulnerable Source Files (1)
<p></p>
<p>
</p>
<p></p>
</p>
<p></p>
### Vulnerability Details
<p>
Reader.Read does not set a limit on the maximum size of file headers. A maliciously crafted archive could cause Read to allocate unbounded amounts of memory, potentially causing resource exhaustion or panics. After fix, Reader.Read limits the maximum size of header blocks to 1 MiB.
<p>Publish Date: 2022-10-14
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2022-2879>CVE-2022-2879</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.5</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: None
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://pkg.go.dev/vuln/GO-2022-1037">https://pkg.go.dev/vuln/GO-2022-1037</a></p>
<p>Release Date: 2022-10-14</p>
<p>Fix Resolution: go1.18.7,go1.19.2</p>
</p>
<p></p>
</details>
|
non_architecture
|
vulnerabilities highest severity is autoclosed vulnerable library library home page a href vulnerable source files vulnerabilities cve severity cvss dependency type fixed in remediation available high direct details cve vulnerable library library home page a href found in base branch master vulnerable source files vulnerability details reader read does not set a limit on the maximum size of file headers a maliciously crafted archive could cause read to allocate unbounded amounts of memory potentially causing resource exhaustion or panics after fix reader read limits the maximum size of header blocks to mib publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution
| 0
|
619
| 3,862,146,688
|
IssuesEvent
|
2016-04-08 00:43:28
|
rails-oceania/melbourne-ruby
|
https://api.github.com/repos/rails-oceania/melbourne-ruby
|
closed
|
Event Sourcing, part 2
|
40min architecture code intermediate presentation
|
I'd like to follow up my RubyConf talk, which was meant to make people curious about Event Sourcing, with a talk about some of the practicalities and where to start.
For those who missed my RubyConf talk, I'll quickly recap that before launching into particulars.
I'd allow for 45 mins for this talk.
|
1.0
|
Event Sourcing, part 2 - I'd like to follow up my RubyConf talk, which was meant to make people curious about Event Sourcing, with a talk about some of the practicalities and where to start.
For those who missed my RubyConf talk, I'll quickly recap that before launching into particulars.
I'd allow for 45 mins for this talk.
|
architecture
|
event sourcing part i d like to follow up my rubyconf talk which was meant to make people curious about event sourcing with a talk about some of the practicalities and where to start for those who missed my rubyconf talk i ll quickly recap that before launching into particulars i d allow for mins for this talk
| 1
|
10,480
| 27,022,476,040
|
IssuesEvent
|
2023-02-11 06:36:04
|
jsolly/blogthedata
|
https://api.github.com/repos/jsolly/blogthedata
|
closed
|
Separate Portfolio page into its own page, disconnecting it from being a category
|
Architecture SEO
|
#### Context
The current implementation of the portfolio page is quite hacky. It's technically a 'category' which causes all kinds of wonkiness because I am having to add lots of conditional logic inside categories.html in order to handle special cases on the portfolio page.
I also have to have more logic for meta tags, as the portfolio page has its own tags that are unique from the other category pages.
#### Ideal behavior
The portfolio page is on its own. It's not a 'category' of posts.
#### Things to consider
Will have to refactor how posts are brought onto a page that is not a category
Might need to do a database migration to remove the 'category' attribute from portfolio posts.
Will need to refactor the category templates to remove the special portfolio logic.
|
1.0
|
Separate Portfolio page into its own page, disconnecting it from being a category - #### Context
The current implementation of the portfolio page is quite hacky. It's technically a 'category' which causes all kinds of wonkiness because I am having to add lots of conditional logic inside categories.html in order to handle special cases on the portfolio page.
I also have to have more logic for meta tags, as the portfolio page has its own tags that are unique from the other category pages.
#### Ideal behavior
The portfolio page is on its own. It's not a 'category' of posts.
#### Things to consider
Will have to refactor how posts are brought onto a page that is not a category
Might need to do a database migration to remove the 'category' attribute from portfolio posts.
Will need to refactor the category templates to remove the special portfolio logic.
|
architecture
|
separate portfolio page into its own page disconnecting it from being a category context the current implementation of the portfolio page is quite hacky it s technically a category which causes all kinds of wonkiness because i am having to add lots of conditional logic inside categories html in order to handle special cases on the portfolio page i also have to have more logic for meta tags as the portfolio page has its own tags that are unique from the other category pages ideal behavior the portfolio page is on its own it s not a category of posts things to consider will have to refactor how posts are brought onto a page that is not a category might need to do a database migration to remove the category attribute from portfolio posts will need to refactor the category templates to remove the special portfolio logic
| 1
|
1,169
| 5,221,420,375
|
IssuesEvent
|
2017-01-27 01:28:11
|
jung-digital/ringa
|
https://api.github.com/repos/jung-digital/ringa
|
opened
|
Command: add no wait operator [[]]
|
architecture
|
Target Code:
```
controller.addListener([
  Command1,
  [[Command2]],
  Command3
]);
```
Command1 should run, Command2 should be started, but then Command3 should be run immediately without waiting for Command2 to finish.
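For clarity, here is a minimal sketch of how an executor could interpret the wrapper; the `Command` type and `runChain` function are illustrative stand-ins, not Ringa's actual internals.
```
// Illustrative semantics for the proposed [[ ]] no-wait operator.
type Command = () => Promise<void>;

async function runChain(chain: Array<Command | [[Command]]>): Promise<void> {
  for (const entry of chain) {
    if (Array.isArray(entry)) {
      const [[noWaitCommand]] = entry;
      void noWaitCommand(); // start it, deliberately without awaiting
    } else {
      await entry(); // ordinary commands gate the rest of the chain
    }
  }
}
```
With that interpretation, `runChain([command1, [[command2]], command3])` runs command1 to completion, kicks off command2, and starts command3 immediately, matching the target behaviour above.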
|
1.0
|
Command: add no wait operator [[]] - Target Code:
```
controller.addListener([
  Command1,
  [[Command2]],
  Command3
]);
```
Command1 should run, Command2 should be started, but then Command3 should be run immediately without waiting for Command2 to finish.
|
architecture
|
command add no wait operator target code controller addlistener command should run should be started but then should be run immediately without waiting for to finish
| 1
|
9,819
| 25,289,125,063
|
IssuesEvent
|
2022-11-16 22:07:07
|
spring-projects/sts4
|
https://api.github.com/repos/spring-projects/sts4
|
closed
|
exception thrown in VSCode when using latest snapshots
|
type: bug status: needs-investigation theme: internal-architecture for: vscode theme: refactoring
|
I am using VSCode with the latest pre-releases from:
- Language Support for Java
- Spring Boot Dashboard
- Spring Boot Tools (from the latest VSIX file)
I have a project open in my workspace, created from initializr (web + actuator), on Spring Boot 2.6.12.
After a little while, I see an error popup showing up, complaining about a problem when asking for `textDocument/codeAction`, and showing this exception in the log output:
```
java.util.concurrent.CompletionException: com.google.gson.JsonSyntaxException: java.lang.IllegalStateException: Expected BEGIN_ARRAY but was BEGIN_OBJECT at path $
at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.completeThrowable(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture$Completion.run(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
Caused by: com.google.gson.JsonSyntaxException: java.lang.IllegalStateException: Expected BEGIN_ARRAY but was BEGIN_OBJECT at path $
at com.google.gson.Gson.fromJson(Gson.java:1070)
at com.google.gson.Gson.fromJson(Gson.java:1129)
at org.springframework.ide.vscode.commons.languageserver.util.SimpleTextDocumentService.lambda$computeCodeActions$10(SimpleTextDocumentService.java:445)
at java.base/java.util.ArrayList.forEach(Unknown Source)
at org.springframework.ide.vscode.commons.languageserver.util.SimpleTextDocumentService.computeCodeActions(SimpleTextDocumentService.java:442)
at org.springframework.ide.vscode.commons.languageserver.util.SimpleTextDocumentService.lambda$codeAction$11(SimpleTextDocumentService.java:499)
... 5 more
Caused by: java.lang.IllegalStateException: Expected BEGIN_ARRAY but was BEGIN_OBJECT at path $
at com.google.gson.internal.bind.JsonTreeReader.expect(JsonTreeReader.java:163)
at com.google.gson.internal.bind.JsonTreeReader.beginArray(JsonTreeReader.java:72)
at com.google.gson.internal.bind.CollectionTypeAdapterFactory$Adapter.read(CollectionTypeAdapterFactory.java:80)
at com.google.gson.internal.bind.CollectionTypeAdapterFactory$Adapter.read(CollectionTypeAdapterFactory.java:61)
at com.google.gson.Gson.fromJson(Gson.java:1058)
... 10 more
```
|
1.0
|
exception thrown in VSCode when using latest snapshots - I am using VSCode with the latest pre-releases from:
- Language Support for Java
- Spring Boot Dashboard
- Spring Boot Tools (from the latest VSIX file)
I have a project open in my workspace, created from initializr (web + actuator), on Spring Boot 2.6.12.
After a little while, I see an error popup showing up, complaining about a problem when asking for `textDocument/codeAction`, and showing this exception in the log output:
```
java.util.concurrent.CompletionException: com.google.gson.JsonSyntaxException: java.lang.IllegalStateException: Expected BEGIN_ARRAY but was BEGIN_OBJECT at path $
at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture.completeThrowable(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(Unknown Source)
at java.base/java.util.concurrent.CompletableFuture$Completion.run(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.base/java.lang.Thread.run(Unknown Source)
Caused by: com.google.gson.JsonSyntaxException: java.lang.IllegalStateException: Expected BEGIN_ARRAY but was BEGIN_OBJECT at path $
at com.google.gson.Gson.fromJson(Gson.java:1070)
at com.google.gson.Gson.fromJson(Gson.java:1129)
at org.springframework.ide.vscode.commons.languageserver.util.SimpleTextDocumentService.lambda$computeCodeActions$10(SimpleTextDocumentService.java:445)
at java.base/java.util.ArrayList.forEach(Unknown Source)
at org.springframework.ide.vscode.commons.languageserver.util.SimpleTextDocumentService.computeCodeActions(SimpleTextDocumentService.java:442)
at org.springframework.ide.vscode.commons.languageserver.util.SimpleTextDocumentService.lambda$codeAction$11(SimpleTextDocumentService.java:499)
... 5 more
Caused by: java.lang.IllegalStateException: Expected BEGIN_ARRAY but was BEGIN_OBJECT at path $
at com.google.gson.internal.bind.JsonTreeReader.expect(JsonTreeReader.java:163)
at com.google.gson.internal.bind.JsonTreeReader.beginArray(JsonTreeReader.java:72)
at com.google.gson.internal.bind.CollectionTypeAdapterFactory$Adapter.read(CollectionTypeAdapterFactory.java:80)
at com.google.gson.internal.bind.CollectionTypeAdapterFactory$Adapter.read(CollectionTypeAdapterFactory.java:61)
at com.google.gson.Gson.fromJson(Gson.java:1058)
... 10 more
```
|
architecture
|
exception thrown in vscode when using latest snapshots i am using vscode with the latest pre releases from language support for java spring boot dashboard spring boot tools from the latest vsix file i have a project open in my workspace created from initializr web actuator on spring boot after a little while i see an error popup showing up complaining about a problem when asking for textdocument codeaction and showing this exception in the log output java util concurrent completionexception com google gson jsonsyntaxexception java lang illegalstateexception expected begin array but was begin object at path at java base java util concurrent completablefuture encodethrowable unknown source at java base java util concurrent completablefuture completethrowable unknown source at java base java util concurrent completablefuture uniapply tryfire unknown source at java base java util concurrent completablefuture completion run unknown source at java base java util concurrent threadpoolexecutor runworker unknown source at java base java util concurrent threadpoolexecutor worker run unknown source at java base java lang thread run unknown source caused by com google gson jsonsyntaxexception java lang illegalstateexception expected begin array but was begin object at path at com google gson gson fromjson gson java at com google gson gson fromjson gson java at org springframework ide vscode commons languageserver util simpletextdocumentservice lambda computecodeactions simpletextdocumentservice java at java base java util arraylist foreach unknown source at org springframework ide vscode commons languageserver util simpletextdocumentservice computecodeactions simpletextdocumentservice java at org springframework ide vscode commons languageserver util simpletextdocumentservice lambda codeaction simpletextdocumentservice java more caused by java lang illegalstateexception expected begin array but was begin object at path at com google gson internal bind jsontreereader expect jsontreereader java at com google gson internal bind jsontreereader beginarray jsontreereader java at com google gson internal bind collectiontypeadapterfactory adapter read collectiontypeadapterfactory java at com google gson internal bind collectiontypeadapterfactory adapter read collectiontypeadapterfactory java at com google gson gson fromjson gson java more
| 1
|
3,954
| 10,344,295,967
|
IssuesEvent
|
2019-09-04 10:52:21
|
open-zaak/open-zaak
|
https://api.github.com/repos/open-zaak/open-zaak
|
closed
|
As stakeholder, I want to have the Authorizations API exposed on OpenZaak
|
EPIC: Architecture
|
... so applications can request their permissions (and theoretically, set permissions).
**Description**
The Authorization API was left out of scope in #3. Which might or might not be the best choice, depending on #3. This US makes sure it gets in.
|
1.0
|
As stakeholder, I want to have the Authorizations API exposed on OpenZaak - ... so applications can request their permissions (and theoretically, set permissions).
**Description**
The Authorization API was left out of scope in #3. Which might or might not be the best choice, depending on #3. This US makes sure it gets in.
|
architecture
|
as stakeholder i want to have the authorizations api exposed on openzaak so applications can request their permissions and theoretically set permissions description the authorization api was left out of scope in which might or might not be the best choice depending on this us makes sure it gets in
| 1
|
7,907
| 19,916,085,929
|
IssuesEvent
|
2022-01-25 22:53:47
|
MicrosoftDocs/architecture-center
|
https://api.github.com/repos/MicrosoftDocs/architecture-center
|
closed
|
this sentence does not make sense...
|
doc-bug cxp triaged architecture-center/svc reference-architecture/subsvc Pri2
|
[Enter feedback here]
From this article: https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/dmz/nva-ha
This sentence does not make sense / is not understandable -
Since HA Ports for inbound traffic every individual TCP/UDP port needs to be opened in a dedicated load-balancing rule.
You may have meant something like
Since HA ports control (or restrict) inbound traffic, every individual TCP/UDP port needs to be opened in a dedicated load-balancing rule.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 9bab6a90-43a3-3d83-d680-26683a3d833d
* Version Independent ID: f8eea094-d297-93e6-06ac-7652d059734f
* Content: [Deploy highly available NVAs - Azure Architecture Center](https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/dmz/nva-ha)
* Content Source: [docs/reference-architectures/dmz/nva-ha.yml](https://github.com/microsoftdocs/architecture-center/blob/main/docs/reference-architectures/dmz/nva-ha.yml)
* Service: **architecture-center**
* Sub-service: **reference-architecture**
* GitHub Login: @telmosampaio
* Microsoft Alias: **pnp**
|
2.0
|
this sentence does not make sense... - [Enter feedback here]
From this article: https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/dmz/nva-ha
This sentence does not make sense / is not understandable -
Since HA Ports for inbound traffic every individual TCP/UDP port needs to be opened in a dedicated load-balancing rule.
You may have meant something like
Since HA ports control (or restrict) inbound traffic, every individual TCP/UDP port needs to be opened in a dedicated load-balancing rule.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 9bab6a90-43a3-3d83-d680-26683a3d833d
* Version Independent ID: f8eea094-d297-93e6-06ac-7652d059734f
* Content: [Deploy highly available NVAs - Azure Architecture Center](https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/dmz/nva-ha)
* Content Source: [docs/reference-architectures/dmz/nva-ha.yml](https://github.com/microsoftdocs/architecture-center/blob/main/docs/reference-architectures/dmz/nva-ha.yml)
* Service: **architecture-center**
* Sub-service: **reference-architecture**
* GitHub Login: @telmosampaio
* Microsoft Alias: **pnp**
|
architecture
|
this sentence does not make sense from this article this sentence does not make sense is not understandable since ha ports for inbound traffic every individual tcp udp port needs to be opened in a dedicated load balancing rule you may have meant something like since ha ports control or restrict inbound traffic every individual tcp udp port needs to be opened in a dedicated load balancing rule document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service architecture center sub service reference architecture github login telmosampaio microsoft alias pnp
| 1
|
3,453
| 9,645,281,781
|
IssuesEvent
|
2019-05-17 08:15:13
|
dotnet/docs
|
https://api.github.com/repos/dotnet/docs
|
closed
|
Non-working links
|
:book: guide - .NET Microservices :books: Area - .NET Guide :card_file_box: Technology - .NET Architecture Source - Docs.ms broken-link doc-bug
|
codebetter.com has been down for some time.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 84d83855-4370-90e8-024a-12c5ac8220e9
* Version Independent ID: 01be3db1-4a00-c8f2-aa59-59656f206b93
* Content: [Applying CQRS and CQS approaches in a DDD microservice in eShopOnContainers](https://docs.microsoft.com/en-us/dotnet/standard/microservices-architecture/microservice-ddd-cqrs-patterns/eshoponcontainers-cqrs-ddd-microservice#feedback)
* Content Source: [docs/standard/microservices-architecture/microservice-ddd-cqrs-patterns/eshoponcontainers-cqrs-ddd-microservice.md](https://github.com/dotnet/docs/blob/master/docs/standard/microservices-architecture/microservice-ddd-cqrs-patterns/eshoponcontainers-cqrs-ddd-microservice.md)
* Product: **dotnet**
* Technology: **dotnet-ebooks**
* GitHub Login: @CESARDELATORRE
* Microsoft Alias: **wiwagn**
|
1.0
|
Non-working links - codebetter.com has been down for some time.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 84d83855-4370-90e8-024a-12c5ac8220e9
* Version Independent ID: 01be3db1-4a00-c8f2-aa59-59656f206b93
* Content: [Applying CQRS and CQS approaches in a DDD microservice in eShopOnContainers](https://docs.microsoft.com/en-us/dotnet/standard/microservices-architecture/microservice-ddd-cqrs-patterns/eshoponcontainers-cqrs-ddd-microservice#feedback)
* Content Source: [docs/standard/microservices-architecture/microservice-ddd-cqrs-patterns/eshoponcontainers-cqrs-ddd-microservice.md](https://github.com/dotnet/docs/blob/master/docs/standard/microservices-architecture/microservice-ddd-cqrs-patterns/eshoponcontainers-cqrs-ddd-microservice.md)
* Product: **dotnet**
* Technology: **dotnet-ebooks**
* GitHub Login: @CESARDELATORRE
* Microsoft Alias: **wiwagn**
|
architecture
|
non working links codebetter com has been down for some time document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source product dotnet technology dotnet ebooks github login cesardelatorre microsoft alias wiwagn
| 1
|
207,397
| 7,127,893,407
|
IssuesEvent
|
2018-01-21 04:06:00
|
OperationCode/operationcode_frontend
|
https://api.github.com/repos/OperationCode/operationcode_frontend
|
closed
|
Add photos/bio to /team page
|
Priority: Medium Status: In Progress Type: Feature
|
# Feature
## Why is this feature being added?
We should make it easy to find board members on our `/team` page. Suggest we break down this page, ex. #contributors, #chapter-leaders, #board, #advisors, etc., and include photos so we can recognize them. Here's an example of what I'm envisioning (via GitHub):

*could be round images instead of square*
- [ ] There should be a `+` sign, or another icon so funders and other interested stakeholders can read up on their bios, ex. `operationcode.org/team/hollomancer` and learn more about Conrad, including GitHub, Twitter handle.
- [ ] Needs to be easy to send the link to x foundation or x corporation, and say, "Meet some of our board members and link to the exact spot where board members are listed.
## Technical Requirements
- [ ] Refactor StaffCard to make BoardCard obsolete, such that these components only render what they're provided (see the sketch after this list).
- [ ] Resize images for uniformity.
- [ ] Make image links relative.
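A rough TypeScript/React sketch of the direction the first requirement describes: one presentational card that renders only what it is given, so a separate BoardCard becomes redundant. All prop names here are invented for illustration.
```
// Hypothetical unified card: renders only the fields it receives.
import React from "react";

interface TeamCardProps {
  name: string;
  imageUrl: string; // relative link, per the requirement above
  role?: string;
  bioUrl?: string;  // e.g. a /team/<handle> detail page
}

export const TeamCard: React.FC<TeamCardProps> = ({ name, imageUrl, role, bioUrl }) => (
  <div className="team-card">
    <img src={imageUrl} alt={name} width={150} height={150} />
    <h3>{name}</h3>
    {role && <p>{role}</p>}
    {bioUrl && <a href={bioUrl}>+</a>}
  </div>
);
```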
|
1.0
|
Add photos/bio to /team page - # Feature
## Why is this feature being added?
We should make it easy to find board members on our `/team` page. Suggest we break down this page, ex. #contributors, #chapter-leaders, #board, #advisors, etc., and include photos so we can recognize them. Here's an example of what I'm envisioning (via GitHub):

*could be round images instead of square*
- [ ] There should be a `+` sign, or another icon so funders and other interested stakeholders can read up on their bios, ex. `operationcode.org/team/hollomancer` and learn more about Conrad, including GitHub, Twitter handle.
- [ ] Needs to be easy to send the link to x foundation or x corporation, and say, "Meet some of our board members and link to the exact spot where board members are listed.
## Technical Requirements
- [ ] Refactor StaffCard to make BoardCard obsolete, such that these components only render what they're provided.
- [ ] Resize images for uniformity.
- [ ] Make image links relative.
|
non_architecture
|
add photos bio to team page feature why is this feature being added we should make it easy to find board members on our team page suggest we break down this page ex contributors chapter leaders board advisors etc and include photos so we can recognize them here s an example of what i m envisioning via github could be round images instead of square there should be a sign or another icon so funders and other interested stakeholders can read up on their bios ex operationcode org team hollomancer and learn more about conrad including github twitter handle needs to be easy to send the link to x foundation or x corporation and say meet some of our board members and link to the exact spot where board members are listed technical requirements refactor staffcard to make boardcard obsolete such that these components only render what they re provided resize images for uniformity make image links relative
| 0
|
59,918
| 14,671,978,283
|
IssuesEvent
|
2020-12-30 09:31:08
|
Raku/old-issue-tracker
|
https://api.github.com/repos/Raku/old-issue-tracker
|
closed
|
MoarVM build fail on termux/Android 6.0.1
|
build
|
Migrated from [rt.perl.org#132785](https://rt-archive.perl.org/perl6/Ticket/Display.html?id=132785) (status was 'new')
Searchable as RT132785$
|
1.0
|
MoarVM build fail on termux/Android 6.0.1 - Migrated from [rt.perl.org#132785](https://rt-archive.perl.org/perl6/Ticket/Display.html?id=132785) (status was 'new')
Searchable as RT132785$
|
non_architecture
|
moarvm build fail on termux android migrated from status was new searchable as
| 0
|
242,407
| 7,841,901,318
|
IssuesEvent
|
2018-06-18 21:10:00
|
StrangeLoopGames/EcoIssues
|
https://api.github.com/repos/StrangeLoopGames/EcoIssues
|
closed
|
Website not loading additional layers (Linux-Server)
|
Medium Priority
|
Hi there,
I'm running my server on Linux and, since 0.6.2.5-beta, the map in the World Status page can't display additional layers (like AirPollutionSpread). Tested with Chrome (v66) and the Edge browser.
How to reproduce: Open the website and wait for the map to load -> Click on Select map layer and select something (happens for all layers) -> Map gets a little bit darker but nothing else changes (but an error occurs in the Chrome browser console).
Server versions: Mono linux standalone with 0.6.2.5, 0.7.3.3, 0.7.4.2
Edge Browser is not displaying ocean
Log from Chrome console when switching to AirPollutionSpread layer:
```
ecomap.js:419 Starting to parse AirPollutionSpread
ecomap.js:112 Uncaught TypeError: Cannot read property 'concat' of undefined
at lzwDecode (ecomap.js:112)
at parseImg (ecomap.js:311)
at parseBlock (ecomap.js:331)
```
Default view without selected layer:

View with selected layer:

Default view in Microsoft Edge:

|
1.0
|
Website not loading additional layers (Linux-Server) - Hi there,
I'm running my server on Linux and, since 0.6.2.5-beta, the map in the World Status page can't display additional layers (like AirPollutionSpread). Tested with Chrome (v66) and the Edge browser.
How to reproduce: Open the website and wait for the map to load -> Click on Select map layer and select something (happens for all layers) -> Map gets a little bit darker but nothing else changes (but an error occurs in the Chrome browser console).
Server versions: Mono linux standalone with 0.6.2.5, 0.7.3.3, 0.7.4.2
Edge Browser is not displaying ocean
Log from Chrome console when switching to AirPollutionSpread layer:
```
ecomap.js:419 Starting to parse AirPollutionSpread
ecomap.js:112 Uncaught TypeError: Cannot read property 'concat' of undefined
at lzwDecode (ecomap.js:112)
at parseImg (ecomap.js:311)
at parseBlock (ecomap.js:331)
```
Default view without selected layer:

View with selected layer:

Default view in Microsoft Edge:

|
non_architecture
|
website not loading additional layers linux server hi there i m running my server on linux and since beta the map in the world status page can t display additional layers like airpollutionspread tested with chrome v and the edge browser how to reproduce open the website and wait for the map to load click on select map layer and select something happens for all layers map gets a little bit darker but nothing else changes but an error occurs in the chrome browser console server versions mono linux standalone with edge browser is not displaying ocean log from chrome console when switching to airpollutionspread layer ecomap js starting to parse airpollutionspread ecomap js uncaught typeerror cannot read property concat of undefined at lzwdecode ecomap js at parseimg ecomap js at parseblock ecomap js default view without selected layer view with selected layer default view in microsoft edge
| 0
|
577,916
| 17,139,172,640
|
IssuesEvent
|
2021-07-13 07:40:29
|
googleapis/java-bigtable-hbase
|
https://api.github.com/repos/googleapis/java-bigtable-hbase
|
opened
|
bigtable.hbase.TestBufferedMutator: testAutoFlushOff failed
|
flakybot: issue priority: p1 type: bug
|
This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: a891335ce3179c45fade4f3683b7e09d38d0107a
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/64905140-8b04-4270-9ed2-a91e73d97e39), [Sponge](http://sponge2/64905140-8b04-4270-9ed2-a91e73d97e39)
status: failed
<details><summary>Test output</summary><br><pre>org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: UnauthenticatedException: 1 time, servers with issues: bigtable.googleapis.com
at com.google.cloud.bigtable.hbase.BigtableBufferedMutator.getExceptions(BigtableBufferedMutator.java:188)
at com.google.cloud.bigtable.hbase.BigtableBufferedMutator.handleExceptions(BigtableBufferedMutator.java:142)
at com.google.cloud.bigtable.hbase.BigtableBufferedMutator.flush(BigtableBufferedMutator.java:93)
at com.google.cloud.bigtable.hbase.TestBufferedMutator.testAutoFlushOff(TestBufferedMutator.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.apache.maven.surefire.junitcore.pc.Scheduler$1.run(Scheduler.java:410)
at org.apache.maven.surefire.junitcore.pc.InvokerStrategy.schedule(InvokerStrategy.java:54)
at org.apache.maven.surefire.junitcore.pc.Scheduler.schedule(Scheduler.java:367)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.apache.maven.surefire.junitcore.pc.Scheduler$1.run(Scheduler.java:410)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: com.google.api.gax.batching.BatchingException: Batching finished with 1 batches failed to apply due to: 1 ApiException(1 INTERNAL) and 0 partial failures.
at com.google.api.gax.batching.BatcherStats.asException(BatcherStats.java:147)
at com.google.api.gax.batching.BatcherImpl.close(BatcherImpl.java:290)
at com.google.cloud.bigtable.hbase.wrappers.veneer.BulkMutationVeneerApi.close(BulkMutationVeneerApi.java:68)
at com.google.cloud.bigtable.hbase.BigtableBufferedMutatorHelper.close(BigtableBufferedMutatorHelper.java:91)
at com.google.cloud.bigtable.hbase.BigtableBufferedMutator.close(BigtableBufferedMutator.java:85)
at com.google.cloud.bigtable.hbase.TestBufferedMutator.testAutoFlushOff(TestBufferedMutator.java:64)
... 31 more
</pre></details>
|
1.0
|
bigtable.hbase.TestBufferedMutator: testAutoFlushOff failed - This test failed!
To configure my behavior, see [the Flaky Bot documentation](https://github.com/googleapis/repo-automation-bots/tree/master/packages/flakybot).
If I'm commenting on this issue too often, add the `flakybot: quiet` label and
I will stop commenting.
---
commit: a891335ce3179c45fade4f3683b7e09d38d0107a
buildURL: [Build Status](https://source.cloud.google.com/results/invocations/64905140-8b04-4270-9ed2-a91e73d97e39), [Sponge](http://sponge2/64905140-8b04-4270-9ed2-a91e73d97e39)
status: failed
<details><summary>Test output</summary><br><pre>org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1 action: UnauthenticatedException: 1 time, servers with issues: bigtable.googleapis.com
at com.google.cloud.bigtable.hbase.BigtableBufferedMutator.getExceptions(BigtableBufferedMutator.java:188)
at com.google.cloud.bigtable.hbase.BigtableBufferedMutator.handleExceptions(BigtableBufferedMutator.java:142)
at com.google.cloud.bigtable.hbase.BigtableBufferedMutator.flush(BigtableBufferedMutator.java:93)
at com.google.cloud.bigtable.hbase.TestBufferedMutator.testAutoFlushOff(TestBufferedMutator.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.apache.maven.surefire.junitcore.pc.Scheduler$1.run(Scheduler.java:410)
at org.apache.maven.surefire.junitcore.pc.InvokerStrategy.schedule(InvokerStrategy.java:54)
at org.apache.maven.surefire.junitcore.pc.Scheduler.schedule(Scheduler.java:367)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
at org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
at org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
at org.junit.runners.ParentRunner.run(ParentRunner.java:413)
at org.junit.runners.Suite.runChild(Suite.java:128)
at org.junit.runners.Suite.runChild(Suite.java:27)
at org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
at org.apache.maven.surefire.junitcore.pc.Scheduler$1.run(Scheduler.java:410)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Suppressed: com.google.api.gax.batching.BatchingException: Batching finished with 1 batches failed to apply due to: 1 ApiException(1 INTERNAL) and 0 partial failures.
at com.google.api.gax.batching.BatcherStats.asException(BatcherStats.java:147)
at com.google.api.gax.batching.BatcherImpl.close(BatcherImpl.java:290)
at com.google.cloud.bigtable.hbase.wrappers.veneer.BulkMutationVeneerApi.close(BulkMutationVeneerApi.java:68)
at com.google.cloud.bigtable.hbase.BigtableBufferedMutatorHelper.close(BigtableBufferedMutatorHelper.java:91)
at com.google.cloud.bigtable.hbase.BigtableBufferedMutator.close(BigtableBufferedMutator.java:85)
at com.google.cloud.bigtable.hbase.TestBufferedMutator.testAutoFlushOff(TestBufferedMutator.java:64)
... 31 more
</pre></details>
|
non_architecture
|
bigtable hbase testbufferedmutator testautoflushoff failed this test failed to configure my behavior see if i m commenting on this issue too often add the flakybot quiet label and i will stop commenting commit buildurl status failed test output org apache hadoop hbase client retriesexhaustedwithdetailsexception failed action unauthenticatedexception time servers with issues bigtable googleapis com at com google cloud bigtable hbase bigtablebufferedmutator getexceptions bigtablebufferedmutator java at com google cloud bigtable hbase bigtablebufferedmutator handleexceptions bigtablebufferedmutator java at com google cloud bigtable hbase bigtablebufferedmutator flush bigtablebufferedmutator java at com google cloud bigtable hbase testbufferedmutator testautoflushoff testbufferedmutator java at sun reflect nativemethodaccessorimpl native method at sun reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at sun reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at java lang reflect method invoke method java at org junit runners model frameworkmethod runreflectivecall frameworkmethod java at org junit internal runners model reflectivecallable run reflectivecallable java at org junit runners model frameworkmethod invokeexplosively frameworkmethod java at org junit internal runners statements invokemethod evaluate invokemethod java at org junit runners parentrunner evaluate parentrunner java at org junit runners evaluate java at org junit runners parentrunner runleaf parentrunner java at org junit runners runchild java at org junit runners runchild java at org junit runners parentrunner run parentrunner java at org apache maven surefire junitcore pc scheduler run scheduler java at org apache maven surefire junitcore pc invokerstrategy schedule invokerstrategy java at org apache maven surefire junitcore pc scheduler schedule scheduler java at org junit runners parentrunner runchildren parentrunner java at org junit runners parentrunner access parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner evaluate parentrunner java at org junit runners parentrunner run parentrunner java at org junit runners suite runchild suite java at org junit runners suite runchild suite java at org junit runners parentrunner run parentrunner java at org apache maven surefire junitcore pc scheduler run scheduler java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java suppressed com google api gax batching batchingexception batching finished with batches failed to apply due to apiexception internal and partial failures at com google api gax batching batcherstats asexception batcherstats java at com google api gax batching batcherimpl close batcherimpl java at com google cloud bigtable hbase wrappers veneer bulkmutationveneerapi close bulkmutationveneerapi java at com google cloud bigtable hbase bigtablebufferedmutatorhelper close bigtablebufferedmutatorhelper java at com google cloud bigtable hbase bigtablebufferedmutator close bigtablebufferedmutator java at com google cloud bigtable hbase testbufferedmutator testautoflushoff testbufferedmutator java more
| 0
|
257,267
| 27,561,833,137
|
IssuesEvent
|
2023-03-07 22:49:14
|
samqws-marketing/coursera_naptime
|
https://api.github.com/repos/samqws-marketing/coursera_naptime
|
closed
|
CVE-2020-36185 (High) detected in multiple libraries - autoclosed
|
Mend: dependency security vulnerability
|
## CVE-2020-36185 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.9.0.jar</b>, <b>jackson-databind-2.8.11.4.jar</b>, <b>jackson-databind-2.3.3.jar</b></p></summary>
<p>
<details><summary><b>jackson-databind-2.9.0.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /home/wss-scanner/.ivy2/cache/com.fasterxml.jackson.core/jackson-databind/bundles/jackson-databind-2.9.0.jar</p>
<p>
Dependency Hierarchy:
- play-ehcache_2.12-2.6.25.jar (Root Library)
- play_2.12-2.6.25.jar
- play-json_2.12-2.6.14.jar
- jackson-datatype-jdk8-2.8.11.jar
- :x: **jackson-databind-2.9.0.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.8.11.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /home/wss-scanner/.ivy2/cache/com.fasterxml.jackson.core/jackson-databind/bundles/jackson-databind-2.8.11.4.jar</p>
<p>
Dependency Hierarchy:
- play-ehcache_2.12-2.6.25.jar (Root Library)
- play_2.12-2.6.25.jar
- :x: **jackson-databind-2.8.11.4.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.3.3.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Path to vulnerable library: /home/wss-scanner/.ivy2/cache/com.fasterxml.jackson.core/jackson-databind/bundles/jackson-databind-2.3.3.jar</p>
<p>
Dependency Hierarchy:
- sbt-plugin-2.4.4.jar (Root Library)
- sbt-js-engine-1.1.3.jar
- npm_2.10-1.1.1.jar
- webjars-locator-0.26.jar
- :x: **jackson-databind-2.3.3.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/samqws-marketing/coursera_naptime/commit/95750513b615ecf0ea9b7e14fb5f71e577d01a1f">95750513b615ecf0ea9b7e14fb5f71e577d01a1f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp2.datasources.SharedPoolDataSource.
<p>Publish Date: 2021-01-06
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-36185>CVE-2020-36185</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2021-01-06</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p>
</p>
</details>
<p></p>
|
True
|
CVE-2020-36185 (High) detected in multiple libraries - autoclosed - ## CVE-2020-36185 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jackson-databind-2.9.0.jar</b>, <b>jackson-databind-2.8.11.4.jar</b>, <b>jackson-databind-2.3.3.jar</b></p></summary>
<p>
<details><summary><b>jackson-databind-2.9.0.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /home/wss-scanner/.ivy2/cache/com.fasterxml.jackson.core/jackson-databind/bundles/jackson-databind-2.9.0.jar</p>
<p>
Dependency Hierarchy:
- play-ehcache_2.12-2.6.25.jar (Root Library)
- play_2.12-2.6.25.jar
- play-json_2.12-2.6.14.jar
- jackson-datatype-jdk8-2.8.11.jar
- :x: **jackson-databind-2.9.0.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.8.11.4.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to vulnerable library: /home/wss-scanner/.ivy2/cache/com.fasterxml.jackson.core/jackson-databind/bundles/jackson-databind-2.8.11.4.jar</p>
<p>
Dependency Hierarchy:
- play-ehcache_2.12-2.6.25.jar (Root Library)
- play_2.12-2.6.25.jar
- :x: **jackson-databind-2.8.11.4.jar** (Vulnerable Library)
</details>
<details><summary><b>jackson-databind-2.3.3.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Path to vulnerable library: /home/wss-scanner/.ivy2/cache/com.fasterxml.jackson.core/jackson-databind/bundles/jackson-databind-2.3.3.jar</p>
<p>
Dependency Hierarchy:
- sbt-plugin-2.4.4.jar (Root Library)
- sbt-js-engine-1.1.3.jar
- npm_2.10-1.1.1.jar
- webjars-locator-0.26.jar
- :x: **jackson-databind-2.3.3.jar** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/samqws-marketing/coursera_naptime/commit/95750513b615ecf0ea9b7e14fb5f71e577d01a1f">95750513b615ecf0ea9b7e14fb5f71e577d01a1f</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.8 mishandles the interaction between serialization gadgets and typing, related to org.apache.tomcat.dbcp.dbcp2.datasources.SharedPoolDataSource.
<p>Publish Date: 2021-01-06
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-36185>CVE-2020-36185</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Release Date: 2021-01-06</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.9.10.8</p>
</p>
</details>
<p></p>
|
non_architecture
|
cve high detected in multiple libraries autoclosed cve high severity vulnerability vulnerable libraries jackson databind jar jackson databind jar jackson databind jar jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library home wss scanner cache com fasterxml jackson core jackson databind bundles jackson databind jar dependency hierarchy play ehcache jar root library play jar play json jar jackson datatype jar x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to vulnerable library home wss scanner cache com fasterxml jackson core jackson databind bundles jackson databind jar dependency hierarchy play ehcache jar root library play jar x jackson databind jar vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api path to vulnerable library home wss scanner cache com fasterxml jackson core jackson databind bundles jackson databind jar dependency hierarchy sbt plugin jar root library sbt js engine jar npm jar webjars locator jar x jackson databind jar vulnerable library found in head commit a href found in base branch master vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to org apache tomcat dbcp datasources sharedpooldatasource publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution com fasterxml jackson core jackson databind
| 0
|
13,152
| 9,888,310,810
|
IssuesEvent
|
2019-06-25 11:14:46
|
wellcometrust/platform
|
https://api.github.com/repos/wellcometrust/platform
|
closed
|
Distinguish between “could not parse bag-info.txt” and “bag-info.txt does not exist”
|
📦 Storage service
|
Follow-up from https://github.com/wellcometrust/storage-service/pull/227
|
1.0
|
Distinguish between “could not parse bag-info.txt” and “bag-info.txt does not exist” - Follow-up from https://github.com/wellcometrust/storage-service/pull/227
|
non_architecture
|
distinguish between “could not parse bag info txt” and “bag info txt does not exist” follow up from
| 0
|
10,332
| 26,785,479,868
|
IssuesEvent
|
2023-02-01 02:09:13
|
facebook/react-native
|
https://api.github.com/repos/facebook/react-native
|
closed
|
Error "Can't find variable: require" on iOS (RN v.0.68.2)
|
Stale Needs: Triage :mag: Type: Old Architecture
|
### Description
Error _"Can't find variable: require"_ on 0.68.2 on Debug & Release.
Enumeration of presets and babel plugins did not solve the problem. The cache was cleared after each attempt. On the forums of other libraries, the developers dealt with a similar error with a patch in the new version.
The bug was discovered after fixing [another bug](https://github.com/facebook/react-native/issues/33954#issuecomment-1146979111) 0.68.2
Cache was cleared before each launch: react-native start --reset-cache
The babel config was tested from empty to what worked in previous versions and suggested in other forums and repositories ([1](https://github.com/facebook/react-native/issues/21048#issuecomment-426987192), [2](https://github.com/Tecode/react-native-mobx/blob/master/babel.config.js), [3](https://github.com/facebook/react-native/issues/22321)).
### Version
0.68.2
### Output of `npx react-native info`
System:
  OS: macOS 11.6
  CPU: (6) x64 Intel(R) Core(TM) i5-8500B CPU @ 3.00GHz
  Memory: 76.89 MB / 8.00 GB
  Shell: 5.8 - /bin/zsh
Binaries:
  Node: 16.15.0 - /usr/local/bin/node
  Yarn: 1.22.11 - /usr/local/bin/yarn
  npm: 8.5.5 - /usr/local/bin/npm
  Watchman: 2022.05.30.00 - /usr/local/bin/watchman
Managers:
  CocoaPods: 1.11.3 - /usr/local/bin/pod
SDKs:
  iOS SDK:
    Platforms: DriverKit 21.2, iOS 15.2, macOS 12.1, tvOS 15.2, watchOS 8.3
  Android SDK:
    API Levels: 28, 29, 30, 31
    Build Tools: 29.0.2, 30.0.2, 31.0.0, 32.0.0
    System Images: android-30 | Google APIs Intel x86 Atom, android-30 | Google Play Intel x86 Atom, android-31 | Google APIs Intel x86 Atom_64
  Android NDK: Not Found
IDEs:
  Android Studio: 2020.3 AI-203.7717.56.2031.7678000
  Xcode: 13.2.1/13C100 - /usr/bin/xcodebuild
Languages:
  Java: 1.8.0_292 - /usr/bin/javac
npmPackages:
  @react-native-community/cli: 7.0.3 => 7.0.3
  react: ^18.1.0 => 18.1.0
  react-native: ^0.68.2 => 0.68.2
  react-native-macos: Not Found
npmGlobalPackages:
  *react-native*: Not Found
### Steps to reproduce
Any Babel configuration. Tested on iOS 15.2 Simulator.
### Snack, code example, screenshot, or link to a repository

|
1.0
|
Error "Can't find variable: require" on iOS (RN v.0.68.2) - ### Description
Error _"Can't find variable: require"_ on 0.68.2 on Debug & Release.
Enumeration of presets and babel plugins did not solve the problem. The cache was cleared after each attempt. On the forums of other libraries, the developers dealt with a similar error with a patch in the new version.
The bug was discovered after fixing [another bug](https://github.com/facebook/react-native/issues/33954#issuecomment-1146979111) 0.68.2
Cache was cleared before each launch: react-native start --reset-cache
The babel config was tested from empty to what worked in previous versions and suggested in other forums and repositories ([1](https://github.com/facebook/react-native/issues/21048#issuecomment-426987192), [2](https://github.com/Tecode/react-native-mobx/blob/master/babel.config.js), [3](https://github.com/facebook/react-native/issues/22321)).
### Version
0.68.2
### Output of `npx react-native info`
System:
  OS: macOS 11.6
  CPU: (6) x64 Intel(R) Core(TM) i5-8500B CPU @ 3.00GHz
  Memory: 76.89 MB / 8.00 GB
  Shell: 5.8 - /bin/zsh
Binaries:
  Node: 16.15.0 - /usr/local/bin/node
  Yarn: 1.22.11 - /usr/local/bin/yarn
  npm: 8.5.5 - /usr/local/bin/npm
  Watchman: 2022.05.30.00 - /usr/local/bin/watchman
Managers:
  CocoaPods: 1.11.3 - /usr/local/bin/pod
SDKs:
  iOS SDK:
    Platforms: DriverKit 21.2, iOS 15.2, macOS 12.1, tvOS 15.2, watchOS 8.3
  Android SDK:
    API Levels: 28, 29, 30, 31
    Build Tools: 29.0.2, 30.0.2, 31.0.0, 32.0.0
    System Images: android-30 | Google APIs Intel x86 Atom, android-30 | Google Play Intel x86 Atom, android-31 | Google APIs Intel x86 Atom_64
  Android NDK: Not Found
IDEs:
  Android Studio: 2020.3 AI-203.7717.56.2031.7678000
  Xcode: 13.2.1/13C100 - /usr/bin/xcodebuild
Languages:
  Java: 1.8.0_292 - /usr/bin/javac
npmPackages:
  @react-native-community/cli: 7.0.3 => 7.0.3
  react: ^18.1.0 => 18.1.0
  react-native: ^0.68.2 => 0.68.2
  react-native-macos: Not Found
npmGlobalPackages:
  *react-native*: Not Found
### Steps to reproduce
Any Babel configuration. Tested on iOS 15.2 Simulator.
### Snack, code example, screenshot, or link to a repository

|
architecture
|
error can t find variable require on ios rn v description error can t find variable require on on debug release enumeration of presets and babel plugins did not solve the problem the cache was cleared after each attempt on the forums of other libraries the developers dealt with a similar error with a patch in the new version the bug was discovered after fixing cache was cleared before each launch react native start reset cache the babel config was tested from empty to what worked in previous versions and suggested in other forums and repositories version output of npx react native info system os macos cpu intel r core tm cpu memory mb gb shell bin zsh binaries node usr local bin node yarn usr local bin yarn npm usr local bin npm watchman usr local bin watchman managers cocoapods usr local bin pod sdks ios sdk platforms driverkit ios macos tvos watchos android sdk api levels build tools system images android google apis intel atom android google play intel atom android google apis intel atom android ndk not found ides android studio ai xcode usr bin xcodebuild languages java usr bin javac npmpackages react native community cli react react native react native macos not found npmglobalpackages react native not found steps to reproduce any babel configuration tested on ios simulator snack code example screenshot or link to a repository
| 1
|
6,553
| 14,877,114,902
|
IssuesEvent
|
2021-01-20 02:23:19
|
Azure/azure-sdk
|
https://api.github.com/repos/Azure/azure-sdk
|
opened
|
Board Review: Azure Mixed Reality Authentication client library
|
architecture board-review
|
Thank you for submitting this review request. Thorough review of your client library ensures that your APIs are consistent with the guidelines and the consumers of your client library have a consistently good experience when using Azure.
**The Architecture Board reviews [Track 2 libraries](https://azure.github.io/azure-sdk/general_introduction.html) only.** If your library does not meet this requirement, please reach out to [Architecture Board](adparch@microsoft.com) before creating the issue.
Please reference our [review process guidelines](https://azure.github.io/azure-sdk/policies_reviewprocess.html) to understand what is being asked for in the issue template.
To ensure consistency, all Tier-1 languages (C#, TypeScript, Java, Python) will generally be reviewed together. In expansive libraries, we will pair dynamic languages (Python, TypeScript) together, and strongly typed languages (C#, Java) together in separate meetings.
For Tier-2 languages (C, C++, Go, Android, iOS), the review will be on an as-needed basis.
**Before submitting, ensure you adjust the title of the issue appropriately.**
**Note that the required material must be included before a meeting can be scheduled.**
## Contacts and Timeline
* Responsible service team: ou-servicesdkdev@microsoft.com
* Main contacts: crtreasu, virivera, rgarcia, ariye
* Expected code complete date: 01/29
* Expected release date: 03/01/2021
## About the Service
* Link to documentation introducing/describing the service: https://review.docs.microsoft.com/en-us/azure/object-anchors/overview?branch=release-preview-aou
* Link to the service REST APIs: https://github.com/Azure/azure-rest-api-specs/tree/master/specification/mixedreality/data-plane/Microsoft.MixedReality/preview/2019-02-28-preview
* Link to GitHub issue for previous review sessions, if applicable: https://github.com/Azure/azure-sdk/issues/2005
## About the client library
* Name of the client library: Azure Mixed Reality Authentication
* Languages for this review: JavaScript, Java, Python
## Artifacts required (per language)
Please read through “API Review” section [here](https://azure.github.io/azure-sdk/policies_reviewprocess.html) to understand how these artifacts are generated. **It is critical that these artifacts are present and are in the right format. If not, the language architects cannot review them with the SDK Team’s API review tool.**
### .NET
n/a already completed
* APIView Link:
* Link to Champion Scenarios/Quickstart samples:
### Java
* APIView Link: https://apiview.dev/Assemblies/Review/f3a4bb684ffd4badada05eff7de952d5
* Link to Champion Scenarios/Quickstart samples:
### Python
* APIView Link:
* Link to Champion Scenarios/Quickstart samples:
### TypeScript
* APIView Link: https://apiview.dev/Assemblies/Review/4917626415bc448c8e2534e00c6f3a17
* Link to Champion Scenarios/Quickstart samples:
For all other languages, send a request to the Architecture Board to discuss the best format on individual basis.
## Thank you!
|
1.0
|
Board Review: Azure Mixed Reality Authentication client library - Thank you for submitting this review request. Thorough review of your client library ensures that your APIs are consistent with the guidelines and the consumers of your client library have a consistently good experience when using Azure.
**The Architecture Board reviews [Track 2 libraries](https://azure.github.io/azure-sdk/general_introduction.html) only.** If your library does not meet this requirement, please reach out to [Architecture Board](adparch@microsoft.com) before creating the issue.
Please reference our [review process guidelines](https://azure.github.io/azure-sdk/policies_reviewprocess.html) to understand what is being asked for in the issue template.
To ensure consistency, all Tier-1 languages (C#, TypeScript, Java, Python) will generally be reviewed together. In expansive libraries, we will pair dynamic languages (Python, TypeScript) together, and strongly typed languages (C#, Java) together in separate meetings.
For Tier-2 languages (C, C++, Go, Android, iOS), the review will be on an as-needed basis.
**Before submitting, ensure you adjust the title of the issue appropriately.**
**Note that the required material must be included before a meeting can be scheduled.**
## Contacts and Timeline
* Responsible service team: ou-servicesdkdev@microsoft.com
* Main contacts: crtreasu, virivera, rgarcia, ariye
* Expected code complete date: 01/29
* Expected release date: 03/01/2021
## About the Service
* Link to documentation introducing/describing the service: https://review.docs.microsoft.com/en-us/azure/object-anchors/overview?branch=release-preview-aou
* Link to the service REST APIs: https://github.com/Azure/azure-rest-api-specs/tree/master/specification/mixedreality/data-plane/Microsoft.MixedReality/preview/2019-02-28-preview
* Link to GitHub issue for previous review sessions, if applicable: https://github.com/Azure/azure-sdk/issues/2005
## About the client library
* Name of the client library: Azure Mixed Reality Authentication
* Languages for this review: JavaScript, Java, Python
## Artifacts required (per language)
Please read through “API Review” section [here](https://azure.github.io/azure-sdk/policies_reviewprocess.html) to understand how these artifacts are generated. **It is critical that these artifacts are present and are in the right format. If not, the language architects cannot review them with the SDK Team’s API review tool.**
### .NET
n/a already completed
* APIView Link:
* Link to Champion Scenarios/Quickstart samples:
### Java
* APIView Link: https://apiview.dev/Assemblies/Review/f3a4bb684ffd4badada05eff7de952d5
* Link to Champion Scenarios/Quickstart samples:
### Python
* APIView Link:
* Link to Champion Scenarios/Quickstart samples:
### TypeScript
* APIView Link: https://apiview.dev/Assemblies/Review/4917626415bc448c8e2534e00c6f3a17
* Link to Champion Scenarios/Quickstart samples:
For all other languages, send a request to the Architecture Board to discuss the best format on individual basis.
## Thank you!
|
architecture
|
board review azure mixed reality authentication client library thank you for submitting this review request thorough review of your client library ensures that your apis are consistent with the guidelines and the consumers of your client library have a consistently good experience when using azure the architecture board reviews only if your library does not meet this requirement please reach out to adparch microsoft com before creating the issue please reference our to understand what is being asked for in the issue template to ensure consistency all tier languages c typescript java python will generally be reviewed together in expansive libraries we will pair dynamic languages python typescript together and strongly typed languages c java together in separate meetings for tier languages c c go android ios the review will be on an as needed basis before submitting ensure you adjust the title of the issue appropriately note that the required material must be included before a meeting can be scheduled contacts and timeline responsible service team ou servicesdkdev microsoft com main contacts crtreasu virivera rgarcia ariye expected code complete date expected release date about the service link to documentation introducing describing the service link to the service rest apis link to github issue for previous review sessions if applicable about the client library name of the client library azure mixed reality authentication languages for this review javascript java python artifacts required per language please read through “api review” section to understand how these artifacts are generated it is critical that these artifacts are present and are in the right format if not the language architects cannot review them with the sdk team’s api review tool net n a already completed apiview link link to champion scenarios quickstart samples java apiview link link to champion scenarios quickstart samples python apiview link link to champion scenarios quickstart samples typescript apiview link link to champion scenarios quickstart samples for all other languages send a request to the architecture board to discuss the best format on individual basis thank you
| 1
|
4,928
| 11,851,414,736
|
IssuesEvent
|
2020-03-24 18:05:01
|
kubernetes/kubernetes
|
https://api.github.com/repos/kubernetes/kubernetes
|
closed
|
Research for creating a readCoreV1NamespacedPodStatus conformant test
|
area/conformance sig/architecture sig/testing
|
# Description
After looking through the endpoints that still need tests written, per APIsnoop, this appears to be an endpoint which isn't hit.
## Fetch dependencies
```shell
go get -v -u k8s.io/apimachinery/pkg/apis/meta/v1
go get -v -u k8s.io/client-go/kubernetes
go get -v -u k8s.io/client-go/tools/clientcmd
go get -v -u github.com/ghodss/yaml
```
## Test draft
```go
package main

import (
	"fmt"
	"flag"
	"time"
	"os"

	"k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"

	"github.com/ghodss/yaml"
)

func main() {
	// uses the current context in kubeconfig
	kubeconfig := flag.String("kubeconfig",
		fmt.Sprintf("%v/%v/%v", os.Getenv("HOME"), ".kube", "config"),
		"(optional) absolute path to the kubeconfig file")
	flag.Parse()

	config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		fmt.Println(err)
	}
	// make our work easier to find in the audit_event queries
	config.UserAgent = "live-test-writing"

	// creates the clientset
	clientset, _ := kubernetes.NewForConfig(config)

	// access the API to list pods
	_, err = clientset.CoreV1().Pods("default").Create(&v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "nginx",
			Labels: map[string]string{
				"pod-name": "nginx",
			},
		},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Image: "nginx",
				Name:  "nginx",
			}},
			RestartPolicy: v1.RestartPolicyNever,
		},
	})
	if err != nil {
		fmt.Println(err)
		return
	}

	time.Sleep(5 * time.Second)

	pod, err := clientset.CoreV1().Pods("default").Get("nginx", metav1.GetOptions{})
	if err != nil {
		fmt.Println(err)
		return
	}

	podYAML, err := yaml.Marshal(pod)
	if err != nil {
		fmt.Printf("err: %v\n", err)
		return
	}
	fmt.Println(string(podYAML))

	time.Sleep(5 * time.Second)

	err = clientset.CoreV1().Pods("default").Delete("nginx", &metav1.DeleteOptions{})
	if err != nil {
		fmt.Println(err)
		return
	}
}
```
## Test draft - Python working implementation
```python
# pip3 install kubernetes
from __future__ import print_function
import time
import kubernetes.client
from kubernetes.client.rest import ApiException
from pprint import pprint
from kubernetes.client.configuration import Configuration
from kubernetes.config import kube_config

configuration = Configuration()
configuration.host = None
kube_config.load_kube_config(client_configuration=configuration)
# Uncomment below to setup prefix (e.g. Bearer) for API key, if needed
# configuration.api_key_prefix['authorization'] = 'Bearer'

# create an instance of the API class
api_instance = kubernetes.client.CoreV1Api(kubernetes.client.ApiClient(configuration))

name = 'kindnet-c7vtg'  # str | name of the Pod
namespace = 'kube-system'  # str | object name and auth scope, such as for teams and projects
pretty = 'pretty_example'  # str | If 'true', then the output is pretty printed. (optional)

try:
    api_response = api_instance.read_namespaced_pod_status(name, namespace, pretty=pretty)
    pprint(api_response)
except ApiException as e:
    print("Exception when calling CoreV1Api->read_namespaced_pod_status: %s\n" % e)
```
# Verify with APISnoop
## create view for hit endpoints
```sql-mode
CREATE VIEW "public"."endpoints_hit_by_new_test" AS
WITH live_testing_endpoints AS (
SELECT DISTINCT
operation_id,
count(1) as hits
FROM
audit_event
WHERE bucket = 'apisnoop'
AND useragent = 'live-test-writing'
GROUP BY operation_id
), baseline AS (
SELECT DISTINCT
operation_id,
test_hits,
conf_hits
FROM endpoint_coverage where bucket != 'apisnoop'
)
SELECT DISTINCT
lte.operation_id,
b.test_hits as hit_by_ete,
lte.hits as hit_by_new_test
FROM live_testing_endpoints lte
JOIN baseline b ON (b.operation_id = lte.operation_id);
```
## create view for coverage changed
```sql-mode
CREATE OR REPLACE VIEW "public"."projected_change_in_coverage" AS
with baseline as (
SELECT *
FROM
stable_endpoint_stats
WHERE job != 'live'
), test as (
SELECT
count(1) as endpoints_hit
FROM
(
select
operation_id
FROM audit_event
WHERE useragent = 'live-test-writing'
EXCEPT
SELECT
operation_id
FROM
endpoint_coverage
WHERE test_hits > 0
) tested_endpoints
), coverage as (
SELECT
baseline.test_hits as old_coverage,
(baseline.test_hits::int + test.endpoints_hit::int ) as new_coverage
from baseline, test
)
select
'test_coverage' as category,
baseline.total_endpoints,
coverage.old_coverage,
coverage.new_coverage,
(coverage.new_coverage - coverage.old_coverage) as change_in_number
from baseline, coverage
;
```
## find endpoints hit by this test
```sql-mode
select * from endpoints_hit_by_new_test;
```
```
operation_id | hit_by_ete | hit_by_new_test
---------------------------+------------+-----------------
createCoreV1NamespacedPod | 1990 | 2
deleteCoreV1NamespacedPod | 2114 | 2
readCoreV1NamespacedPod | 11421 | 1
(3 rows)
```
## show the change in coverage
```sql-mode
select * from projected_change_in_coverage;
```
```
category | total_endpoints | old_coverage | new_coverage | change_in_number
---------------+-----------------+--------------+--------------+------------------
test_coverage | 430 | 167 | 167 | 0
(1 row)
```
# Final notes
From the endpoints hit report above, it doesn't appear that my draft test hit the target endpoint.
Would it be possible to get some help and/or advice on hitting the `/api/v1/namespaces/NAMESPACE/pods/PODNAME/status` endpoint?
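One hedged observation plus a sketch, not a confirmed fix: the Go draft's `Pods("default").Get(...)` maps to readCoreV1NamespacedPod, while a GET against the `/status` subresource is what should register as readCoreV1NamespacedPodStatus. A minimal check with the official Python client, assuming an existing `nginx` pod in the `default` namespace (both assumptions):
```python
# Sketch only: GET /api/v1/namespaces/default/pods/nginx/status, which should
# be audited as readCoreV1NamespacedPodStatus; pod name/namespace are assumed.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
status = v1.read_namespaced_pod_status(name="nginx", namespace="default")
print(status.status.phase)
```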
|
1.0
|
Research for creating a readCoreV1NamespacedPodStatus conformant test - # Description
After looking through the endpoints that still need tests written, per APIsnoop, this appears to be an endpoint which isn't hit.
## Fetch dependencies
```shell
go get -v -u k8s.io/apimachinery/pkg/apis/meta/v1
go get -v -u k8s.io/client-go/kubernetes
go get -v -u k8s.io/client-go/tools/clientcmd
go get -v -u github.com/ghodss/yaml
```
## Test draft
```go
package main

import (
	"fmt"
	"flag"
	"time"
	"os"

	"k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"

	"github.com/ghodss/yaml"
)

func main() {
	// uses the current context in kubeconfig
	kubeconfig := flag.String("kubeconfig",
		fmt.Sprintf("%v/%v/%v", os.Getenv("HOME"), ".kube", "config"),
		"(optional) absolute path to the kubeconfig file")
	flag.Parse()

	config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		fmt.Println(err)
	}
	// make our work easier to find in the audit_event queries
	config.UserAgent = "live-test-writing"

	// creates the clientset
	clientset, _ := kubernetes.NewForConfig(config)

	// access the API to list pods
	_, err = clientset.CoreV1().Pods("default").Create(&v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "nginx",
			Labels: map[string]string{
				"pod-name": "nginx",
			},
		},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Image: "nginx",
				Name:  "nginx",
			}},
			RestartPolicy: v1.RestartPolicyNever,
		},
	})
	if err != nil {
		fmt.Println(err)
		return
	}

	time.Sleep(5 * time.Second)

	pod, err := clientset.CoreV1().Pods("default").Get("nginx", metav1.GetOptions{})
	if err != nil {
		fmt.Println(err)
		return
	}

	podYAML, err := yaml.Marshal(pod)
	if err != nil {
		fmt.Printf("err: %v\n", err)
		return
	}
	fmt.Println(string(podYAML))

	time.Sleep(5 * time.Second)

	err = clientset.CoreV1().Pods("default").Delete("nginx", &metav1.DeleteOptions{})
	if err != nil {
		fmt.Println(err)
		return
	}
}
```
## Test draft - Python working implementation
```python
# pip3 install kubernetes
from __future__ import print_function
import time
import kubernetes.client
from kubernetes.client.rest import ApiException
from pprint import pprint
from kubernetes.client.configuration import Configuration
from kubernetes.config import kube_config

configuration = Configuration()
configuration.host = None
kube_config.load_kube_config(client_configuration=configuration)
# Uncomment below to setup prefix (e.g. Bearer) for API key, if needed
# configuration.api_key_prefix['authorization'] = 'Bearer'

# create an instance of the API class
api_instance = kubernetes.client.CoreV1Api(kubernetes.client.ApiClient(configuration))

name = 'kindnet-c7vtg'  # str | name of the Pod
namespace = 'kube-system'  # str | object name and auth scope, such as for teams and projects
pretty = 'pretty_example'  # str | If 'true', then the output is pretty printed. (optional)

try:
    api_response = api_instance.read_namespaced_pod_status(name, namespace, pretty=pretty)
    pprint(api_response)
except ApiException as e:
    print("Exception when calling CoreV1Api->read_namespaced_pod_status: %s\n" % e)
```
# Verify with APISnoop
## create view for hit endpoints
```sql-mode
CREATE VIEW "public"."endpoints_hit_by_new_test" AS
WITH live_testing_endpoints AS (
SELECT DISTINCT
operation_id,
count(1) as hits
FROM
audit_event
WHERE bucket = 'apisnoop'
AND useragent = 'live-test-writing'
GROUP BY operation_id
), baseline AS (
SELECT DISTINCT
operation_id,
test_hits,
conf_hits
FROM endpoint_coverage where bucket != 'apisnoop'
)
SELECT DISTINCT
lte.operation_id,
b.test_hits as hit_by_ete,
lte.hits as hit_by_new_test
FROM live_testing_endpoints lte
JOIN baseline b ON (b.operation_id = lte.operation_id);
```
## create view for coverage changed
```sql-mode
CREATE OR REPLACE VIEW "public"."projected_change_in_coverage" AS
with baseline as (
SELECT *
FROM
stable_endpoint_stats
WHERE job != 'live'
), test as (
SELECT
count(1) as endpoints_hit
FROM
(
select
operation_id
FROM audit_event
WHERE useragent = 'live-test-writing'
EXCEPT
SELECT
operation_id
FROM
endpoint_coverage
WHERE test_hits > 0
) tested_endpoints
), coverage as (
SELECT
baseline.test_hits as old_coverage,
(baseline.test_hits::int + test.endpoints_hit::int ) as new_coverage
from baseline, test
)
select
'test_coverage' as category,
baseline.total_endpoints,
coverage.old_coverage,
coverage.new_coverage,
(coverage.new_coverage - coverage.old_coverage) as change_in_number
from baseline, coverage
;
```
## find endpoints hit by this test
```sql-mode
select * from endpoints_hit_by_new_test;
```
```
operation_id | hit_by_ete | hit_by_new_test
---------------------------+------------+-----------------
createCoreV1NamespacedPod | 1990 | 2
deleteCoreV1NamespacedPod | 2114 | 2
readCoreV1NamespacedPod | 11421 | 1
(3 rows)
```
## show the change in coverage
```sql-mode
select * from projected_change_in_coverage;
```
```
category | total_endpoints | old_coverage | new_coverage | change_in_number
---------------+-----------------+--------------+--------------+------------------
test_coverage | 430 | 167 | 167 | 0
(1 row)
```
# Final notes
From the endpoints hit report above, it doesn't appear that my draft test hit the target endpoint.
Would it be possible to get some help and/or advice on hitting the `/api/v1/namespaces/NAMESPACE/pods/PODNAME/status` endpoint?
|
architecture
|
research for creating a conformant test description after looking through endpoints which need tests written for in apisnoop this appears to be an endpoint which isn t hit fetch dependencies shell go get v u io apimachinery pkg apis meta go get v u io client go kubernetes go get v u io client go tools clientcmd go get v u github com ghodss yaml test draft go package main import fmt flag time os io api core io apimachinery pkg apis meta io client go kubernetes io client go tools clientcmd github com ghodss yaml func main uses the current context in kubeconfig kubeconfig flag string kubeconfig fmt sprintf v v v os getenv home kube config optional absolute path to the kubeconfig file flag parse config err clientcmd buildconfigfromflags kubeconfig if err nil fmt println err make our work easier to find in the audit event queries config useragent live test writing creates the clientset clientset kubernetes newforconfig config access the api to list pods err clientset pods default create pod objectmeta objectmeta name nginx labels map string pod name nginx spec podspec containers container image nginx name nginx restartpolicy restartpolicynever if err nil fmt println err return time sleep time second pod err clientset pods default get nginx getoptions if err nil fmt println err return podyaml err yaml marshal pod if err nil fmt printf err v n err return fmt println string podyaml time sleep time second err clientset pods default delete nginx deleteoptions if err nil fmt println err return test draft python working implementation python install kubernetes from future import print function import time import kubernetes client from kubernetes client rest import apiexception from pprint import pprint from kubernetes client configuration import configuration from kubernetes config import kube config configuration configuration configuration host none kube config load kube config client configuration configuration uncomment below to setup prefix e g bearer for api key if needed configuration api key prefix bearer create an instance of the api class api instance kubernetes client kubernetes client apiclient configuration name kindnet str name of the pod namespace kube system str object name and auth scope such as for teams and projects pretty pretty example str if true then the output is pretty printed optional try api response api instance read namespaced pod status name namespace pretty pretty pprint api response except apiexception as e print exception when calling read namespaced pod status s n e verify with apisnoop create view for hit endpoints sql mode create view public endpoints hit by new test as with live testing endpoints as select distinct operation id count as hits from audit event where bucket apisnoop and useragent live test writing group by operation id baseline as select distinct operation id test hits conf hits from endpoint coverage where bucket apisnoop select distinct lte operation id b test hits as hit by ete lte hits as hit by new test from live testing endpoints lte join baseline b on b operation id lte operation id create view for coverage changed sql mode create or replace view public projected change in coverage as with baseline as select from stable endpoint stats where job live test as select count as endpoints hit from select operation id from audit event where useragent live test writing except select operation id from endpoint coverage where test hits tested endpoints coverage as select baseline test hits as old coverage baseline test hits int test endpoints hit int as 
new coverage from baseline test select test coverage as category baseline total endpoints coverage old coverage coverage new coverage coverage new coverage coverage old coverage as change in number from baseline coverage find endpoints hit by this test sql mode select from endpoints hit by new test operation id hit by ete hit by new test rows show the change in coverage sql mode select from projected change in coverage category total endpoints old coverage new coverage change in number test coverage row final notes from the endpoints hit report above it doesn t appear that my draft test hit the target endpoint would it be possible for some help and or advice on hitting the api namespaces namespace pods podname status endpoint
| 1
|
8,702
| 23,287,981,996
|
IssuesEvent
|
2022-08-05 18:45:42
|
Azure/azure-sdk
|
https://api.github.com/repos/Azure/azure-sdk
|
closed
|
Board Review: metrics advisor (Python & .net)
|
architecture board-review
|
## The Basics
* Service team responsible for the client library: Metrics Advisor
* Link to documentation describing the service: https://docs.microsoft.com/en-us/azure/cognitive-services/metrics-advisor/
* Contact email (if service team, provide PM and Dev Lead):
bix@microsoft.com, bowgong@microsoft.com (dev)
quying@microsoft.com (PM)
## About this client library
* Name of the client library: azure-ai-metricsadvisor
* Languages for this review: .net/Python
* Link to the service REST APIs:
https://github.com/bowgong/azure-rest-api-specs/blob/metricsadvisor-preview/specification/cognitiveservices/data-plane/MetricsAdvisor/preview/v1.0/MetricsAdvisor.json
https://westus2.dev.cognitive.microsoft.com/docs/services/MetricsAdvisor/operations/createDataFeed
## Artifacts required (per language)
We use an API review tool ([apiview](https://apiview.azurewebsites.net)) to support .NET and Java API reviews. For Python and TypeScript, use the API extractor tool, then submit the output as a Draft PR to the relevant repository (azure-sdk-for-python or azure-sdk-for-js).
### .NET
* [APIView](https://apiview.dev/Assemblies/Review/8caf3dd1661c45228d8081a536cca3bc)
* Link to samples for champion scenarios: https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/metricsadvisor/Azure.AI.MetricsAdvisor/samples/README.md
### Python
* [APIView](https://apiview.dev/Assemblies/Review/6acc354ec6c5421b82a081a07b481df0)
* Link to samples for champion scenarios: https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/metricsadvisor/azure-ai-metricsadvisor/samples
## Champion Scenarios
A champion scenario is a use case that the consumer of the client library is commonly expected to perform. Champion scenarios are used to ensure the developer experience is exemplary for the common cases. You need to show the entire code sample (including error handling, as an example) for the champion scenarios.
* Champion Scenario 1:
* Describe the champion scenario: create a data feed to ingest data
* Estimate the percentage of developers using the service who would use the champion scenario
* Link to the code samples: [Python](https://gist.github.com/xiangyan99/6c367d45168294b043b2f5685d57f584#create-data-feed) | [.NET](https://gist.github.com/kinelski/c072790394398c37d186df611f6bea44#scenario-1-datafeed-creation)
* Champion Scenario 2:
* Describe the champion scenario: create a configuration to let the service know whether a point is an anomaly
* Estimate the percentage of developers using the service who would use the champion scenario
* Link to the code samples: [Python](https://gist.github.com/xiangyan99/6c367d45168294b043b2f5685d57f584#create-detection-configuration) | [.NET](https://gist.github.com/kinelski/c072790394398c37d186df611f6bea44#2d-applyingtuning-anomaly-detection)
* Champion Scenario 3:
* Describe the champion scenario: configure the service when to trigger an alert
* Estimate the percentage of developers using the service who would use the champion scenario
* Link to the code samples: [Python](https://gist.github.com/xiangyan99/6c367d45168294b043b2f5685d57f584#config-alert-configuration) | [.NET](https://gist.github.com/kinelski/c072790394398c37d186df611f6bea44#scenario-3-configure-alerts-and-get-incidents-notification-using-a-hook)
* Champion Scenario 4:
* Describe the champion scenario: query anomalies & alerts
* Estimate the percentage of developers using the service who would use the champion scenario
* Link to the code samples: [Python](https://gist.github.com/xiangyan99/6c367d45168294b043b2f5685d57f584#query-anomalies-for-alert-configuration) | [.NET](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/metricsadvisor/Azure.AI.MetricsAdvisor/README.md#query-detected-anomalies-and-triggered-alerts)
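As a hedged illustration only (not part of the submitted artifacts): the Python champion scenarios all start from client construction along these lines, assuming the `azure-ai-metricsadvisor` preview package under review; endpoint and key values are placeholders.
```python
# Hedged sketch: class names assumed from the azure-ai-metricsadvisor preview
# under review; "<...>" values are placeholders, not real credentials.
from azure.ai.metricsadvisor import (
    MetricsAdvisorKeyCredential,
    MetricsAdvisorAdministrationClient,
)

credential = MetricsAdvisorKeyCredential("<subscription_key>", "<api_key>")
admin_client = MetricsAdvisorAdministrationClient(
    "https://<resource-name>.cognitiveservices.azure.com/", credential
)
# Champion scenarios 1-3 (data feeds, detection configurations, alert
# configurations) are administration operations on this client; scenario 4
# queries use the sibling MetricsAdvisorClient.
```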
|
1.0
|
Board Review: metrics advisor (Python & .net) - ## The Basics
* Service team responsible for the client library: Metrics Advisor
* Link to documentation describing the service: https://docs.microsoft.com/en-us/azure/cognitive-services/metrics-advisor/
* Contact email (if service team, provide PM and Dev Lead):
bix@microsoft.com, bowgong@microsoft.com (dev)
quying@microsoft.com (PM)
## About this client library
* Name of the client library: azure-ai-metricsadvisor
* Languages for this review: .net/Python
* Link to the service REST APIs:
https://github.com/bowgong/azure-rest-api-specs/blob/metricsadvisor-preview/specification/cognitiveservices/data-plane/MetricsAdvisor/preview/v1.0/MetricsAdvisor.json
https://westus2.dev.cognitive.microsoft.com/docs/services/MetricsAdvisor/operations/createDataFeed
## Artifacts required (per language)
We use an API review tool ([apiview](https://apiview.azurewebsites.net)) to support .NET and Java API reviews. For Python and TypeScript, use the API extractor tool, then submit the output as a Draft PR to the relevant repository (azure-sdk-for-python or azure-sdk-for-js).
### .NET
* [APIView](https://apiview.dev/Assemblies/Review/8caf3dd1661c45228d8081a536cca3bc)
* Link to samples for champion scenarios: https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/metricsadvisor/Azure.AI.MetricsAdvisor/samples/README.md
### Python
* [APIView](https://apiview.dev/Assemblies/Review/6acc354ec6c5421b82a081a07b481df0)
* Link to samples for champion scenarios: https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/metricsadvisor/azure-ai-metricsadvisor/samples
## Champion Scenarios
A champion scenario is a use case that the consumer of the client library is commonly expected to perform. Champion scenarios are used to ensure the developer experience is exemplary for the common cases. You need to show the entire code sample (including error handling, as an example) for the champion scenarios.
* Champion Scenario 1:
* Describe the champion scenario: create a data feed to ingest data
* Estimate the percentage of developers using the service who would use the champion scenario
* Link to the code samples: [Python](https://gist.github.com/xiangyan99/6c367d45168294b043b2f5685d57f584#create-data-feed) | [.NET](https://gist.github.com/kinelski/c072790394398c37d186df611f6bea44#scenario-1-datafeed-creation)
* Champion Scenario 2:
* Describe the champion scenario: create a configuration to let the service know whether a point is an anomaly
* Estimate the percentage of developers using the service who would use the champion scenario
* Link to the code samples: [Python](https://gist.github.com/xiangyan99/6c367d45168294b043b2f5685d57f584#create-detection-configuration) | [.NET](https://gist.github.com/kinelski/c072790394398c37d186df611f6bea44#2d-applyingtuning-anomaly-detection)
* Champion Scenario 3:
* Describe the champion scenario: configure the service when to trigger an alert
* Estimate the percentage of developers using the service who would use the champion scenario
* Link to the code samples: [Python](https://gist.github.com/xiangyan99/6c367d45168294b043b2f5685d57f584#config-alert-configuration) | [.NET](https://gist.github.com/kinelski/c072790394398c37d186df611f6bea44#scenario-3-configure-alerts-and-get-incidents-notification-using-a-hook)
* Champion Scenario 4:
* Describe the champion scenario: query anomalies & alerts
* Estimate the percentage of developers using the service who would use the champion scenario
* Link to the code samples: [Python](https://gist.github.com/xiangyan99/6c367d45168294b043b2f5685d57f584#query-anomalies-for-alert-configuration) | [.NET](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/metricsadvisor/Azure.AI.MetricsAdvisor/README.md#query-detected-anomalies-and-triggered-alerts)
|
architecture
|
board review metrics advisor python net the basics service team responsible for the client library metrics advisor link to documentation describing the service contact email if service team provide pm and dev lead bix microsoft com bowgong microsoft com dev quying microsoft com pm about this client library name of the client library azure ai metricsadvisor languages for this review net python link to the service rest apis artifacts required per language we use an api review tool to support net and java api reviews for python and typescript use the api extractor tool then submit the output as a draft pr to the relevant repository azure sdk for python or azure sdk for js net link to samples for champion scenarios python link to samples for champion scenarios champion scenarios a champion scenario is a use case that the consumer of the client library is commonly expected to perform champion scenarios are used to ensure the developer experience is exemplary for the common cases you need to show the entire code sample including error handling as an example for the champion scenarios champion scenario describe the champion scenario create a data feed to ingest data estimate the percentage of developers using the service who would use the champion scenario link to the code samples champion scenario describe the champion scenario create a configuration to let service know whether a point is anomaly estimate the percentage of developers using the service who would use the champion scenario link to the code samples champion scenario describe the champion scenario configure the service when to trigger an alert estimate the percentage of developers using the service who would use the champion scenario link to the code samples champion scenario describe the champion scenario query anomalies alerts estimate the percentage of developers using the service who would use the champion scenario link to the code samples
| 1
|
11,112
| 28,051,489,438
|
IssuesEvent
|
2023-03-29 06:13:37
|
nim-lang/Nim
|
https://api.github.com/repos/nim-lang/Nim
|
closed
|
codegen with tuple[x: int, y: DateTime] doesn't compile with vc when using --gc:arc
|
OS/Architecture Specific ARC/ORC Memory Management
|
The following code doesn't compile with vc.
```nim
import times

proc p1(): tuple[x: int, y: DateTime] =
  (1, now())

echo p1()
```
The VC compiler says this line cannot be compiled:
```
static NIM_CONST tyTuple__EJrTnJxzlRNIm7iyy5gXzg TM__hh1wlisFj9asP03YxyloVEw_2 = {}
```
VC doesn't support empty struct.
```
$ nim -v
Nim Compiler Version 1.5.1 [Windows: i386]
Compiled at 2021-07-12
Copyright (c) 2006-2021 by Andreas Rumpf
active boot switches: -d:release
```
|
1.0
|
codegen with tuple[x: int, y: DateTime] doesn't compile with vc when using --gc:arc - The following code doesn't compile with vc.
```nim
import times

proc p1(): tuple[x: int, y: DateTime] =
  (1, now())

echo p1()
```
The VC compiler says this line cannot be compiled:
```
static NIM_CONST tyTuple__EJrTnJxzlRNIm7iyy5gXzg TM__hh1wlisFj9asP03YxyloVEw_2 = {}
```
VC doesn't support empty struct.
```
$ nim -v
Nim Compiler Version 1.5.1 [Windows: i386]
Compiled at 2021-07-12
Copyright (c) 2006-2021 by Andreas Rumpf
active boot switches: -d:release
```
|
architecture
|
codegen with tuple doesn t compile with vc when using gc arc the following code doesn t compile with vc nim import times proc tuple now echo the vc compiler say this line cannot be compile static nim const tytuple tm vc doesn t support empty struct nim v nim compiler version compiled at copyright c by andreas rumpf active boot switches d release
| 1
|
11,117
| 28,065,515,067
|
IssuesEvent
|
2023-03-29 15:09:18
|
MicrosoftDocs/architecture-center
|
https://api.github.com/repos/MicrosoftDocs/architecture-center
|
closed
|
Why does the title have AWS in it?
|
doc-enhancement triaged architecture-center/svc cloud-fundamentals/subsvc Pri2
|
I do not find any AWS-specific information or comparison on this page, so I am wondering why the title has "AWS" in it.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d42fd9ae-205d-042a-4d38-4ca356e27d17
* Version Independent ID: 694c671b-430e-31b6-9133-2324bb3a2efa
* Content: [Comparing AWS and Azure regions and zones - Azure Architecture Center](https://docs.microsoft.com/en-us/azure/architecture/aws-professional/regions-zones)
* Content Source: [docs/aws-professional/regions-zones.md](https://github.com/microsoftdocs/architecture-center/blob/master/docs/aws-professional/regions-zones.md)
* Service: **architecture-center**
* Sub-service: **cloud-fundamentals**
* GitHub Login: @doodlemania2
* Microsoft Alias: **pnp**
|
1.0
|
Why does the title have AWS in it? -
I do not find any AWS-specific information or comparison on this page, so I am wondering why the title has "AWS" in it.
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: d42fd9ae-205d-042a-4d38-4ca356e27d17
* Version Independent ID: 694c671b-430e-31b6-9133-2324bb3a2efa
* Content: [Comparing AWS and Azure regions and zones - Azure Architecture Center](https://docs.microsoft.com/en-us/azure/architecture/aws-professional/regions-zones)
* Content Source: [docs/aws-professional/regions-zones.md](https://github.com/microsoftdocs/architecture-center/blob/master/docs/aws-professional/regions-zones.md)
* Service: **architecture-center**
* Sub-service: **cloud-fundamentals**
* GitHub Login: @doodlemania2
* Microsoft Alias: **pnp**
|
architecture
|
why the title has aws in it do not find any aws information specific or comparison in this page and so wondering why the title has aws in it document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service architecture center sub service cloud fundamentals github login microsoft alias pnp
| 1
|
11,356
| 30,243,773,828
|
IssuesEvent
|
2023-07-06 15:04:38
|
huggingface/datasets-server
|
https://api.github.com/repos/huggingface/datasets-server
|
closed
|
Update datasets to 2.11.0
|
refactoring / architecture
|
See https://github.com/huggingface/datasets/releases/tag/2.11.0
TODO: See discussions below
- [x] #1009
- [x] #1280
- [x] #1281
- [x] Use writer_batch_size for ArrowBasedBuilder
- [ ] Use direct cast from binary to Audio/Image
- [ ] Refresh datasets that use numpy.load
Useful changes for the datasets server (please complete if there are more, @huggingface/datasets)
> Use soundfile for mp3 decoding instead of torchaudio by @polinaeterna in https://github.com/huggingface/datasets/pull/5573
>
> - this removes the dependency on pytorch for decoding audio files
> - this was possible with soundfile 0.12 which bundles libsndfile binaries at a recent version with MP3 support
should we remove the dependency on torch and torchaudio? cc @polinaeterna
> Add writer_batch_size for ArrowBasedBuilder by @lhoestq in https://github.com/huggingface/datasets/pull/5565
> - allows specifying the row group / record batch size when you download_and_prepare() a dataset
Needed for https://github.com/huggingface/datasets-server/pull/833 I think; cc @lhoestq
> Allow direct cast from binary to Audio/Image by @mariosasko in https://github.com/huggingface/datasets/pull/5644
Should we adapt the code in https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/features.py due to that?
> Support streaming datasets with numpy.load by @albertvillanova in https://github.com/huggingface/datasets/pull/5626
should we refresh some datasets after that?
|
1.0
|
Update datasets to 2.11.0 - See https://github.com/huggingface/datasets/releases/tag/2.11.0
TODO: See discussions below
- [x] #1009
- [x] #1280
- [x] #1281
- [x] Use writer_batch_size for ArrowBasedBuilder
- [ ] Use direct cast from binary to Audio/Image
- [ ] Refresh datasets that use numpy.load
Useful changes for the datasets server (please complete if there are more, @huggingface/datasets)
> Use soundfile for mp3 decoding instead of torchaudio by @polinaeterna in https://github.com/huggingface/datasets/pull/5573
>
> - this removes the dependency on pytorch for decoding audio files
> - this was possible with soundfile 0.12 which bundles libsndfile binaries at a recent version with MP3 support
should we remove the dependency on torch and torchaudio? cc @polinaeterna
> Add writer_batch_size for ArrowBasedBuilder by @lhoestq in https://github.com/huggingface/datasets/pull/5565
> - allows specifying the row group / record batch size when you download_and_prepare() a dataset
Needed for https://github.com/huggingface/datasets-server/pull/833 I think; cc @lhoestq
> Allow direct cast from binary to Audio/Image by @mariosasko in https://github.com/huggingface/datasets/pull/5644
Should we adapt the code in https://github.com/huggingface/datasets-server/blob/main/services/worker/src/worker/features.py due to that?
> Support streaming datasets with numpy.load by @albertvillanova in https://github.com/huggingface/datasets/pull/5626
should we refresh some datasets after that?
|
architecture
|
update datasets to see todo see discussions below use writer batch size for arrowbasedbuilder use direct cast from binary to audio image refresh datasets that use numpy load useful changes for the datasets server please complete if there are more huggingface datasets use soundfile for decoding instead of torchaudio by polinaeterna in this allows to not have dependencies on pytorch to decode audio files this was possible with soundfile which bundles libsndfile binaries at a recent version with support should we remove the dependency to torch and torchaudio cc polinaeterna add writer batch size for arrowbasedbuilder by lhoestq in allow to specofy the row group record batch size when you download and prepare a dataset needed for i think cc lhoestq allow direct cast from binary to audio image by mariosasko in should we adapt the code in due to that support streaming datasets with numpy load by albertvillanova in should we refresh some datasets after that
| 1
|
96,954
| 3,976,163,456
|
IssuesEvent
|
2016-05-05 10:10:42
|
jfmengels/eslint-plugin-lodash-fp
|
https://api.github.com/repos/jfmengels/eslint-plugin-lodash-fp
|
closed
|
Warn when iteratee method uses too many arguments
|
assigned enhancement priority
|
Rule name to be determined.
In Lodash/fp, many methods call their iteratee with a capped arity, often one argument fewer than in vanilla Lodash. It would be nice to warn the user of this.
Example:
```js
// Vanilla
_.map([1, 2, 3], function(value, index) {
return value + index; // OK, index will be 0, 1, 2 in order
});
// FP
_.map(function(value, index) {
return value + index; // Not OK, as index is never passed (always `undefined`)
}, [1, 2, 3]);
```
|
1.0
|
Warn when iteratee method uses too many arguments - Rule name to be determined.
In Lodash/fp, many methods call their iteratee with a capped arity, often one argument fewer than in vanilla Lodash. It would be nice to warn the user of this.
Example:
```js
// Vanilla
_.map([1, 2, 3], function(value, index) {
return value + index; // OK, index will be 0, 1, 2 in order
});
// FP
_.map(function(value, index) {
return value + index; // Not OK, as index is never passed (always `undefined`)
}, [1, 2, 3]);
```
|
non_architecture
|
warn when iteratee method uses too many arguments rule name to be found in lodash fp many methods with an iteratee have their iteratee called with a fixed arity often one less than what they re used to in vanilla lodash it would be nice to warn the user of this example js vanilla map function value index return value index ok index will be in order fp map function value index return value index not ok as index is never passed always undefined
| 0
|
10,983
| 27,663,496,126
|
IssuesEvent
|
2023-03-12 19:46:32
|
DependencyTrack/hyades
|
https://api.github.com/repos/DependencyTrack/hyades
|
closed
|
Proposal: Use PURL, CPE, SWID Tag ID as key for Kafka Messages
|
proposal 🤔 domain/vuln-analysis 🕵 architecture 🔮
|
## Current Implementation
For every component within a BOM uploaded to the API server, the API server will publish an event to the `EventNew` Kafka topic.
Those events currently have the form:
| Key | Value
|:---|:-----|
| Project UUID | Component Details |
<details>
<summary><strong>Example</strong></summary>
```
ebb10845-8f95-4194-85c0-0ff6c5ab3cdf
```
```json
{
"uuid": "445dc140-5638-4eb7-9409-53204d7f3cae",
"group": "xerces",
"name": "xercesImpl",
"version": "2.12.2",
"purl": "pkg:maven/xerces/xercesImpl@2.12.2?type=jar",
"cpe": null
}
```
</details>
The Kafka producer used by the API server utilizes the [default partitioner](https://docs.confluent.io/platform/current/clients/producer.html#concepts), meaning that events with the same key will always end up in the same topic partition.
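As a rough illustration of that guarantee, here is a minimal Python sketch of key-based partitioning (Kafka's default partitioner actually uses murmur2; `zlib.crc32` stands in as a deterministic hash, and the function name is an assumption for illustration):
```python
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    # Same key -> same hash -> same partition, which is why every event
    # keyed by one project UUID lands in a single partition.
    return zlib.crc32(key) % num_partitions

# All 200 component events for one project map to the same partition:
assert len({partition_for(b"ebb10845-8f95-4194-85c0-0ff6c5ab3cdf", 3)
            for _ in range(200)}) == 1
```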
Kafka streams applications (read: consumer groups) in the analyzer application consume from the `EventNew` topic. At the time of writing, those applications are:
| Application Name | Class |
|:------|:-------|
| OSSConsumer | `org.acme.consumer.OSSIndexBatcher` |
| SnykAnalyzer | `org.acme.consumer.SnykAnalyzer` |
Quoting the [streams architecture documentation](https://docs.confluent.io/platform/current/streams/architecture.html#stream-partitions-and-tasks):
> Kafka Streams creates a fixed number of stream tasks based on the input stream partitions for the application, *with each task being assigned a list of partitions from the input streams* (i.e., Kafka topics). **The assignment of stream partitions to stream tasks never changes**, hence the stream task is a fixed unit of parallelism of the application.
Applied to our current implementation, this means that events for the same project UUID will always end up being processed by the same streams task (which maps to a JVM thread) within a streams application.
<details>
<summary><strong>Example</strong></summary>
* The `EventNew` topic is created with 3 partitions
* A BOM with 200 components is uploaded to the DT project with UUID `ebb10845-8f95-4194-85c0-0ff6c5ab3cdf`
* The API server sends 200 messages with key `ebb10845-8f95-4194-85c0-0ff6c5ab3cdf` to the `EventNew` topic
* The default partitioner assigns all 200 events to partition `1`
* The streams applications *OSSConsumer* and *SnykAnalyzer* are started with 3 threads each
* Thread `1` of each streams application is assigned to partition `1` of the `EventNew` topic
* Both thread `1`s process the 200 events, while threads `0` and `2` of both Streams applications remain idle
</details>
Analyzers perform lookups with external services (OSS Index, Snyk, VulnDB APIs), unless they experience a cache hit for the component at hand. They will emit messages of the following form to the `vuln-result` topic:
| Key | Value |
|:----|:------|
| Component UUID | Vulnerability Details (may be `null` when no vuln has been found) |
<details>
<summary><strong>Example</strong></summary>
```
445dc140-5638-4eb7-9409-53204d7f3cae
```
```json
{
"vulnerability": {
"vulnId": "CVE-2017-10355",
"source": "NVD",
"description": "sonatype-2017-0348 - xerces:xercesImpl - Denial of Service (DoS)\n\nThe software contains multiple threads or executable segments that are waiting for each other to release a necessary lock, resulting in deadlock.",
"references": "* [https://ossindex.sonatype.org/vulnerability/sonatype-2017-0348?component-type=maven&component-name=xerces%2FxercesImpl&utm_source=unknown&utm_medium=integration&utm_content=Alpine](https://ossindex.sonatype.org/vulnerability/sonatype-2017-0348?component-type=maven&component-name=xerces%2FxercesImpl&utm_source=unknown&utm_medium=integration&utm_content=Alpine)\n* [https://blogs.securiteam.com/index.php/archives/3271](https://blogs.securiteam.com/index.php/archives/3271)",
"cwes": [
{
"cweId": 833,
"name": "Deadlock"
}
],
"severity": "MEDIUM",
"affectedProjectCount": 0
},
"identity": "OSSINDEX_ANALYZER"
}
```
</details>
Using the component UUID from the message key, the API server can easily correlate the message with a specific component in the portfolio.
#### Benefits
* ✅ Each event in `EventNew` represents a component in DT and thus a nicely encapsulated unit of work for the analyzer
* ✅ Easy correlation of `vuln-result` events to components in the portfolio
#### Drawbacks
1. ⛔ Projects with many components can clog a topic partition, keeping one streams task super busy while others run idle
* Parallelization of the analysis work happens at the project level rather than at the component level
2. ⛔ DT can consider components to be different despite them having identical PURLs or CPEs. OSS Index, Snyk, etc. don't do that, so triggering a scan for each DT component will result in many redundant calls
3. ⛔ Because the same PURL or CPE may be analyzed in multiple stream tasks at once, there will be race conditions for cache lookups, again causing redundant calls to external services
4. ⛔ The cache lookup issues and redundant calls mentioned above contribute to faster exceeding of rate limits imposed by the external services
5. ⛔ If we want to support ad-hoc scanning of components or BOMs for which no project in DT exists, we can't rely on the project or component UUID to always be available for message keys
> **Note**
> Points (2) - (4) exist in vanilla DT, too.
## Proposed Solution
Both of the following variants are based on option 2 in Alioune's comment here: https://github.com/DependencyTrack/dependency-track/issues/2023#issuecomment-1280640671 (and I *think* it is also what he was referring to in https://github.com/syalioune/DTKafkaPOC/pull/1#discussion_r985786099):
> * Using a combination of object pools and sharding based on queues (in-memory or not): having a pool of analyzer objects with the proper logic fetching components to process from a dedicated queue.
> * Upstream processes have to be updated to always send (or it can be wrapped) identical PURLs to the same queue (using some kind of hash-based partitioning).
>
> This way, there would be no need for synchronization between the analyzer objects, as identical PURLs would always be processed by the same analyzer
### Variant 1: Only work with PURL / CPE / SWID Tag ID
Instead of using the project UUID as message key, we use the identifiers used for vulnerability analysis:
* PURL
* CPE
* SWID Tag ID (at a later point in time)
* etc.
Further, the entire analysis process will happen without any relations to component identities in DT. There will be no IDs or UUIDs of components or projects transmitted.
> **Note**
> There is a complication regarding PURLs, in that they can contain qualifiers and sub-paths.
> For example, `pkg:maven/com.acme/acme-lib@1.2.3` and `pkg:maven/com.acme/acme-lib@1.2.3?type=jar` are *technically* different, but describe the same component, and are treated as equal by all (currently known) analyzers.
>
> We could implement a [custom Kafka partitioner](https://www.clairvoyant.ai/blog/writing-custom-partitioner-for-apache-kafka) that would ensure that `pkg:maven/com.acme/acme-lib@1.2.3` and `pkg:maven/com.acme/acme-lib@1.2.3?type=jar` end up in the same partition. The partitioner would treat PURLs as equal, as long as their coordinates (type, namespace, name, version) are the same.
>
> A similar strategy will be necessary for CPEs, too.
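As a sketch of what such a coordinate-based partitioner could derive its key from, assuming a hypothetical helper (a real implementation would parse PURLs properly instead of string-splitting):
```python
def purl_partition_key(purl: str) -> str:
    # Drop qualifiers ("?type=jar") and sub-paths ("#...") so that only the
    # coordinates (type, namespace, name, version) contribute to the key.
    return purl.split("?", 1)[0].split("#", 1)[0]

assert (purl_partition_key("pkg:maven/com.acme/acme-lib@1.2.3?type=jar")
        == purl_partition_key("pkg:maven/com.acme/acme-lib@1.2.3"))
```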
| Key | Value |
|:---|:-----|
| PURL / CPE / SWID Tag ID | Nothing / Additional Details |
<details>
<summary><strong>Example</strong></summary>
```
pkg:maven/com.acme/acme-lib@1.2.3?type=jar
```
```json
{}
```
---
```
cpe:2.3:a:apache:xerces2_java:*:*:*:*:*:*:*:*
```
```json
{}
```
</details>
Results emitted by the analyzers would then have the form of:
| Key | Value |
|:----|:------|
|PURL / CPE / SWID Tag ID | Vulnerability Details (may be `null` when no vuln has been found) |
<details>
<summary><strong>Example</strong></summary>
```
pkg:maven/com.acme/acme-lib@1.2.3?type=jar
```
```json
{
"vulnerability": {
"vulnId": "CVE-2017-10355",
"source": "NVD",
"description": "sonatype-2017-0348 - xerces:xercesImpl - Denial of Service (DoS)\n\nThe software contains multiple threads or executable segments that are waiting for each other to release a necessary lock, resulting in deadlock.",
"references": "* [https://ossindex.sonatype.org/vulnerability/sonatype-2017-0348?component-type=maven&component-name=xerces%2FxercesImpl&utm_source=unknown&utm_medium=integration&utm_content=Alpine](https://ossindex.sonatype.org/vulnerability/sonatype-2017-0348?component-type=maven&component-name=xerces%2FxercesImpl&utm_source=unknown&utm_medium=integration&utm_content=Alpine)\n* [https://blogs.securiteam.com/index.php/archives/3271](https://blogs.securiteam.com/index.php/archives/3271)",
"cwes": [
{
"cweId": 833,
"name": "Deadlock"
}
],
"severity": "MEDIUM",
"affectedProjectCount": 0
},
"identity": "OSSINDEX_ANALYZER"
}
```
</details>
#### Benefits
1. ✅ Kafka streams guarantees us that the same PURL will be processed by the same streams task, solving the problem of race conditions in cache lookups
2. ✅ Consequently, processing the same PURL multiple times is not an issue, because caching is more effective
3. ✅ The API server can perform best-effort de-duplication of those identifiers before sending them off to Kafka. That way, a BOM upload to the same project should never result in duplicate PURL / CPE events. This can contribute to less overall load on the system.
4. ✅ Streams tasks additionally get a chance to perform further de-duplication, so they don't process the same PURL multiple times within a window / batch. Duplicate PURL events can simply be discarded.
5. ✅ Simplification of the recurring analysis of the entire portfolio: Instead of iterating over all individual components every X hours, iterate over all *unique* PURLs, CPEs, SWID Tag IDs in the entire portfolio and send them to Kafka
* This has a potential to *drastically* reduce the effort and time needed to analyze the entire portfolio
6. ✅ Vulnerability analysis results will be applied to all affected components in the portfolio in one go, whereas the current approach only applied them to a single component at a time
#### Drawbacks
1. ⛔ More responsibility on the API server: Messages from the `vuln-result` topic will no longer be tied to a specific project UUID or component UUID
* Instead, the API server will have to apply the results to **all** components matching the given PURL / CPE
* This can be an expensive operation, but can be optimized with proper indexes and efficient use of transactions. Should be tested though
2. ⛔ Batching of `EventNew` events (as required for OSS Index) will be harder (https://github.com/mehab/DTKafkaPOC/issues/50#issuecomment-1282636777)
3. ⛔ May not be efficient for use cases where the system is only exposed to little load, or BOMs are uploaded only sporadically (https://github.com/mehab/DTKafkaPOC/issues/50#issuecomment-1283765744)
### Variant 2: Only change the key for `EventNew` messages
As a compromise between the current solution and variant 1 as described above: Still set the message key to PURL / CPE, but include the component UUID in the message body.
<details>
<summary><strong>Example</strong></summary>
```
pkg:maven/com.acme/acme-lib@1.2.3?type=jar
```
```json
{
"uuid": "445dc140-5638-4eb7-9409-53204d7f3cae",
"group": "xerces",
"name": "xercesImpl",
"version": "2.12.2"
}
```
---
```
cpe:2.3:a:apache:xerces2_java:*:*:*:*:*:*:*:*
```
```json
{
"uuid": "445dc140-5638-4eb7-9409-53204d7f3cae",
"group": "xerces",
"name": "xercesImpl",
"version": "2.12.2"
}
```
</details>
#### Benefits
TBD
#### Drawbacks
TBD
|
1.0
|
Proposal: Use PURL, CPE, SWID Tag ID as key for Kafka Messages - ## Current Implementation
For every component within a BOM uploaded to the API server, the API server will publish an event to the `EventNew` Kafka topic.
Those events currently have the form:
| Key | Value
|:---|:-----|
| Project UUID | Component Details |
<details>
<summary><strong>Example</strong></summary>
```
ebb10845-8f95-4194-85c0-0ff6c5ab3cdf
```
```json
{
"uuid": "445dc140-5638-4eb7-9409-53204d7f3cae",
"group": "xerces",
"name": "xercesImpl",
"version": "2.12.2",
"purl": "pkg:maven/xerces/xercesImpl@2.12.2?type=jar",
"cpe": null
}
```
</details>
The Kafka producer used by the API server utilizes the [default partitioner](https://docs.confluent.io/platform/current/clients/producer.html#concepts), meaning that events with the same key will always end up in the same topic partition.
Kafka streams applications (read: consumer groups) in the analyzer application consume from the `EventNew` topic. At the time of writing, those applications are:
| Application Name | Class |
|:------|:-------|
| OSSConsumer | `org.acme.consumer.OSSIndexBatcher` |
| SnykAnalyzer | `org.acme.consumer.SnykAnalyzer` |
Quoting the [streams architecture documentation](https://docs.confluent.io/platform/current/streams/architecture.html#stream-partitions-and-tasks):
> Kafka Streams creates a fixed number of stream tasks based on the input stream partitions for the application, *with each task being assigned a list of partitions from the input streams* (i.e., Kafka topics). **The assignment of stream partitions to stream tasks never changes**, hence the stream task is a fixed unit of parallelism of the application.
Applied to our current implementation, this means that events for the same project UUID will always end up being processed by the same streams task (which maps to a JVM thread) within a streams application.
<details>
<summary><strong>Example</strong></summary>
* The `EventNew` topic is created with 3 partitions
* A BOM with 200 components is uploaded to the DT project with UUID `ebb10845-8f95-4194-85c0-0ff6c5ab3cdf`
* The API server sends 200 messages with key `ebb10845-8f95-4194-85c0-0ff6c5ab3cdf` to the `EventNew` topic
* The default partitioner assigns all 200 events to partition `1`
* The streams applications *OSSConsumer* and *SnykAnalyzer* are started with 3 threads each
* Thread `1` of each streams application is assigned to partition `1` of the `EventNew` topic
* Both thread `1`s process the 200 events, while threads `0` and `2` of both Streams applications remain idle
</details>
Analyzers perform lookups with external services (OSS Index, Snyk, VulnDB APIs), unless they experience a cache hit for the component at hand. They will emit messages of the following form to the `vuln-result` topic:
| Key | Value |
|:----|:------|
| Component UUID | Vulnerability Details (may be `null` when no vuln has been found) |
<details>
<summary><strong>Example</strong></summary>
```
445dc140-5638-4eb7-9409-53204d7f3cae
```
```json
{
"vulnerability": {
"vulnId": "CVE-2017-10355",
"source": "NVD",
"description": "sonatype-2017-0348 - xerces:xercesImpl - Denial of Service (DoS)\n\nThe software contains multiple threads or executable segments that are waiting for each other to release a necessary lock, resulting in deadlock.",
"references": "* [https://ossindex.sonatype.org/vulnerability/sonatype-2017-0348?component-type=maven&component-name=xerces%2FxercesImpl&utm_source=unknown&utm_medium=integration&utm_content=Alpine](https://ossindex.sonatype.org/vulnerability/sonatype-2017-0348?component-type=maven&component-name=xerces%2FxercesImpl&utm_source=unknown&utm_medium=integration&utm_content=Alpine)\n* [https://blogs.securiteam.com/index.php/archives/3271](https://blogs.securiteam.com/index.php/archives/3271)",
"cwes": [
{
"cweId": 833,
"name": "Deadlock"
}
],
"severity": "MEDIUM",
"affectedProjectCount": 0
},
"identity": "OSSINDEX_ANALYZER"
}
```
</details>
Using the component UUID from the message key, the API server can easily correlate the message with a specific component in the portfolio.
#### Benefits
* ✅ Each event in `EventNew` represents a component in DT and thus a nicely encapsulated unit of work for the analyzer
* ✅ Easy correlation of `vuln-result` events to components in the portfolio
#### Drawbacks
1. ⛔ Projects with many components can clog a topic partition, keeping one streams task super busy while others run idle
* Parallelization of the analysis work happens at the project level rather than at the component level
2. ⛔ DT can consider components to be different despite them having identical PURLs or CPEs. OSS Index, Snyk, etc. don't do that, so triggering a scan for each DT component will result in many redundant calls
3. ⛔ Because the same PURL or CPE may be analyzed in multiple stream tasks at once, there will be race conditions for cache lookups, again causing redundant calls to external services
4. ⛔ The cache lookup issues and redundant calls mentioned above contribute to faster exceeding of rate limits imposed by the external services
5. ⛔ If we want to support ad-hoc scanning of components or BOMs for which no project in DT exists, we can't rely on the project or component UUID to always be available for message keys
> **Note**
> Points (2) - (4) exist in vanilla DT, too.
## Proposed Solution
Both of the following variants are based on option 2 in Alioune's comment here: https://github.com/DependencyTrack/dependency-track/issues/2023#issuecomment-1280640671 (and I *think* it is also what he was referring to in https://github.com/syalioune/DTKafkaPOC/pull/1#discussion_r985786099):
> * Using a combination of object pools and sharding based on queues (in-memory or not): having a pool of analyzer objects with the proper logic fetching components to process from a dedicated queue.
> * Upstream processes have to be updated to always send (or it can be wrapped) identical PURLs to the same queue (using some kind of hash-based partitioning).
>
> This way, there would be no need for synchronization between the analyzer objects, as identical PURLs would always be processed by the same analyzer
### Variant 1: Only work with PURL / CPE / SWID Tag ID
Instead of using the project UUID as message key, we use the identifiers used for vulnerability analysis:
* PURL
* CPE
* SWID Tag ID (at a later point in time)
* etc.
Further, the entire analysis process will happen without any relations to component identities in DT. There will be no IDs or UUIDs of components or projects transmitted.
> **Note**
> There is a complication regarding PURLs, in that they can contain qualifiers and sub-paths.
> For example, `pkg:maven/com.acme/acme-lib@1.2.3` and `pkg:maven/com.acme/acme-lib@1.2.3?type=jar` are *technically* different, but describe the same component, and are treated as equal by all (currently known) analyzers.
>
> We could implement a [custom Kafka partitioner](https://www.clairvoyant.ai/blog/writing-custom-partitioner-for-apache-kafka) that would ensure that `pkg:maven/com.acme/acme-lib@1.2.3` and `pkg:maven/com.acme/acme-lib@1.2.3?type=jar` end up in the same partition. The partitioner would treat PURLs as equal, as long as their coordinates (type, namespace, name, version) are the same.
>
> A similar strategy will be necessary for CPEs, too.
| Key | Value |
|:---|:-----|
| PURL / CPE / SWID Tag ID | Nothing / Additional Details |
<details>
<summary><strong>Example</strong></summary>
```
pkg:maven/com.acme/acme-lib@1.2.3?type=jar
```
```json
{}
```
---
```
cpe:2.3:a:apache:xerces2_java:*:*:*:*:*:*:*:*
```
```json
{}
```
</details>
Results emitted by the analyzers would then have the form of:
| Key | Value |
|:----|:------|
|PURL / CPE / SWID Tag ID | Vulnerability Details (may be `null` when no vuln has been found) |
<details>
<summary><strong>Example</strong></summary>
```
pkg:maven/com.acme/acme-lib@1.2.3?type=jar
```
```json
{
"vulnerability": {
"vulnId": "CVE-2017-10355",
"source": "NVD",
"description": "sonatype-2017-0348 - xerces:xercesImpl - Denial of Service (DoS)\n\nThe software contains multiple threads or executable segments that are waiting for each other to release a necessary lock, resulting in deadlock.",
"references": "* [https://ossindex.sonatype.org/vulnerability/sonatype-2017-0348?component-type=maven&component-name=xerces%2FxercesImpl&utm_source=unknown&utm_medium=integration&utm_content=Alpine](https://ossindex.sonatype.org/vulnerability/sonatype-2017-0348?component-type=maven&component-name=xerces%2FxercesImpl&utm_source=unknown&utm_medium=integration&utm_content=Alpine)\n* [https://blogs.securiteam.com/index.php/archives/3271](https://blogs.securiteam.com/index.php/archives/3271)",
"cwes": [
{
"cweId": 833,
"name": "Deadlock"
}
],
"severity": "MEDIUM",
"affectedProjectCount": 0
},
"identity": "OSSINDEX_ANALYZER"
}
```
</details>
#### Benefits
1. ✅ Kafka streams guarantees us that the same PURL will be processed by the same streams task, solving the problem of race conditions in cache lookups
2. ✅ Consequently, processing the same PURL multiple times is not an issue, because caching is more effective
3. ✅ The API server can perform best-effort de-duplication of those identifiers before sending them off to Kafka. That way, a BOM upload to the same project should never result in duplicate PURL / CPE events. This can contribute to less overall load on the system.
4. ✅ Streams tasks additionally get a chance to perform further de-duplication, so they don't process the same PURL multiple times within a window / batch. Duplicate PURL events can simply be discarded.
5. ✅ Simplification of the recurring analysis of the entire portfolio: Instead of iterating over all individual components every X hours, iterate over all *unique* PURLs, CPEs, SWID Tag IDs in the entire portfolio and send them to Kafka
* This has a potential to *drastically* reduce the effort and time needed to analyze the entire portfolio
6. ✅ Vulnerability analysis results will be applied to all affected components in the portfolio in one go, whereas the current approach only applied them to a single component at a time
#### Drawbacks
1. ⛔ More responsibility on the API server: Messages from the `vuln-result` topic will no longer be tied to a specific project UUID or component UUID
* Instead, the API server will have to apply the results to **all** components matching the given PURL / CPE
* This can be an expensive operation, but can be optimized with proper indexes and efficient use of transactions. Should be tested though
2. ⛔ Batching of `EventNew` events (as required for OSS Index) will be harder (https://github.com/mehab/DTKafkaPOC/issues/50#issuecomment-1282636777)
3. ⛔ May not be efficient for use cases where the system is only exposed to little load, or BOMs are uploaded only sporadically (https://github.com/mehab/DTKafkaPOC/issues/50#issuecomment-1283765744)
### Variant 2: Only change the key for `EventNew` messages
As a compromise between the current solution and variant 1 as described above: Still set the message key to PURL / CPE, but include the component UUID in the message body.
<details>
<summary><strong>Example</strong></summary>
```
pkg:maven/com.acme/acme-lib@1.2.3?type=jar
```
```json
{
"uuid": "445dc140-5638-4eb7-9409-53204d7f3cae",
"group": "xerces",
"name": "xercesImpl",
"version": "2.12.2"
}
```
---
```
cpe:2.3:a:apache:xerces2_java:*:*:*:*:*:*:*:*
```
```json
{
"uuid": "445dc140-5638-4eb7-9409-53204d7f3cae",
"group": "xerces",
"name": "xercesImpl",
"version": "2.12.2"
}
```
</details>
#### Benefits
TBD
#### Drawbacks
TBD
|
architecture
|
proposal use purl cpe swid tag id as key for kafka messages current implementation for every component within a bom uploaded to the api server the api server will publish an event to the eventnew kafka topic those events currently have the form key value project uuid component details example json uuid group xerces name xercesimpl version purl pkg maven xerces xercesimpl type jar cpe null the kafka producer used by the api server utilizes the meaning that events with the same key will always end up in the same topic partition kafka streams applications read consumer groups in the analyzer application consume from the eventnew topic at the time of writing those applications are application name class ossconsumer org acme consumer ossindexbatcher snykanalyzer org acme consumer snykanalyzer quoting the kafka streams creates a fixed number of stream tasks based on the input stream partitions for the application with each task being assigned a list of partitions from the input streams i e kafka topics the assignment of stream partitions to stream tasks never changes hence the stream task is a fixed unit of parallelism of the application applying this to our current implementation this means that events for the same project uuid will always end up being processed by the same streams task maps to a jvm thread within a streams application example the eventnew topic is created with partitions a bom with components is uploaded to the dt project with uuid the api server sends messages with key to the eventnew topic the default partitioner assigns all events to partition the streams applications ossconsumer and snykanalyzer are started with threads each thread of each streams application is assigned to partition of the eventnew topic both thread s process the events while threads and of both streams applications remain idle analyzers perform lookups with external services oss index snyk vulndb apis unless they experience a cache hit for the component at hand they will emit messages of the following form to the vuln result topic key value component uuid vulnerability details may be null when no vuln has been found example json vulnerability vulnid cve source nvd description sonatype xerces xercesimpl denial of service dos n nthe software contains multiple threads or executable segments that are waiting for each other to release a necessary lock resulting in deadlock references cwes cweid name deadlock severity medium affectedprojectcount identity ossindex analyzer using the component uuid from the message key the api server can easily correlate the message with a specific component in the portfolio benefits ✅ each event in eventnew represents a component in dt and thus a nicely encapsulated unit of work for the analyzer ✅ easy correlation of vuln result events to components in the portfolio drawbacks ⛔ projects with many components can clog a topic partition keeping one streams task super busy while others run idle benefits of parallelizing the analysis work is done at the project rather than at the component level ⛔ dt can consider components to be different despite them having idential purls or cpes oss index snyk etc don t do that so triggering a scan for each dt component will result in many redundant calls ⛔ because the same purl or cpe may be analyzed in multiple stream tasks at once there will be race conditions for cache lookups again causing redundant calls to external services ⛔ the cache lookup issues and redundant calls mentioned above contribute to faster exceeding of rate limits emposed 
by the external services ⛔ if we want to support ad hoc scanning of components or boms for which no project in dt exists we can t rely on the project or component uuid to always be available for message keys note points exist in vanilla dt too proposed solution both of the following variants are based on option in alioune s comment here and i think it is also what he was referring to in using a combination of object pools and sharding based on queues in memory or not having a pool of analyzer objects with the proper logic fetching components to process from a dedicated queue upstream process have to be updated to always send or it can be wrapped identical purls to the same queue using some kind of hashed based partitioning this way there would be no need for synchronization between the analyzer objects as identitical purls would always be processed by the same analyzer variant only work with purl cpe swid tag id instead of using the project uuid as message key we use the identifiers used for vulnerability analysis purl cpe swid tag id at a later point in time etc further the entire analysis process will happen without any relations to component identities in dt there will be no ids or uuids of components or projects transmitted note there is a complication regarding purls in that they can contain qualifiers and sub paths for example pkg maven com acme acme lib and pkg maven com acme acme lib type jar are technically different but describe the same component and are treated as equal by all currently known analyzers we could implement a that would ensure that pkg maven com acme acme lib and pkg maven com acme acme lib type jar end up in the same partition the partitioner would treat purls as equal as long as their coordinates type namespace name version are the same a similar strategy will be necessary for cpes too key value purl cpe swid tag id nothing additional details example pkg maven com acme acme lib type jar json cpe a apache java json results emitted by the analyzers would then have the form of key value purl cpe swid tag id vulnerability details may be null when no vuln has been found example pkg maven com acme acme lib type jar json vulnerability vulnid cve source nvd description sonatype xerces xercesimpl denial of service dos n nthe software contains multiple threads or executable segments that are waiting for each other to release a necessary lock resulting in deadlock references cwes cweid name deadlock severity medium affectedprojectcount identity ossindex analyzer benefits ✅ kafka streams guarantees us that the same purl will be processed by the same streams task solving the problem of race conditions in cache lookups ✅ consequently processing the same purl multiple times is not an issue because caching is more effective ✅ the api server can perform best effort de duplication of those identifiers before sending them off to kafka that way a bom upload to the same project should never result in duplicate purl cpe events this can contribute to less overall load on the system ✅ streams tasks additionally get a chance to perform further de duplication so they don t process the same purl multiple times within a window batch duplicate purl events can simply be discarded ✅ simplification of the recurring analysis of the entire portfolio instead of iterating over all individual components every x hours iterate over all unique purls cpes swid tag ids in the entire portfolio and send them to kafka this has a potential to drastically reduce the effort and time needed to analyze the 
entire portfolio ✅ vulnerability analysis results will be applied to all affected components in the portfolio in one go whereas the current approach only applied them to a single component at a time drawbacks ⛔ more responsibility on the api server messages from the vuln result topic will no longer be tied to a specific project uuid or component uuid instead the api server will have to apply the results to all components matching the given purl cpe this can be an expensive operation but can be optimized with proper indexes and efficient use of transactions should be tested though ⛔ batching of eventnew events as required for oss index will be harder ⛔ may not be efficient for use cases where the system is only exposed to little load or boms are uploaded only sporadically possible solution only change the key for eventnew messages as a compromise between the current solution and variant as described above still set the message key to purl cpe but include the component uuid in the message body example pkg maven com acme acme lib type jar json uuid group xerces name xercesimpl version cpe a apache java json uuid group xerces name xercesimpl version benefits tbd drawbacks tbd
| 1
|
4,600
| 11,415,991,206
|
IssuesEvent
|
2020-02-02 14:38:17
|
elderanakain/daily-dish
|
https://api.github.com/repos/elderanakain/daily-dish
|
opened
|
Kotlin Native integration
|
architecture
|
- [ ] Switch to native libraries
- [ ] Integrate KMP structure
- [ ] Convert business logic to Kotlin Native
|
1.0
|
Kotlin Native integration - - [ ] Switch to native libraries
- [ ] Integrate KMP structure
- [ ] Convert business logic to Kotlin Native
|
architecture
|
kotlin native integration switch to native libraries integrate kmp structure convert business logic to kotlin native
| 1
|
6,091
| 13,675,382,951
|
IssuesEvent
|
2020-09-29 12:36:50
|
craftcms/cms
|
https://api.github.com/repos/craftcms/cms
|
opened
|
Include fields’ UIDs in their content column names
|
enhancement system architecture :building_construction:
|
There are a couple of cases where custom fields’ `content` table columns could get dropped or renamed unexpectedly. For example, if two global fields with the same handle are added via the project config (#6536), only one column will be created for both, and when one of those fields is deleted, the column will be dropped with it, resulting in potential data loss and SQL errors, because the remaining field still expects its column to be present.
We can fix this in Craft 4 by including the first ~8 characters of fields’ UIDs in their `content` column names. For example, instead of `field_body`, the column could be named `field_body_08f8ec90`. That way Craft can be 100% sure it’s deleting/renaming the correct column.
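A minimal Python sketch of the proposed naming scheme (the helper name is hypothetical, not Craft's actual API):
```python
def content_column_name(handle: str, uid: str) -> str:
    # Suffix the column with the first 8 characters of the field's UID so
    # two fields that share a handle still get distinct columns.
    return f"field_{handle}_{uid[:8]}"

print(content_column_name("body", "08f8ec90-1234-4bcd-9ef0-123456789abc"))
# -> field_body_08f8ec90
```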
|
1.0
|
Include fields’ UIDs in their content column names - There are a couple of cases where custom fields’ `content` table columns could get dropped or renamed unexpectedly. For example, if two global fields with the same handle are added via the project config (#6536), only one column will be created for both, and when one of those fields is deleted, the column will be dropped with it, resulting in potential data loss and SQL errors, because the remaining field still expects its column to be present.
We can fix this in Craft 4 by including the first ~8 characters of fields’ UIDs in their `content` column names. For example, instead of `field_body`, the column could be named `field_body_08f8ec90`. That way Craft can be 100% sure it’s deleting/renaming the correct column.
|
architecture
|
include fields’ uids in their content column names there are a couple cases where custom fields’ content table columns could get dropped or renamed unexpectedly for example if two global fields with the same handle are added via the project config only one column will be created for both and when one of those fields are deleted the column will be dropped with it resulting in potential data loss and sql errors due to the remaining field still expecting its column to be present we can fix this in craft by including the first characters of fields’ uids in their content column names for example instead of field body the column could be named field body that way craft can be sure it’s deleting renaming the correct column
| 1
|
320,131
| 27,420,213,071
|
IssuesEvent
|
2023-03-01 16:13:43
|
unifyai/ivy
|
https://api.github.com/repos/unifyai/ivy
|
opened
|
Fix array.test_array__abs__
|
Sub Task Failing Test
|
| | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="null" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|
1.0
|
Fix array.test_array__abs__ - | | |
|---|---|
|tensorflow|<a href="https://github.com/unifyai/ivy/actions/" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|torch|<a href="https://github.com/unifyai/ivy/actions/" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|numpy|<a href="https://github.com/unifyai/ivy/actions/" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-success-success></a>
|jax|<a href="null" rel="noopener noreferrer" target="_blank"><img src=https://img.shields.io/badge/-failure-red></a>
|
non_architecture
|
fix array test array abs tensorflow img src torch img src numpy img src jax img src
| 0
|
795,203
| 28,065,659,968
|
IssuesEvent
|
2023-03-29 15:14:45
|
googleapis/repo-automation-bots
|
https://api.github.com/repos/googleapis/repo-automation-bots
|
opened
|
[auto-approve] README should include example(s) of process usage
|
type: feature request priority: p3
|
There are examples of the previous configuration style. There are also lists of supported parameters in the new style, but no concrete examples.
It would be helpful for users to see what a YAML config might look like.
https://github.com/googleapis/repo-automation-bots/tree/main/packages/auto-approve
|
1.0
|
[auto-approve] README should include example(s) of process usage - There are examples of the previous configuration style. There are also lists of supported parameters in the new style, but no concrete examples.
It would be helpful for users to see what a YAML config might look like.
https://github.com/googleapis/repo-automation-bots/tree/main/packages/auto-approve
|
non_architecture
|
readme should include example s of process usage there are examples of the previous configuration style there are also lists of supported parameters in the new style but no concrete examples it would be helpful for users to see what a yml might look like
| 0
|
404
| 3,287,674,038
|
IssuesEvent
|
2015-10-29 11:39:05
|
palazzem/wheelie
|
https://api.github.com/repos/palazzem/wheelie
|
opened
|
provide a mechanism that allows Tasks' dependencies to be modified at runtime
|
architecture/design
|
For instance, I may add a ``Task`` to my project that should run before a particular ``Task`` available in my recipe.
|
1.0
|
provide a mechanism that allows Tasks' dependencies to be modified at runtime - For instance, I may add a ``Task`` to my project that should run before a particular ``Task`` available in my recipe.
|
architecture
|
provide a mechanism that allows tasks dependencies to be modified at runtime for instance i may add in my project a task that should be run before a particular task available in my recipe
| 1
|
7,553
| 18,236,945,312
|
IssuesEvent
|
2021-10-01 08:11:24
|
kubewarden/kubewarden.io
|
https://api.github.com/repos/kubewarden/kubewarden.io
|
closed
|
Add helm crds chart to installation steps
|
new-architecture
|
Modify installation instructions to include the new helm crds chart
|
1.0
|
Add helm crds chart to installation steps - Modify installation instructions to include the new helm crds chart
|
architecture
|
add helm crds chart to installation steps modify installation instructions to include the new helm crds chart
| 1
|
62,476
| 17,023,930,529
|
IssuesEvent
|
2021-07-03 04:37:17
|
tomhughes/trac-tickets
|
https://api.github.com/repos/tomhughes/trac-tickets
|
closed
|
Download does not work
|
Component: website Priority: minor Resolution: wontfix Type: defect
|
**[Submitted to the original trac issue database at 4.40pm, Tuesday, 16th February 2016]**
Hello there.
I am trying to download PNG or PDF files, but it does not work. I have Windows, and I tried it at work and at home on different computers. I can't download a thing.
It says that it is rendering, but then it reports a timeout, and nothing happens.
Hope you can help. Keep up the good work.
|
1.0
|
Download does not work - **[Submitted to the original trac issue database at 4.40pm, Tuesday, 16th February 2016]**
Hello there.
I am trying to download PNG or PDF files, but it does not work. I have Windows, and I tried it at work and at home on different computers. I can't download a thing.
It says that it is rendering, but then it reports a timeout, and nothing happens.
Hope you can help. Keep up the good work.
|
non_architecture
|
download does not work hello there i am trying do download png or pdf files but it does not work i have windows and i tried it at work and at home on different computers i can t download a thing it says that it is rendering but then it says timeout and then nothing happens hope you can help keep up the good work
| 0
|
10,913
| 27,458,064,657
|
IssuesEvent
|
2023-03-02 23:24:56
|
Azure/azure-sdk
|
https://api.github.com/repos/Azure/azure-sdk
|
closed
|
Board Review: Easm Mgmt Plane Namespace Review
|
architecture board-review
|
## Contacts and Timeline
* Responsible service team: [riskiq_easm@microsoft.com](mailto:riskiq_easm@microsoft.com)
* Main contacts:
- Nate Falke, [nathanfalke@microsoft.com](mailto:nathanfalke@microsoft.com), @nathanfalke
- Brian Zak, [brianzak@microsoft.com](mailto:brianzak@microsoft.com), @eqtip
- Nisha Bhatia, [nishabhatia@microsoft.com](mailto:nishabhatia@microsoft.com), @nishabhatia-msft
## About the Service
* Link to documentation introducing/describing the service: [External Attack Surface Management](https://learn.microsoft.com/en-us/azure/external-attack-surface-management/)
* Link to the service REST APIs: [2022-04-01-preview/easm.json](https://github.com/Azure/azure-rest-api-specs-pr/blob/RPSaaSMaster/specification/riskiq/data-plane/Microsoft.Easm/preview/2022-04-01-preview/easm.json)
* Link to GitHub issue for previous review sessions, if applicable: https://github.com/Azure/azure-sdk/issues/5062
## About the client library
suggested management client name:
* Name of the client library: easm
|
1.0
|
Board Review: Easm Mgmt Plane Namespace Review -
## Contacts and Timeline
* Responsible service team: [riskiq_easm@microsoft.com](mailto:riskiq_easm@microsoft.com)
* Main contacts:
- Nate Falke, [nathanfalke@microsoft.com](mailto:nathanfalke@microsoft.com), @nathanfalke
- Brian Zak, [brianzak@microsoft.com](mailto:brianzak@microsoft.com), @eqtip
- Nisha Bhatia, [nishabhatia@microsoft.com](mailto:nishabhatia@microsoft.com), @nishabhatia-msft
## About the Service
* Link to documentation introducing/describing the service: [External Attack Surface Management](https://learn.microsoft.com/en-us/azure/external-attack-surface-management/)
* Link to the service REST APIs: [2022-04-01-preview/easm.json](https://github.com/Azure/azure-rest-api-specs-pr/blob/RPSaaSMaster/specification/riskiq/data-plane/Microsoft.Easm/preview/2022-04-01-preview/easm.json)
* Link to GitHub issue for previous review sessions, if applicable: https://github.com/Azure/azure-sdk/issues/5062
## About the client library
suggested management client name:
* Name of the client library: easm
|
architecture
|
board review easm mgmt plane namespace review contacts and timeline responsible service team mailto riskiq easm microsoft com main contacts nate falke mailto nathanfalke microsoft com nathanfalke brian zak mailto brianzak microsoft com eqtip nisha bhatia mailto nishabhatia microsoft com nishabhatia msft about the service link to documentation introducing describing the service link to the service rest apis link to github issue for previous review sessions if applicable about the client library suggested management client name name of the client library easm
| 1
|
1,644
| 3,388,272,538
|
IssuesEvent
|
2015-11-29 05:06:19
|
Automattic/wp-calypso
|
https://api.github.com/repos/Automattic/wp-calypso
|
opened
|
Reader: consider sandboxing iframes used for embeds
|
Reader Security [Type] Enhancement
|
Raised by @blowery
See https://html.spec.whatwg.org/multipage/embedded-content.html#attr-iframe-sandbox
This would limit what the iframes in content can do. Need to check to see how it impacts common oEmbed providers.
|
True
|
Reader: consider sandboxing iframes used for embeds - Raised by @blowery
See https://html.spec.whatwg.org/multipage/embedded-content.html#attr-iframe-sandbox
This would limit what the iframes in content can do. Need to check to see how it impacts common oEmbed providers.
|
non_architecture
|
reader consider sandboxing iframes used for embeds raised by blowery see this would limit what the iframes in content can do need to check to see how it impacts common oembed providers
| 0
|
7,138
| 16,669,882,611
|
IssuesEvent
|
2021-06-07 09:31:17
|
mbecker12/surface-rl-decoder
|
https://api.github.com/repos/mbecker12/surface-rl-decoder
|
closed
|
Implement Transfer Learning for 3D Conv Networks
|
infrastructure network architecture q learning
|
We want to investigate if transfer learning from smaller systems to larger systems is beneficial.
For that, implement the functionality to load a pretrained d=5 3D Conv model and apply it to a model capable of decoding at d=7,9,....
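A hedged PyTorch sketch of one way to do this, using a stand-in architecture and an assumed checkpoint path; only layers whose names and shapes match are transferred, and the rest are trained from scratch:
```python
import torch
import torch.nn as nn

def build_decoder(code_distance: int) -> nn.Module:
    # Stand-in architecture; the real 3D Conv decoder is project-specific.
    return nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv3d(16, code_distance ** 2, 3, padding=1))

torch.save(build_decoder(5).state_dict(), "conv3d_decoder_d5.pt")  # pretend pretrained

model_d7 = build_decoder(7)
d5_state = torch.load("conv3d_decoder_d5.pt")
d7_state = model_d7.state_dict()
# Keep only parameters compatible with the larger model; the mismatched
# output layer stays randomly initialized.
compatible = {k: v for k, v in d5_state.items()
              if k in d7_state and v.shape == d7_state[k].shape}
model_d7.load_state_dict(compatible, strict=False)
```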
|
1.0
|
Implement Transfer Learning for 3D Conv Networks - We want to investigate if transfer learning from smaller systems to larger systems is beneficial.
For that, implement the functionality to load a pretrained d=5 3D Conv model and apply it to a model capable of decoding at d=7,9,....
|
architecture
|
implement transfer learning for conv networks we want to investigate if transfer learning from smaller systems to larger systems is beneficial for that implement the functionality to load a pretrained d conv model and apply it to a model capable of decoding at d
| 1
|
253,378
| 19,099,616,686
|
IssuesEvent
|
2021-11-29 20:46:16
|
NetAppDocs/ontap
|
https://api.github.com/repos/NetAppDocs/ontap
|
closed
|
Secure Purge Overview
|
documentation good first issue
|
Page: [Securely purge data on an encrypted volume overview](https://docs.netapp.com/us-en/ontap/encryption-at-rest/secure-purge-data-encrypted-volume-concept.html)
The page states features are different between versions.
- 9.8 and later has "x" features
- 9.8 and earlier has "y" features
I believe the second 9.8 needs to state 9.7 and earlier.
|
1.0
|
Secure Purge Overview - Page: [Securely purge data on an encrypted volume overview](https://docs.netapp.com/us-en/ontap/encryption-at-rest/secure-purge-data-encrypted-volume-concept.html)
The page states features are different between versions.
- 9.8 and later has "x" features
- 9.8 and earlier has "y" features
I believe the second 9.8 needs to state 9.7 and earlier.
|
non_architecture
|
secure purge overview page the page states features are different between versions and later has x features and earlier has y features i believe the second needs to state and earlier
| 0
|
8,451
| 22,552,986,732
|
IssuesEvent
|
2022-06-27 07:45:40
|
idobelieveinmiracle/tasks
|
https://api.github.com/repos/idobelieveinmiracle/tasks
|
opened
|
Move Date Time set to other component
|
architecture
|
Move the date/time setting logic in [DetailFragment](https://github.com/idobelieveinmiracle/tasks/blob/6f38e44cf973e52727f038f979d02c3703636b22/app/src/main/java/com/rose/taskassignmenttest/views/detail/DetailFragment.kt#L183) to another component to keep the **Single Responsibility Principle**
|
1.0
|
Move Date Time set to other component - Move the date/time setting logic in [DetailFragment](https://github.com/idobelieveinmiracle/tasks/blob/6f38e44cf973e52727f038f979d02c3703636b22/app/src/main/java/com/rose/taskassignmenttest/views/detail/DetailFragment.kt#L183) to another component to keep the **Single Responsibility Principle**
|
architecture
|
move date time set to other component move date time set to other component in to keep single responsibility principle
| 1
|
26,805
| 4,789,318,974
|
IssuesEvent
|
2016-10-31 00:18:52
|
Cockatrice/Cockatrice
|
https://api.github.com/repos/Cockatrice/Cockatrice
|
closed
|
Server Crash Report: Sept 22, 2015
|
App - Servatrice Defect - Crash
|
```
[libprotobuf ERROR google/protobuf/wire_format_lite.cc:530] String field 'MoveCard_ToZone.start_zone' contains invalid UTF-8 data when parsing a protocol buffer. Use the 'bytes' type if you intend to send raw bytes.
Error: signal 11:
/usr/local/bin/servatrice.reg(_ZN13SignalHandler14sigSegvHandlerEi+0x1f)[0x4f923f]
/lib64/libpthread.so.0(+0xf130)[0x7f63fdf86130]
/lib64/libQt5Core.so.5(_ZltRK7QStringS1_+0xf)[0x7f63fd66080f]
/usr/local/bin/servatrice.reg(_ZNK8QMapDataI7QStringP22Server_ProtocolHandlerE8findNodeERKS0_+0x30)[0x4f3430]
/usr/local/bin/servatrice.reg(_ZN6Server9loginUserEP22Server_ProtocolHandlerR7QStringRKS2_S3_RiS3_S3_+0xba9)[0x526999]
/usr/local/bin/servatrice.reg(_ZN22Server_ProtocolHandler8cmdLoginERK13Command_LoginR17ResponseContainer+0x8b3)[0x548613]
/usr/local/bin/servatrice.reg(_ZN22Server_ProtocolHandler30processSessionCommandContainerERK16CommandContainerR17ResponseContainer+0x597)[0x54b4d7]
/usr/local/bin/servatrice.reg(_ZN22Server_ProtocolHandler23processCommandContainerERK16CommandContainer+0xa6)[0x54b786]
/usr/local/bin/servatrice.reg(_ZN21ServerSocketInterface10readClientEv+0x18e)[0x4ef2fe]
/usr/local/bin/servatrice.reg[0x512fe5]
/lib64/libQt5Core.so.5(_ZN11QMetaObject8activateEP7QObjectiiPPv+0x846)[0x7f63fd7e6166]
/lib64/libQt5Network.so.5(+0xd1328)[0x7f63fede1328]
/lib64/libQt5Network.so.5(+0xde9e1)[0x7f63fedee9e1]
/lib64/libQt5Core.so.5(_ZN16QCoreApplication6notifyEP7QObjectP6QEvent+0x5d)[0x7f63fd7b562d]
/lib64/libQt5Core.so.5(_ZN16QCoreApplication14notifyInternalEP7QObjectP6QEvent+0x85)[0x7f63fd7b52d5]
/lib64/libQt5Core.so.5(+0x2e8255)[0x7f63fd810255]
/lib64/libglib-2.0.so.0(g_main_context_dispatch+0x15a)[0x7f63fc40e99a]
/lib64/libglib-2.0.so.0(+0x49ce8)[0x7f63fc40ece8]
/lib64/libglib-2.0.so.0(g_main_context_iteration+0x2c)[0x7f63fc40ed9c]
/lib64/libQt5Core.so.5(_ZN20QEventDispatcherGlib13processEventsE6QFlagsIN10QEventLoop17ProcessEventsFlagEE+0x7b)[0x7f63fd80f2db]
/lib64/libQt5Core.so.5(_ZN10QEventLoop4execE6QFlagsINS_17ProcessEventsFlagEE+0x12b)[0x7f63fd7b313b]
/lib64/libQt5Core.so.5(_ZN7QThread4execEv+0xb8)[0x7f63fd5c7038]
/lib64/libQt5Core.so.5(+0xa3ddf)[0x7f63fd5cbddf]
/lib64/libpthread.so.0(+0x7df5)[0x7f63fdf7edf5]
/lib64/libc.so.6(clone+0x6d)[0x7f63fca3e1ad]
```
Bug in the move card to zone?
|
1.0
|
Server Crash Report: Sept 22, 2015 - ```
[libprotobuf ERROR google/protobuf/wire_format_lite.cc:530] String field 'MoveCard_ToZone.start_zone' contains invalid UTF-8 data when parsing a protocol buffer. Use the 'bytes' type if you intend to send raw bytes.
Error: signal 11:
/usr/local/bin/servatrice.reg(_ZN13SignalHandler14sigSegvHandlerEi+0x1f)[0x4f923f]
/lib64/libpthread.so.0(+0xf130)[0x7f63fdf86130]
/lib64/libQt5Core.so.5(_ZltRK7QStringS1_+0xf)[0x7f63fd66080f]
/usr/local/bin/servatrice.reg(_ZNK8QMapDataI7QStringP22Server_ProtocolHandlerE8findNodeERKS0_+0x30)[0x4f3430]
/usr/local/bin/servatrice.reg(_ZN6Server9loginUserEP22Server_ProtocolHandlerR7QStringRKS2_S3_RiS3_S3_+0xba9)[0x526999]
/usr/local/bin/servatrice.reg(_ZN22Server_ProtocolHandler8cmdLoginERK13Command_LoginR17ResponseContainer+0x8b3)[0x548613]
/usr/local/bin/servatrice.reg(_ZN22Server_ProtocolHandler30processSessionCommandContainerERK16CommandContainerR17ResponseContainer+0x597)[0x54b4d7]
/usr/local/bin/servatrice.reg(_ZN22Server_ProtocolHandler23processCommandContainerERK16CommandContainer+0xa6)[0x54b786]
/usr/local/bin/servatrice.reg(_ZN21ServerSocketInterface10readClientEv+0x18e)[0x4ef2fe]
/usr/local/bin/servatrice.reg[0x512fe5]
/lib64/libQt5Core.so.5(_ZN11QMetaObject8activateEP7QObjectiiPPv+0x846)[0x7f63fd7e6166]
/lib64/libQt5Network.so.5(+0xd1328)[0x7f63fede1328]
/lib64/libQt5Network.so.5(+0xde9e1)[0x7f63fedee9e1]
/lib64/libQt5Core.so.5(_ZN16QCoreApplication6notifyEP7QObjectP6QEvent+0x5d)[0x7f63fd7b562d]
/lib64/libQt5Core.so.5(_ZN16QCoreApplication14notifyInternalEP7QObjectP6QEvent+0x85)[0x7f63fd7b52d5]
/lib64/libQt5Core.so.5(+0x2e8255)[0x7f63fd810255]
/lib64/libglib-2.0.so.0(g_main_context_dispatch+0x15a)[0x7f63fc40e99a]
/lib64/libglib-2.0.so.0(+0x49ce8)[0x7f63fc40ece8]
/lib64/libglib-2.0.so.0(g_main_context_iteration+0x2c)[0x7f63fc40ed9c]
/lib64/libQt5Core.so.5(_ZN20QEventDispatcherGlib13processEventsE6QFlagsIN10QEventLoop17ProcessEventsFlagEE+0x7b)[0x7f63fd80f2db]
/lib64/libQt5Core.so.5(_ZN10QEventLoop4execE6QFlagsINS_17ProcessEventsFlagEE+0x12b)[0x7f63fd7b313b]
/lib64/libQt5Core.so.5(_ZN7QThread4execEv+0xb8)[0x7f63fd5c7038]
/lib64/libQt5Core.so.5(+0xa3ddf)[0x7f63fd5cbddf]
/lib64/libpthread.so.0(+0x7df5)[0x7f63fdf7edf5]
/lib64/libc.so.6(clone+0x6d)[0x7f63fca3e1ad]
```
Bug in the move card to zone?
|
non_architecture
|
server crash report sept string field movecard tozone start zone contains invalid utf data when parsing a protocol buffer use the bytes type if you intend to send raw bytes error signal usr local bin servatrice reg libpthread so so usr local bin servatrice reg usr local bin servatrice reg usr local bin servatrice reg usr local bin servatrice reg usr local bin servatrice reg usr local bin servatrice reg usr local bin servatrice reg so so so so so so libglib so g main context dispatch libglib so libglib so g main context iteration so so so so libpthread so libc so clone bug in the move card to zone
| 0
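The libprotobuf line at the top of the trace above reflects proto3's rule that `string` fields must hold valid UTF-8, while `bytes` fields accept arbitrary octets. Below is a minimal Python sketch of just that distinction; it does not reproduce the segfault itself, which occurs later in the login path, and the exact exception raised depends on the protobuf backend in use:
```python
# StringValue and BytesValue share the same wire layout (field 1, length-delimited),
# so raw bytes serialized as BytesValue can be fed to a StringValue parser.
from google.protobuf import wrappers_pb2

payload = wrappers_pb2.BytesValue(value=b"\xff\xfe not utf-8").SerializeToString()

msg = wrappers_pb2.StringValue()
try:
    msg.ParseFromString(payload)  # proto3 string fields must contain valid UTF-8
except Exception as err:  # exact exception type varies by protobuf backend
    print("parse rejected:", err)
```
This is why the warning suggests declaring the field as `bytes` when the payload is not guaranteed to be UTF-8 text.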
|
17,688
| 12,246,473,909
|
IssuesEvent
|
2020-05-05 14:30:39
|
ONRR/nrrd
|
https://api.github.com/repos/ONRR/nrrd
|
closed
|
Refine display of grouping in the query tool
|
P2: High Query Tool enhancement usability
|
Users are still having problems with this, so we need to review what we've already tried and come up with a new approach.
|
True
|
Refine display of grouping in the query tool - Users are still having problems with this, so we need to review what we've already tried and come up with a new approach.
|
non_architecture
|
refine display of grouping in the query tool users are still having problems with this so we need to review what we ve already tried and come up with a new approach
| 0
|
3,546
| 9,781,468,114
|
IssuesEvent
|
2019-06-07 19:53:19
|
City-Bureau/city-scrapers
|
https://api.github.com/repos/City-Bureau/city-scrapers
|
closed
|
New Spider Requests: Mayor's Advisory Councils
|
architecture: spiders new spider needed
|
Hey all, we have 2 new meetings in need of spiders—both on the same webpage:
- Mayor’s Pedestrian Advisory Council
- Mayor’s Bicycle Advisory Council
Here's the website: http://chicagocompletestreets.org/getinvolved/mayors-advisory-councils/
Good catch @ab1470!
|
1.0
|
New Spider Requests: Mayor's Advisory Councils - Hey all, we have 2 new meetings in need of spiders—both on the same webpage:
- Mayor’s Pedestrian Advisory Council
- Mayor’s Bicycle Advisory Council
Here's the website: http://chicagocompletestreets.org/getinvolved/mayors-advisory-councils/
Good catch @ab1470!
|
architecture
|
new spider requests mayor s advisory councils hey all we have new meetings in need of spiders—both on the same webpage mayor’s pedestrian advisory council mayor’s bicycle advisory council here s the website good catch
| 1
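For context on what fulfilling such a request involves: city-scrapers spiders are Scrapy spiders. The sketch below is plain Scrapy under stated assumptions; the project actually uses its own `CityScrapersSpider` base class and `Meeting` items, and the CSS selector here is a guess, not verified against the page.
```python
# Hypothetical starting point for the requested spider; class name, selectors, and
# yielded fields are placeholders, not the city-scrapers project's real conventions.
import scrapy


class MayorsAdvisoryCouncilsSpider(scrapy.Spider):
    name = "chi_mayors_advisory_councils"
    start_urls = [
        "http://chicagocompletestreets.org/getinvolved/mayors-advisory-councils/"
    ]

    def parse(self, response):
        # Selector is an assumption about the page structure.
        for block in response.css("div.entry-content p"):
            text = " ".join(block.css("::text").getall()).strip()
            if text:
                yield {"raw_text": text}
```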
|
136,642
| 19,905,867,348
|
IssuesEvent
|
2022-01-25 12:44:34
|
flutter/flutter
|
https://api.github.com/repos/flutter/flutter
|
closed
|
Keyboard transition shows Scaffold background color
|
a: text input framework f: material design has reproducible steps found in release: 2.8 found in release: 2.9
|
## Code to Reproduce
```dart
Widget build(BuildContext context) {
return Scaffold(
backgroundColor: Colors.red,
body: SingleChildScrollView(
child: Container(
color: Colors.white,
width: MediaQuery.of(context).size.width,
height: MediaQuery.of(context).size.height,
child: Column(
children: [
TextFormField(),
TextFormField(),
TextFormField(),
TextFormField(),
TextFormField(),
TextFormField(),
TextFormField(),
TextFormField(),
TextFormField(),
TextFormField(),
TextFormField(),
],
)),
),
);
}
```
## Steps to reproduce
Click any field that will be underneath the keyboard and you should see the background color appearing.
You will see it even better when the keyboard closes.
## Demo
Here you can see that the Scaffold color appears instead of being transparent
<img src="https://s10.gifyu.com/images/outputd65b8fe78f9f61dc.gif" />
|
1.0
|
Keyboard transition shows Scaffold background color - ## Code to Reproduce
```dart
Widget build(BuildContext context) {
return Scaffold(
backgroundColor: Colors.red,
body: SingleChildScrollView(
child: Container(
color: Colors.white,
width: MediaQuery.of(context).size.width,
height: MediaQuery.of(context).size.height,
child: Column(
children: [
TextFormField(),
TextFormField(),
TextFormField(),
TextFormField(),
TextFormField(),
TextFormField(),
TextFormField(),
TextFormField(),
TextFormField(),
TextFormField(),
TextFormField(),
],
)),
),
);
}
```
## Steps to reproduce
Click any field that will be underneath the keyboard and you should see the background color appearing.
You will see it even better when the keyboard closes.
## Demo
Here you can see that the Scaffold color appears instead of being transparent
<img src="https://s10.gifyu.com/images/outputd65b8fe78f9f61dc.gif" />
|
non_architecture
|
keyboard transition shows scaffold background color code to reproduce dart widget build buildcontext context return scaffold backgroundcolor colors red body singlechildscrollview child container color colors white width mediaquery of context size width height mediaquery of context size height child column children textformfield textformfield textformfield textformfield textformfield textformfield textformfield textformfield textformfield textformfield textformfield step to reproduce click any field that will be underneath the keyboard and you should see the background color appearing you will see it even better when the keyboard closes demo like here you can see that the scaffold color is appearing instead of being transparent
| 0
|
1,475
| 6,038,222,915
|
IssuesEvent
|
2017-06-09 20:49:16
|
18F/acq-alaska-dhss-modernization
|
https://api.github.com/repos/18F/acq-alaska-dhss-modernization
|
opened
|
Tools/Architecture?
|
Architecture & Integration Technology Choices Vendor feedback
|
The canvas and roadmap are great starting points. We're interested in learning more about (or possibly helping to shape) the tools/architecture which could support the vision.
|
1.0
|
Tools/Architecture? - The canvas and roadmap are great starting points. We're interested in learning more about (or possibly helping to shape) the tools/architecture which could support the vision.
|
architecture
|
tools architecture the canvas and roadmap are great starting points we re interested in learning more about or possibly helping to shape the tools architecture which could support the vision
| 1
|
690,490
| 23,661,647,164
|
IssuesEvent
|
2022-08-26 16:08:57
|
VEuPathDB/SiteSearchData
|
https://api.github.com/repos/VEuPathDB/SiteSearchData
|
closed
|
Site search not searching "Study specific variable information" field
|
bug high priority
|
I was looking for antibiotic resistance terms on qa.restricted.clinepidb.org with site search and searched for "resistance", and nothing was returned. The same search for “resistance” on the live site returned the "Azithromycin E-test" variable from ELICIT, which it matched based on "Study specific variable information" from the term definition.
Based on this experience, it looks like site search on qa isn’t searching variable definitions, or maybe other similar fields like original variable name.
|
1.0
|
Site search not searching "Study specific variable information" field - I was looking for antibiotic resistance terms on qa.restricted.clinepidb.org with site search and searched for "resistance", and nothing was returned. The same search for “resistance” on the live site returned the "Azithromycin E-test" variable from ELICIT, which it matched based on "Study specific variable information" from the term definition.
Based on this experience, it looks like site search on qa isn’t searching variable definitions, or maybe other similar fields like original variable name.
|
non_architecture
|
site search not searching study specific variable information field i was looking for antibiotic resistance terms on qa restricted clinepidb org with site search and searched for resistance and nothing was returned the same search for “resistance” on the live site returned the azithromycin e test variable from elicit which it matched based on study specific variable information from the term definition based on this experience it looks like site search on qa isn’t searching variable definitions or maybe other similar fields like original variable name
| 0
|
347,560
| 10,431,398,960
|
IssuesEvent
|
2019-09-17 09:01:52
|
yalla-coop/curenetics
|
https://api.github.com/repos/yalla-coop/curenetics
|
opened
|
While my PDFs are being analysed I can see a loading screen so I am aware analysis is in progress
|
priority-3
|
Acceptance criteria:
- [ ] Loading icon (potentially showing % progress but this is bonus)
@cloudstartuptech - how can we keep track of how the documents are doing? This goes back to our point as well around how we can get the JSON data returned to us; ideally this will be automatic as a response to us sending a POST request with the files
|
1.0
|
While my PDFs are being analysed I can see a loading screen so I am aware analysis is in progress - Acceptance criteria:
- [ ] Loading icon (potentially showing % progress but this is bonus)
@cloudstartuptech - how can we keep track of how the documents are doing? This goes back to our point as well around how we can get the JSON data returned to us; ideally this will be automatic as a response to us sending a POST request with the files
|
non_architecture
|
while my pdfs are being analysed i can see a loading screen so i am aware analysis is in progress acceptance criteria loading icon potentially showing progress but this is bonus cloudstartuptech how can we keep track of how the documents are doing this goes back to our point as well around how we can get the json data returned to us ideally this will be automatic as a response to us sending a post request with the files
| 0
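The open question in the record above is how a progress indicator would learn the analysis state. A hedged sketch of the flow being discussed: POST the PDFs, then poll until the result JSON is ready. Every URL, field name, and status value below is hypothetical, since the real API contract was still undecided in the issue.
```python
# Assumed contract: POST /documents returns a job id; /status exposes a state;
# /result returns the analysis JSON once state is "done". All placeholders.
import time

import requests

BASE = "https://analysis.example.com"  # placeholder host

with open("report.pdf", "rb") as fh:
    job = requests.post(f"{BASE}/documents", files={"file": fh}).json()

while True:
    status = requests.get(f"{BASE}/documents/{job['id']}/status").json()
    if status["state"] == "done":  # hypothetical state machine: pending -> done
        result = requests.get(f"{BASE}/documents/{job['id']}/result").json()
        break
    time.sleep(2)  # a loading icon (or % progress) could be driven by this loop
```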
|
6,453
| 14,581,208,948
|
IssuesEvent
|
2020-12-18 10:22:24
|
sherpaai/Sherpa.ai-Federated-Learning-Framework
|
https://api.github.com/repos/sherpaai/Sherpa.ai-Federated-Learning-Framework
|
closed
|
Should we include federated EMNIST dataset?
|
architecture federated learning
|
I think we should include the functionality of loading the federated EMNIST dataset, as in https://www.tensorflow.org/federated/api_docs/python/tff/simulation/datasets/emnist/load_data?hl=es-419.
The EMNIST dataset is, by nature, federated, as it was created by collecting samples from many different writers. In some papers, this configuration is used for experimentation, so if we want to make our framework capable of reproducing these experiments, we should add this functionality.
|
1.0
|
Should we include federated EMNIST dataset? - I think we should include the functionality of loading the federated EMNIST dataset, as in https://www.tensorflow.org/federated/api_docs/python/tff/simulation/datasets/emnist/load_data?hl=es-419.
The EMNIST dataset is, by nature, federated, as it was created by collecting samples from many different writers. In some papers, this configuration is used for experimentation, so if we want to make our framework capable of reproducing these experiments, we should add this functionality.
|
architecture
|
should we include federated emnist dataset i think we should include the functionality of loading the federated emnist dataset as in the emnist dataset is by nature federated as it was created by collecting samples from many different writers in some papers this configuration is used for experimentation so if we want to make our framework capable of reproducing these experiments we should add this functionality
| 1
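A minimal sketch of the loader referenced above, following the documented `tff.simulation.datasets.emnist` API. The per-writer client structure is exactly what makes the dataset federated by nature: each client id corresponds to one writer.
```python
# Sketch of the TFF federated EMNIST loader mentioned in the issue.
import tensorflow_federated as tff

emnist_train, emnist_test = tff.simulation.datasets.emnist.load_data()

print("number of writers/clients:", len(emnist_train.client_ids))

# Each client id yields that writer's own tf.data.Dataset of examples.
client_ds = emnist_train.create_tf_dataset_for_client(emnist_train.client_ids[0])
for example in client_ds.take(1):
    print(example["label"], example["pixels"].shape)  # 28x28 pixel images
```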
|
9,534
| 24,773,358,943
|
IssuesEvent
|
2022-10-23 12:30:21
|
R-Type-Epitech-Nantes/R-Type
|
https://api.github.com/repos/R-Type-Epitech-Nantes/R-Type
|
closed
|
Re-do the architecture of the Components, System and Entities Creation libraries
|
Architecture E.C.S. ECS Game Systems ECS Game Shared Resources
|
All three of these libraries should be merged into a single library
|
1.0
|
Re-do the architecture of the Components, System and Entities Creation libraries - All three of these libraries should be merged into a single library
|
architecture
|
re do the architecture of the components system and entities creation libraries all three of these libraries should be merged into a single library
| 1
|
32,903
| 4,792,794,058
|
IssuesEvent
|
2016-10-31 16:25:36
|
TheScienceMuseum/collectionsonline
|
https://api.github.com/repos/TheScienceMuseum/collectionsonline
|
closed
|
Switch / split co_mediaPath into separate vars for zooms and images
|
enhancement please-test priority-2 T3h
|
_We now have a new batch of images in the system ie. /objects/smgc-objects-26704 but because the location of the zooms/images has changed they are currently broken._
Going forward:
- the zooms will be served via an IIPImage on its own EC2 instance (cozooms.sciencemuseum.org.uk)
- the images will be served directly from the S3 bucket via its own URL (coimages.sciencemuseum.org.uk)
We don't currently have the domain names setup, but for the time being we should use i) the full S3 bucket url ii) the IP address of the zoom server (@jamieu to supply direct).
It would seem to make sense to split the current co_mediaPath var into two separate variables?
- co_mediaPath
- co_zoomPath
Thoughts?
|
1.0
|
Switch / split co_mediaPath into separate vars for zooms and images - _We now have a new batch of images in the system ie. /objects/smgc-objects-26704 but because the location of the zooms/images has changed they are currently broken._
Going forward:
- the zooms will be served via an IIPImage on its own EC2 instance (cozooms.sciencemuseum.org.uk)
- the images will be served directly from the S3 bucket via its own URL (coimages.sciencemuseum.org.uk)
We don't currently have the domain names setup, but for the time being we should use i) the full S3 bucket url ii) the IP address of the zoom server (@jamieu to supply direct).
It would seem to make sense to split the current co_mediaPath var into two separate variables?
- co_mediaPath
- co_zoomPath
Thoughts?
|
non_architecture
|
switch split co mediapath into separate vars for zooms and images we now have a new batch of images in the system ie objects smgc objects but because the location of the zooms images has changed they are currently broken going forward the zooms will be served via an iipimage on it s own instance cozooms sciencemuseum org uk the images will be served direct from the bucket via it s own url coimages sciencemuseum org uk we don t currently have the domain names setup but for the time being we should use i the full bucket url ii the ip address of the zoom server jamieu to supply direct it would seem to make sense to split the current co mediapath var into two separate variables co mediapath co zoompath thoughts
| 0
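Illustrative only: collectionsonline is a Node.js app, so the Python sketch below just shows the shape of the proposed split, one variable per asset type instead of a shared co_mediaPath. The default hosts are the ones named in the issue, which were not yet configured at the time of writing.
```python
# Hypothetical config split; variable names mirror the co_mediaPath / co_zoomPath
# proposal from the issue, and the URL path shapes are assumptions.
import os

CO_MEDIA_PATH = os.environ.get(
    "CO_MEDIA_PATH", "https://coimages.sciencemuseum.org.uk/"
)
CO_ZOOM_PATH = os.environ.get(
    "CO_ZOOM_PATH", "https://cozooms.sciencemuseum.org.uk/"
)


def image_url(image_id: str) -> str:
    return f"{CO_MEDIA_PATH}{image_id}.jpg"  # static images straight from S3


def zoom_url(image_id: str) -> str:
    return f"{CO_ZOOM_PATH}{image_id}.dzi"  # tiled zooms served by IIPImage
```
Splitting the variables lets the two asset types move hosts independently, which is the point of the issue.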
|
61,805
| 25,735,160,231
|
IssuesEvent
|
2022-12-07 23:53:16
|
microsoft/vscode-cpptools
|
https://api.github.com/repos/microsoft/vscode-cpptools
|
closed
|
Report error for requires statement
|
bug Language Service Visual Studio
|
```
#include<concepts>
template <typename T>
concept string_c = requires (const T& str)
{
{str.c_str()} -> ::std::same_as<typename T::const_pointer>;
// ^ expected concept name c/c++3257, if I remove "::", it will not report this issue
};
int main()
{
}
```
os: windows10
extension : 1.1.3
|
1.0
|
Report error for requires statement - ```
#include<concepts>
template <typename T>
concept string_c = requires (const T& str)
{
{str.c_str()} -> ::std::same_as<typename T::const_pointer>;
// ^ expected concept name c/c++3257, if I remove "::", it will not report this issue
};
int main()
{
}
```
os: windows10
extension : 1.1.3
|
non_architecture
|
report error for requires statement include template concept string c requires const t str str c str std same as expected concept name c c if i remove it will not report this issue int main os extension
| 0
|
1,843
| 6,812,163,696
|
IssuesEvent
|
2017-11-06 01:03:14
|
p4lang/p4-spec
|
https://api.github.com/repos/p4lang/p4-spec
|
closed
|
PSA Packet paths and metadata details
|
portable switch architecture
|
[psa-packet-paths.zip](https://github.com/p4lang/p4-spec/files/1388623/psa-packet-paths.zip)
The attached zip file contains:
+ psa-packet-paths-figure.pptx - A figure similar to one Han Wang has created to show the recirculate, resubmit, and clone paths that is already in the document, but it also shows the "normal" packet paths, and uses abbreviations to name them all. Those abbreviations are described further in the Excel file below. This is intended to be _all_ possible ways that a packet can enter, leave, or move between ingress / egress, or at least the minimum set of ways supported by PSA compliant systems (implementers are free to extend this set, of course). If you think PSA should define ways that aren't shown in this figure, it would be good to bring that up ASAP.
+ psa-packet-paths-figure.pdf - The previous figure converted to PDF
+ psa-packet.paths.xlsx - A Microsoft Excel sheet with a proposal for how all metadata fields and packet contents are initialized when the ingress and egress parser start processing a packet, for each of the kinds given an abbreviated name in the figure. Some of this is mentioned in some places of the current PSA draft spec, but I think a lot of it is not precisely written down anywhere. I think the PSA spec _should_ make these things as precise -- at least as precise as we can get agreement on. These are the kinds of things that PSA implementers could easily differ on otherwise, and people writing programs for PSA devices will want to know what behavior they can rely upon.
There are 2 sheets to the Excel document -- you have to click on the sheet names near the bottom of the window to switch between them. The second sheet "metadata and packet details" is where most of the open questions are, marked in orange, but all of this is open to discussion if any of it can be improved upon.
|
1.0
|
PSA Packet paths and metadata details - [psa-packet-paths.zip](https://github.com/p4lang/p4-spec/files/1388623/psa-packet-paths.zip)
The attached zip file contains:
+ psa-packet-paths-figure.pptx - A figure similar to one Han Wang has created to show the recirculate, resubmit, and clone paths that is already in the document, but it also shows the "normal" packet paths, and uses abbreviations to name them all. Those abbreviations are described further in the Excel file below. This is intended to be _all_ possible ways that a packet can enter, leave, or move between ingress / egress, or at least the minimum set of ways supported by PSA compliant systems (implementers are free to extend this set, of course). If you think PSA should define ways that aren't shown in this figure, it would be good to bring that up ASAP.
+ psa-packet-paths-figure.pdf - The previous figure converted to PDF
+ psa-packet.paths.xlsx - A Microsoft Excel sheet with a proposal for how all metadata fields and packet contents are initialized when the ingress and egress parser start processing a packet, for each of the kinds given an abbreviated name in the figure. Some of this is mentioned in some places of the current PSA draft spec, but I think a lot of it is not precisely written down anywhere. I think the PSA spec _should_ make these things as precise -- at least as precise as we can get agreement on. These are the kinds of things that PSA implementers could easily differ on otherwise, and people writing programs for PSA devices will want to know what behavior they can rely upon.
There are 2 sheets to the Excel document -- you have to click on the sheet names near the bottom of the window to switch between them. The second sheet "metadata and packet details" is where most of the open questions are, marked in orange, but all of this is open to discussion if any of it can be improved upon.
|
architecture
|
psa packet paths and metadata details the attached zip file contains psa packet paths figure pptx a figure similar to one han wang has created to show the recirculate resubmit and clone paths that is already in the document but it also shows the normal packet paths and uses abbreviations to name them all those abbreviations are described further in the excel file below this is intended to be all possible ways that a packet can enter leave or move between ingress egress or at least the minimum set of ways supported by psa compliant systems implementers are free to extend this set of course if you think psa should define ways that aren t shown in this figure it would be good to bring that up asap psa packet paths figure pdf the previous figure converted to pdf psa packet paths xlsx a microsoft excel sheet with a proposal for how all metadata fields and packet contents are initialized when the ingress and egress parser start processing a packet for each of the kinds given an abbreviated name in the figure some of this is mentioned in some places of the current psa draft spec but i think a lot of it is not precisely written down anywhere i think the psa spec should make these things as precise at least as precise as we can get agreement on these are the kinds of things that psa implementers could easily differ on otherwise and people writing programs for psa devices will want to know what behavior they can rely upon there are sheets to the excel document you have to click on the sheet names near the bottom of the window to switch between them the second sheet metadata and packet details is where most of the open questions are marked in orange but all of this is open to discussion if any of it can be improved upon
| 1
|
6,128
| 13,765,238,366
|
IssuesEvent
|
2020-10-07 13:10:11
|
secureCodeBox/secureCodeBox-v2
|
https://api.github.com/repos/secureCodeBox/secureCodeBox-v2
|
opened
|
🏗 Merge v2 Code into regular secureCodeBox Repository
|
architecture
|
Before we can properly release the v2.0.0 we have to merge over the code of [this repository](https://github.com/secureCodeBox/secureCodeBox-v2) into the regular [secureCodeBox Repository](secureCodeBox/secureCodeBox-v2).
Things that should be thought of (list probably incomplete):
- [ ] Update go namespace
- [ ] Ensure that the Docker Builds still works (Including github action secrets)
- [ ] Ensure that the Helm Publishing still works (Including github action secrets)
- [ ] Copy Netlify webhook
- [ ] Search for references in the docs and code
|
1.0
|
🏗 Merge v2 Code into regular secureCodeBox Repository - Before we can properly release the v2.0.0 we have to merge over the code of [this repository](https://github.com/secureCodeBox/secureCodeBox-v2) into the regular [secureCodeBox Repository](secureCodeBox/secureCodeBox-v2).
Things that should be thought of (list probably incomplete):
- [ ] Update go namespace
- [ ] Ensure that the Docker Builds still works (Including github action secrets)
- [ ] Ensure that the Helm Publishing still works (Including github action secrets)
- [ ] Copy Netlify webhook
- [ ] Search for references in the docs and code
|
architecture
|
🏗 merge code into regular securecodebox repository before we can properly release the we have to merge over the code of into the regular securecodebox securecodebox things that should be thought of list probably incomplete update go namespace ensure that the docker builds still works including github action secrets ensure that the helm publishing still works including github action secrets copy netlify webhook search for references in the docs and code
| 1
|
7,639
| 18,735,434,131
|
IssuesEvent
|
2021-11-04 06:39:14
|
verida/verida-js
|
https://api.github.com/repos/verida/verida-js
|
opened
|
[account-web-vault] No longer force endpoints to be specified
|
refactor architecture
|
Remove the requirement for default endpoints to be specified when instantiating an `account-web-vault` instance.
See #92
|
1.0
|
[account-web-vault] No longer force endpoints to be specified - Remove the requirement for default endpoints to be specified when instantiating an `account-web-vault` instance.
See #92
|
architecture
|
no longer force endpoints to be specified remove the requirement for default endpoints to be specified when instantiating an account web vault instance see
| 1
|
19,771
| 27,420,927,984
|
IssuesEvent
|
2023-03-01 16:40:02
|
kellnerd/musicbrainz-scripts
|
https://api.github.com/repos/kellnerd/musicbrainz-scripts
|
closed
|
Adding recording relationships is broken
|
compatibility copyright
|
Since the new changes today (2/28/2023), it is no longer adding the labels to the individual recordings when the boxes are checked. It still works fine on the release.
|
True
|
Adding recording relationships is broken - Since the new changes today (2/28/2023), it is no longer adding the labels to the individual recordings when the boxes are checked. It still works fine on the release.
|
non_architecture
|
adding recording relationships is broken since the new changes today it is no longer adding the labels to the individual recordings when the boxes are checked it still works fine on the release
| 0
|
211,853
| 23,849,933,764
|
IssuesEvent
|
2022-09-06 16:56:50
|
daniel-brown-ws-2/Baragon-test-1
|
https://api.github.com/repos/daniel-brown-ws-2/Baragon-test-1
|
opened
|
BaragonData-0.10.0-SNAPSHOT.jar: 1 vulnerabilities (highest severity is: 7.0)
|
security vulnerability
|
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>BaragonData-0.10.0-SNAPSHOT.jar</b></p></summary>
<p></p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-webapp/9.4.18.v20190429/jetty-webapp-9.4.18.v20190429.jar,/home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-webapp/9.4.18.v20190429/jetty-webapp-9.4.18.v20190429.jar,/home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-webapp/9.4.18.v20190429/jetty-webapp-9.4.18.v20190429.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/daniel-brown-ws-2/Baragon-test-1/commit/40d5ec96d38f2c1697a1928cd144b93f387bc0ae">40d5ec96d38f2c1697a1928cd144b93f387bc0ae</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2020-27216](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-27216) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.0 | jetty-webapp-9.4.18.v20190429.jar | Transitive | N/A | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-27216</summary>
### Vulnerable Library - <b>jetty-webapp-9.4.18.v20190429.jar</b></p>
<p>Jetty web application support</p>
<p>Library home page: <a href="http://www.eclipse.org/jetty">http://www.eclipse.org/jetty</a></p>
<p>Path to dependency file: /BaragonAgentService/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-webapp/9.4.18.v20190429/jetty-webapp-9.4.18.v20190429.jar,/home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-webapp/9.4.18.v20190429/jetty-webapp-9.4.18.v20190429.jar,/home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-webapp/9.4.18.v20190429/jetty-webapp-9.4.18.v20190429.jar</p>
<p>
Dependency Hierarchy:
- BaragonData-0.10.0-SNAPSHOT.jar (Root Library)
- dropwizard-jersey-1.3.12.jar
- :x: **jetty-webapp-9.4.18.v20190429.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/daniel-brown-ws-2/Baragon-test-1/commit/40d5ec96d38f2c1697a1928cd144b93f387bc0ae">40d5ec96d38f2c1697a1928cd144b93f387bc0ae</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Eclipse Jetty versions 1.0 thru 9.4.32.v20200930, 10.0.0.alpha1 thru 10.0.0.beta2, and 11.0.0.alpha1 thru 11.0.0.beta2O, on Unix like systems, the system's temporary directory is shared between all users on that system. A collocated user can observe the process of creating a temporary sub directory in the shared temporary directory and race to complete the creation of the temporary subdirectory. If the attacker wins the race then they will have read and write permission to the subdirectory used to unpack web applications, including their WEB-INF/lib jar files and JSP files. If any code is ever executed out of this temporary directory, this can lead to a local privilege escalation vulnerability.
<p>Publish Date: 2020-10-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-27216>CVE-2020-27216</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.0</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=567921">https://bugs.eclipse.org/bugs/show_bug.cgi?id=567921</a></p>
<p>Release Date: 2020-10-23</p>
<p>Fix Resolution: org.eclipse.jetty:jetty-runner:9.4.33,10.0.0.beta3,11.0.0.beta3;org.eclipse.jetty:jetty-webapp:9.4.33,10.0.0.beta3,11.0.0.beta3</p>
</p>
<p></p>
</details>
|
True
|
BaragonData-0.10.0-SNAPSHOT.jar: 1 vulnerabilities (highest severity is: 7.0) - <details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>BaragonData-0.10.0-SNAPSHOT.jar</b></p></summary>
<p></p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-webapp/9.4.18.v20190429/jetty-webapp-9.4.18.v20190429.jar,/home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-webapp/9.4.18.v20190429/jetty-webapp-9.4.18.v20190429.jar,/home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-webapp/9.4.18.v20190429/jetty-webapp-9.4.18.v20190429.jar</p>
<p>
<p>Found in HEAD commit: <a href="https://github.com/daniel-brown-ws-2/Baragon-test-1/commit/40d5ec96d38f2c1697a1928cd144b93f387bc0ae">40d5ec96d38f2c1697a1928cd144b93f387bc0ae</a></p></details>
## Vulnerabilities
| CVE | Severity | <img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS | Dependency | Type | Fixed in | Remediation Available |
| ------------- | ------------- | ----- | ----- | ----- | --- | --- |
| [CVE-2020-27216](https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-27216) | <img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> High | 7.0 | jetty-webapp-9.4.18.v20190429.jar | Transitive | N/A | ❌ |
## Details
<details>
<summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> CVE-2020-27216</summary>
### Vulnerable Library - <b>jetty-webapp-9.4.18.v20190429.jar</b></p>
<p>Jetty web application support</p>
<p>Library home page: <a href="http://www.eclipse.org/jetty">http://www.eclipse.org/jetty</a></p>
<p>Path to dependency file: /BaragonAgentService/pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-webapp/9.4.18.v20190429/jetty-webapp-9.4.18.v20190429.jar,/home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-webapp/9.4.18.v20190429/jetty-webapp-9.4.18.v20190429.jar,/home/wss-scanner/.m2/repository/org/eclipse/jetty/jetty-webapp/9.4.18.v20190429/jetty-webapp-9.4.18.v20190429.jar</p>
<p>
Dependency Hierarchy:
- BaragonData-0.10.0-SNAPSHOT.jar (Root Library)
- dropwizard-jersey-1.3.12.jar
- :x: **jetty-webapp-9.4.18.v20190429.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/daniel-brown-ws-2/Baragon-test-1/commit/40d5ec96d38f2c1697a1928cd144b93f387bc0ae">40d5ec96d38f2c1697a1928cd144b93f387bc0ae</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
<p></p>
### Vulnerability Details
<p>
In Eclipse Jetty versions 1.0 thru 9.4.32.v20200930, 10.0.0.alpha1 thru 10.0.0.beta2, and 11.0.0.alpha1 thru 11.0.0.beta2O, on Unix like systems, the system's temporary directory is shared between all users on that system. A collocated user can observe the process of creating a temporary sub directory in the shared temporary directory and race to complete the creation of the temporary subdirectory. If the attacker wins the race then they will have read and write permission to the subdirectory used to unpack web applications, including their WEB-INF/lib jar files and JSP files. If any code is ever executed out of this temporary directory, this can lead to a local privilege escalation vulnerability.
<p>Publish Date: 2020-10-23
<p>URL: <a href=https://vuln.whitesourcesoftware.com/vulnerability/CVE-2020-27216>CVE-2020-27216</a></p>
</p>
<p></p>
### CVSS 3 Score Details (<b>7.0</b>)
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: High
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
<p></p>
### Suggested Fix
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://bugs.eclipse.org/bugs/show_bug.cgi?id=567921">https://bugs.eclipse.org/bugs/show_bug.cgi?id=567921</a></p>
<p>Release Date: 2020-10-23</p>
<p>Fix Resolution: org.eclipse.jetty:jetty-runner:9.4.33,10.0.0.beta3,11.0.0.beta3;org.eclipse.jetty:jetty-webapp:9.4.33,10.0.0.beta3,11.0.0.beta3</p>
</p>
<p></p>
</details>
|
non_architecture
|
baragondata snapshot jar vulnerabilities highest severity is vulnerable library baragondata snapshot jar path to vulnerable library home wss scanner repository org eclipse jetty jetty webapp jetty webapp jar home wss scanner repository org eclipse jetty jetty webapp jetty webapp jar home wss scanner repository org eclipse jetty jetty webapp jetty webapp jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available high jetty webapp jar transitive n a details cve vulnerable library jetty webapp jar jetty web application support library home page a href path to dependency file baragonagentservice pom xml path to vulnerable library home wss scanner repository org eclipse jetty jetty webapp jetty webapp jar home wss scanner repository org eclipse jetty jetty webapp jetty webapp jar home wss scanner repository org eclipse jetty jetty webapp jetty webapp jar dependency hierarchy baragondata snapshot jar root library dropwizard jersey jar x jetty webapp jar vulnerable library found in head commit a href found in base branch master vulnerability details in eclipse jetty versions thru thru and thru on unix like systems the system s temporary directory is shared between all users on that system a collocated user can observe the process of creating a temporary sub directory in the shared temporary directory and race to complete the creation of the temporary subdirectory if the attacker wins the race then they will have read and write permission to the subdirectory used to unpack web applications including their web inf lib jar files and jsp files if any code is ever executed out of this temporary directory this can lead to a local privilege escalation vulnerability publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org eclipse jetty jetty runner org eclipse jetty jetty webapp
| 0
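The root cause in the CVE above is unpacking web applications into a world-shared temp directory. The fix on the JVM side is upgrading Jetty or pointing `java.io.tmpdir` (or Jetty's work directory) at a non-shared location; the general pattern in any language is a private, mode-0700, per-user directory. A small Python sketch of that pattern, purely to illustrate the mitigation idea rather than the Jetty fix itself:
```python
# tempfile.mkdtemp() creates a fresh directory readable, writable, and searchable
# only by the creating user, so a collocated user cannot pre-create or race on it.
import os
import tempfile

workdir = tempfile.mkdtemp(prefix="webapp-unpack-")
print(workdir, oct(os.stat(workdir).st_mode & 0o777))  # -> 0o700
```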
|
9,962
| 25,820,018,333
|
IssuesEvent
|
2022-12-12 08:54:38
|
owncloud/android
|
https://api.github.com/repos/owncloud/android
|
closed
|
[New arch] Synchronization
|
Epic New architecture
|
# Introduction
We have been working on updating the old Android App to a new architecture for a long time.
The new architecture is MVVM and we can find more details about it here: https://github.com/owncloud/android/issues/2351
To do this, we started with the Shares functionality, continued with Authentication, and finally we are working on the most critical part of the app: the Synchronization. This last epic is huge and we created a milestone to keep track of it: https://github.com/owncloud/android/milestone/43
# Synchronization
App PR: https://github.com/owncloud/android/pull/2934
Library PR: https://github.com/owncloud/android-library/pull/339
## Atomic Operations
### Create folder [QA Passed] ✅
Issue: https://github.com/owncloud/android/issues/2861
PR: https://github.com/owncloud/android/pull/2923
QA: https://github.com/owncloud/android/issues/2818#issuecomment-683732010
QA reports fixed:
- [x] (1)
### Rename [QA Passed] ✅
Issue: https://github.com/owncloud/android/issues/2863
PR: https://github.com/owncloud/android/pull/3231
QA: https://github.com/owncloud/android/issues/2818#issuecomment-846984004
QA reports fixed:
- [x] (1)
- [x] (2)
- [x] (3)
- [x] (4)
### Remove [Bugfixing]
Issue: https://github.com/owncloud/android/issues/2864
PR: https://github.com/owncloud/android/pull/3214
QA: https://github.com/owncloud/android/issues/2818#issuecomment-834301102
QA reports fixed:
- [x] (1)
- [x] (2)
- [ ] (3) [P2] Removing many files blocks the app
- [ ] (4) [P2] Txt file removal cleans up the file list
### Copy [Bugfixing]
Issue: https://github.com/owncloud/android/issues/2866
PR: https://github.com/owncloud/android/pull/3253
QA: https://github.com/owncloud/android/issues/2818#issuecomment-849394483
QA reports fixed:
- [x] (1)
- [x] (2)
- [ ] (3) [P3] Error message too long
### Move [QA Passed] ✅
Issue: https://github.com/owncloud/android/issues/2865
PR: https://github.com/owncloud/android/pull/3232
QA: https://github.com/owncloud/android/issues/2818#issuecomment-844022500
QA reports fixed:
- [x] (1)
## Transfers
Transfers have changed completely from the previous version. We have moved from foreground services to WorkManager. They can now run in parallel.
Some notifications have been removed, previously it was a little bit messy when several uploads/downloads were done very fast.
### Uploads [Bugfixing]
There are two types of uploads at the moment.
Plain Uploads: When we get the content Uri via SAF or Camera Uploads
Plain or Chunks(depending on file size): When we share with oC, via documents provider, upload via camera, and when we upload conflicts.
Issue: https://github.com/owncloud/android/issues/3424
PR: https://github.com/owncloud/android/pull/3686
QA: https://github.com/owncloud/android/issues/2818#issuecomment-1157608496
QA reports fixed:
- [x] (1) [P2] Error takes long to happen
- [x] (2) Upload with no connection [FIXED]
- [ ] (3) [P3] Discussion: create or not a deleted target folder
- [x] (4) Folder error [FIXED]
- [ ] (5) [P2] Notifications missing
- [x] (6) Share txt does not work. [FIXED]
- [X] (7) [NON REPRODUCIBLE]
- [x] (8) [P1] Chunking [FIXED via https://github.com/owncloud/android/pull/3763)]
- [x] (9) Failed uploads are resumed after removing individually [FIXED]
- [x] (10) Error in maintenance mode [FIXED]
- [X] (11) Folder error [WONT FIX, same as (4)]
- [x] (12) Share txt does not work. [FIXED]
- [X] (13) Create with external [FIXED]
- [ ] (14) [P2] Not possible to read uploads view when many uploads running
- [x] (15) [P1] Uploading many files, part of them are missing [FIXED]
- [x] (16) Failed uploads are resumed after clearing all [FIXED]
- [x] (17) [P1] Folder error after killing the app [FIXED]
- [ ] (18) [P3] Same status for uploading and pending uploads
- [ ] (19) [P3] Clear button in enqueued section of uploads view
### Downloads [Bugfixing]
Issue: https://github.com/owncloud/android/issues/2872
PR: https://github.com/owncloud/android/pull/2918
QA: https://github.com/owncloud/android/issues/2818#issuecomment-1155115346
QA reports fixed:
- [X] (1) [P1] Crash when download some kind of files [FIXED]
- [X] (2) Notifications [FIXED]
- [X] (3) [P1] Downloads in uploads view [FIXED]
- [X] (4) Green badge in downloaded items [FIXED]
- [x] (5) [P1] Crash when download is cancelled
- [ ] (6) [P2] OAuth2 expired token
- [ ] (7) [P2] Progress bar missing
### Open with [Bugfixing]
QA: https://github.com/owncloud/android/issues/2818#issuecomment-1157358210
QA reports fixed:
- [x] (1)
### Store uploads into the new Room database [Bugfixing]
At the moment, we have the uploads database in the old database. We need to move it to the new one. Also, we need to move the current OCUpload model to the domain module. By the way, we should consider adding Downloads to that new table. At the moment we don't store the downloads in the database and it could be beneficial.
Issue: https://github.com/owncloud/android/issues/3426 & https://github.com/owncloud/android/issues/3717
PR: https://github.com/owncloud/android/pull/3710 & https://github.com/owncloud/android/pull/3729
### Transfers view [Bugfixing]
We need to adapt the uploads screen when we migrate the database. The idea is to take advantage of room improvements and observe any change via LiveData or Flow. By the way, we think that it would be a good idea to transform the uploads screen into a Transfer screen where the user could check the latest transfers, not only the uploads but also the downloads.
Issue: https://github.com/owncloud/android/issues/2858
PR: https://github.com/owncloud/android/pull/3718
## File list view [Bugfixing]
The main screen of the app has been refactored. Lots of changes have been applied including recycler view, live data... etc
Issue: https://github.com/owncloud/android/issues/2869
PR: https://github.com/owncloud/android/pull/3517
QA: https://github.com/owncloud/android/issues/2818#issuecomment-1155087211
QA reports fixed:
- [x] (1)
- [x] (2)
- [X] (3) [FIXED]
- [x] (4)
- [x] (5) [DONE]
- [X] (6) [P2] Scroll bar missing [FIXED]
## Av. Offline [Bugfixing]
As we already did some time ago with the camera uploads periodic work, we need to move the av offline job to work manager.
Issue: https://github.com/owncloud/android/issues/3246
PR: https://github.com/owncloud/android/pull/3715
QA: https://github.com/owncloud/android/issues/2818#issuecomment-1198961987
QA reports fixed:
- [X] (1) File automatic [FIXED]
- [ ] (2) [P2] Sync when opening file - more file sync suitable
- [X] (3) [P1] Folder not synced, only discovered [FIXED]
- [ ] (4) [P1] Crash when moving av. offline folder
- [X] (5) [P2] Upload changes in advance [FIXED]
- [ ] (6) [P3] unav. offline icon located in menu, not in toolbar
- [X] (7) Remove all av. offline stuff when account is removed [FIXED]
## Bottom navigation bar [Bugfixing]
At the moment the bottom navigation bar is not working properly. It should update the file list with the new upload list option and show only the files for that shortcut.
PR: https://github.com/owncloud/android/pull/3719
QA: https://github.com/owncloud/android/issues/2818#issuecomment-1216772525
QA reports fixed:
- [x] (1)
- [x] (2)
- [x] (3)
- [x] (4)
- [ ] (5) [P3] Glitch when browsing back after open in
- [X] (6) To fix in https://github.com/owncloud/android/issues/3016
## Conflicts Management [TO DO]
We need to detect if there are new conflicts between local and remote files and let the user choose to keep local, remote or both.
Issue: https://github.com/owncloud/android/issues/3005
## Refresh folder [Tested with folder synchronization]
Refreshing the folder should keep the folder updated with remote content and also remove any local file that is no longer available on the remote server
Issue: https://github.com/owncloud/android/issues/3268
PR: https://github.com/owncloud/android/pull/3709
## File synchronization [Bugfixing]
Files should be synchronized, updating or removing local data in ScopedStorage and database depending on the remote changes.
Issue: https://github.com/owncloud/android/issues/3350
PR: https://github.com/owncloud/android/pull/3704
QA: https://github.com/owncloud/android/issues/2818#issuecomment-1173699585
QA reports fixed:
- [ ] (1) [P1] Updates in server not reflected
- [ ] (2) Badge gone
## Folder synchronization [Bugfixing]
Same as file synchronization but for folders. Recursively.
PR: https://github.com/owncloud/android/pull/3707 & https://github.com/owncloud/android/pull/3709
QA: https://github.com/owncloud/android/issues/2818#issuecomment-1176105909
QA reports fixed:
- [x] (1) [P2] Relocation mechanism [WONT FIX]
- [x] (2)
- [x] (3)
- [x] (4)
- [X] (5) [P3] No error message after pulling down if no connection available [DONE]
- [X] (6) [P1] Whole account refreshed after pulling down in root [FIXED]
- [X] (7) [P1] All files synced after reopening [FIXED]
- [X] (8) [P1] Request flooding [FIXED]
## Uploads migration (Under QA)
Issue: https://github.com/owncloud/android/issues/2858
PR: https://github.com/owncloud/android/pull/3718
QA: https://github.com/owncloud/android/issues/2818#issuecomment-1230150745
QA reports:
- [X] (1) Uploads enqueued forever, after upgrade [FIXED]
- [x] (2) [P1] Sharing big file with oC causes a crash [FIXED]
- [x] https://github.com/owncloud/android/issues/3741
- [x] (3) [P1] Uploads replayed after upgrading [WON'T FIX]
- [ ] (4) [P2] Uploads in progress do not end
## Miscellaneous
https://github.com/owncloud/android/issues/2818#issuecomment-1247805171
- [ ] (1) [P2] Toolbar incorrect after browsing
## Issues to have a look after this:
https://github.com/owncloud/android/issues/3721 [FIXED]
https://github.com/owncloud/android/issues/3741 [P1] [FIXED]
https://github.com/owncloud/android/issues/2070
https://github.com/owncloud/android/issues/2149
https://github.com/owncloud/android/issues/2834
https://github.com/owncloud/android/issues/2829
https://github.com/owncloud/android/issues/2921 [P1] [FIXED]
https://github.com/owncloud/android/issues/3708 [P2] [FIXED]
|
1.0
|
[New arch] Synchronization - # Introduction
We have been working on updating the old Android App to a new architecture for a long time.
The new architecture is MVVM and we can find more details about it here: https://github.com/owncloud/android/issues/2351
To do this, we started with the Shares functionality, continued with Authentication, and finally we are working on the most critical part of the app: the Synchronization. This last epic is huge and we created a milestone to keep track of it: https://github.com/owncloud/android/milestone/43
# Synchronization
App PR: https://github.com/owncloud/android/pull/2934
Library PR: https://github.com/owncloud/android-library/pull/339
## Atomic Operations
### Create folder [QA Passed] ✅
Issue: https://github.com/owncloud/android/issues/2861
PR: https://github.com/owncloud/android/pull/2923
QA: https://github.com/owncloud/android/issues/2818#issuecomment-683732010
QA reports fixed:
- [x] (1)
### Rename [QA Passed] ✅
Issue: https://github.com/owncloud/android/issues/2863
PR: https://github.com/owncloud/android/pull/3231
QA: https://github.com/owncloud/android/issues/2818#issuecomment-846984004
QA reports fixed:
- [x] (1)
- [x] (2)
- [x] (3)
- [x] (4)
### Remove [Bugfixing]
Issue: https://github.com/owncloud/android/issues/2864
PR: https://github.com/owncloud/android/pull/3214
QA: https://github.com/owncloud/android/issues/2818#issuecomment-834301102
QA reports fixed:
- [x] (1)
- [x] (2)
- [ ] (3) [P2] Removing many files blocks the app
- [ ] (4) [P2] Txt file removal cleans up the file list
### Copy [Bugfixing]
Issue: https://github.com/owncloud/android/issues/2866
PR: https://github.com/owncloud/android/pull/3253
QA: https://github.com/owncloud/android/issues/2818#issuecomment-849394483
QA reports fixed:
- [x] (1)
- [x] (2)
- [ ] (3) [P3] Error message too long
### Move [QA Passed] ✅
Issue: https://github.com/owncloud/android/issues/2865
PR: https://github.com/owncloud/android/pull/3232
QA: https://github.com/owncloud/android/issues/2818#issuecomment-844022500
QA reports fixed:
- [x] (1)
## Transfers
Transfers have changed completely from the previous version. We have moved from foreground services to WorkManager. They can now run in parallel.
Some notifications have been removed, previously it was a little bit messy when several uploads/downloads were done very fast.
### Uploads [Bugfixing]
There are two types of uploads at the moment.
Plain Uploads: When we get the content Uri via SAF or Camera Uploads
Plain or Chunks(depending on file size): When we share with oC, via documents provider, upload via camera, and when we upload conflicts.
Issue: https://github.com/owncloud/android/issues/3424
PR: https://github.com/owncloud/android/pull/3686
QA: https://github.com/owncloud/android/issues/2818#issuecomment-1157608496
QA reports fixed:
- [x] (1) [P2] Error takes long to happen
- [x] (2) Upload with no connection [FIXED]
- [ ] (3) [P3] Discussion: create or not a deleted target folder
- [x] (4) Folder error [FIXED]
- [ ] (5) [P2] Notifications missing
- [x] (6) Share txt does not work. [FIXED]
- [X] (7) [NON REPRODUCIBLE]
- [x] (8) [P1] Chunking [FIXED via https://github.com/owncloud/android/pull/3763)]
- [x] (9) Failed uploads are resumed after removing individually [FIXED]
- [x] (10) Error in maintenance mode [FIXED]
- [X] (11) Folder error [WONT FIX, same as (4)]
- [x] (12) Share txt does not work. [FIXED]
- [X] (13) Create with external [FIXED]
- [ ] (14) [P2] Not possible to read uploads view when many uploads running
- [x] (15) [P1] Uploading many files, part of them are missing [FIXED]
- [x] (16) Failed uploads are resumed after clearing all [FIXED]
- [x] (17) [P1] Folder error after killing the app [FIXED]
- [ ] (18) [P3] Same status for uploading and pending uploads
- [ ] (19) [P3] Clear button in enqueued section of uploads view
### Downloads [Bugfixing]
Issue: https://github.com/owncloud/android/issues/2872
PR: https://github.com/owncloud/android/pull/2918
QA: https://github.com/owncloud/android/issues/2818#issuecomment-1155115346
QA reports fixed:
- [X] (1) [P1] Crash when download some kind of files [FIXED]
- [X] (2) Notifications [FIXED]
- [X] (3) [P1] Downloads in uploads view [FIXED]
- [X] (4) Green badge in downloaded items [FIXED]
- [x] (5) [P1] Crash when download is cancelled
- [ ] (6) [P2] OAuth2 expired token
- [ ] (7) [P2] Progress bar missing
### Open with [Bugfixing]
QA: https://github.com/owncloud/android/issues/2818#issuecomment-1157358210
QA reports fixed:
- [x] (1)
### Store uploads into the new Room database [Bugfixing]
At the moment, we have the uploads database in the old database. We need to move it to the new one. Also, we need to move the current OCUpload model to the domain module. By the way, we should consider adding Downloads to that new table. At the moment we don't store the downloads in the database and it could be beneficial.
Issue: https://github.com/owncloud/android/issues/3426 & https://github.com/owncloud/android/issues/3717
PR: https://github.com/owncloud/android/pull/3710 & https://github.com/owncloud/android/pull/3729
### Transfers view [Bugfixing]
We need to adapt the uploads screen when we migrate the database. The idea is to take advantage of room improvements and observe any change via LiveData or Flow. By the way, we think that it would be a good idea to transform the uploads screen into a Transfer screen where the user could check the latest transfers, not only the uploads but also the downloads.
Issue: https://github.com/owncloud/android/issues/2858
PR: https://github.com/owncloud/android/pull/3718
## File list view [Bugfixing]
The main screen of the app has been refactored. Lots of changes have been applied including recycler view, live data... etc
Issue: https://github.com/owncloud/android/issues/2869
PR: https://github.com/owncloud/android/pull/3517
QA: https://github.com/owncloud/android/issues/2818#issuecomment-1155087211
QA reports fixed:
- [x] (1)
- [x] (2)
- [X] (3) [FIXED]
- [x] (4)
- [x] (5) [DONE]
- [X] (6) [P2] Scroll bar missing [FIXED]
## Av. Offline [Bugfixing]
As we already did some time ago with the camera uploads periodic work, we need to move the av offline job to work manager.
Issue: https://github.com/owncloud/android/issues/3246
PR: https://github.com/owncloud/android/pull/3715
QA: https://github.com/owncloud/android/issues/2818#issuecomment-1198961987
QA reports fixed:
- [X] (1) File automatic [FIXED]
- [ ] (2) [P2] Sync when opening file - more file sync suitable
- [X] (3) [P1] Folder not synced, only discovered [FIXED]
- [ ] (4) [P1] Crash when moving av. offline folder
- [X] (5) [P2] Upload changes in advance [FIXED]
- [ ] (6) [P3] unav. offline icon located in menu, not in toolbar
- [X] (7) Remove all av. offline stuff when account is removed [FIXED]
## Bottom navigation bar [Bugfixing]
At the moment the bottom navigation bar is not working properly. It should update the file list with the new upload list option and show only the files for that shortcut.
PR: https://github.com/owncloud/android/pull/3719
QA: https://github.com/owncloud/android/issues/2818#issuecomment-1216772525
QA reports fixed:
- [x] (1)
- [x] (2)
- [x] (3)
- [x] (4)
- [ ] (5) [P3] Glitch when browsing back after open in
- [X] (6) To fix in https://github.com/owncloud/android/issues/3016
## Conflicts Management [TO DO]
We need to detect if there are new conflicts between local and remote files and let the user choose to keep local, remote or both.
Issue: https://github.com/owncloud/android/issues/3005
## Refresh folder [Tested with folder synchronization]
Refreshing the folder should keep the folder updated with remote content and also remove any local file that is no longer available on the remote server
Issue: https://github.com/owncloud/android/issues/3268
PR: https://github.com/owncloud/android/pull/3709
## File synchronization [Bugfixing]
Files should be synchronized, updating or removing local data in ScopedStorage and database depending on the remote changes.
Issue: https://github.com/owncloud/android/issues/3350
PR: https://github.com/owncloud/android/pull/3704
QA: https://github.com/owncloud/android/issues/2818#issuecomment-1173699585
QA reports fixed:
- [ ] (1) [P1] Updates in server not reflected
- [ ] (2) Badge gone
## Folder synchronization [Bugfixing]
Same as file synchronization but for folders. Recursively.
PR: https://github.com/owncloud/android/pull/3707 & https://github.com/owncloud/android/pull/3709
QA: https://github.com/owncloud/android/issues/2818#issuecomment-1176105909
QA reports fixed:
- [x] (1) [P2] Relocation mechanism [WONT FIX]
- [x] (2)
- [x] (3)
- [x] (4)
- [X] (5) [P3] No error message after pulling down if no connection available [DONE]
- [X] (6) [P1] Whole account refreshed after pulling down in root [FIXED]
- [X] (7) [P1] All files synced after reopening [FIXED]
- [X] (8) [P1] Request flooding [FIXED]
## Uploads migration (Under QA)
Issue: https://github.com/owncloud/android/issues/2858
PR: https://github.com/owncloud/android/pull/3718
QA: https://github.com/owncloud/android/issues/2818#issuecomment-1230150745
QA reports:
- [X] (1) Uploads enqueued forever, after upgrade [FIXED]
- [x] (2) [P1] Sharing big file with oC causes a crash [FIXED]
- [x] https://github.com/owncloud/android/issues/3741
- [x] (3) [P1] Uploads replayed after upgrading [WON'T FIX]
- [ ] (4) [P2] Uploads in progress do not end
## Miscellaneous
https://github.com/owncloud/android/issues/2818#issuecomment-1247805171
- [ ] (1) [P2] Toolbar incorrect after browsing
## Issues to have a look after this:
https://github.com/owncloud/android/issues/3721 [FIXED]
https://github.com/owncloud/android/issues/3741 [P1] [FIXED]
https://github.com/owncloud/android/issues/2070
https://github.com/owncloud/android/issues/2149
https://github.com/owncloud/android/issues/2834
https://github.com/owncloud/android/issues/2829
https://github.com/owncloud/android/issues/2921 [P1] [FIXED]
https://github.com/owncloud/android/issues/3708 [P2] [FIXED]
|
architecture
|
synchronization introduction we have been working on updating the old android app to a new architecture for a long time the new architecture is mvvm and we can find more details about it here to do this we started with shares functionality we continued with authentication and finally we are working on the most crytical part of the app the synchronization this last epic is huge and we created a milestone to keep track of it synchronization app pr library pr atomic operations create folder ✅ issue pr qa qa reports fixed rename ✅ issue pr qa qa reports fixed remove issue pr qa qa reports fixed removing many files blocks the app txt file removal cleans up the file list copy issue pr qa qa reports fixed error message too long move ✅ issue pr qa qa reports fixed transfers transfers have changed completely from the previous version we have moved from foreground services to workmanager they can be done parallely now some notifications have been removed previously it was a little bit messy when several uploads downloads were done very fast uploads there are two types of uploads at the moment plain uploads when we get the content uri via saf or camera uploads plain or chunks depending on file size when we share with oc via documents provider upload via camera and when we upload conflicts issue pr qa qa reports fixed error takes long to happen upload with no connection discussion create or not a deleted target folder folder error notifications missing share txt does not work chunking failed uploads are resumed after removing individually error in maintenance mode folder error share txt does not work create with external not posible to read uploads view when many uploads running uploading many files part of them are missing failed uploads are resumed after clearing all folder error after killing the app same status for uploading and pending uploads clear button in enqueued section of uploads view downloads issue pr qa qa reports fixed crash when download some kind of files notifications downloads in uploads view green badge in downloaded items crash when download is cancelled expired token progress bar missing open with qa qa reports fixed store uploads into the new room database at the moment we have the uploads database in the old database we need to move it to the new one also we need to move the current ocupload model to the domain module by the way we should consider adding downloads to that new table at the moment we don t store the downloads in the database and it could be beneficial issue pr transfers view we need to adapt the uploads screen when we migrate the database the idea is to take advantage of room improvements and observe any change via livedata or flow by the way we think that it would be a good idea to transform the uploads screen into a transfer screen where the user could check the latest transfers not only the uploads but also the downloads issue pr file list view the main screen of the app has been refactored lots of changes have been applied including recycler view live data etc issue pr qa qa reports fixed scroll bar missing av offline as we already did some time ago with the camera uploads periodic work we need to move the av offline job to work manager issue pr qa qa reports fixed file automatic sync when opening file more file sync suitable folder not synced only discovered crash when moving av offline folder upload changes in advance unav offline icon located in menu not in toolbar remove all av offline stuff when account is removed bottom navigation bar at the moment the 
bottom navigation bar is not working properly it should update the file list with the new upload list option and show only the files for that shortcut pr qa qa reports fixed glitch when browsing back after open in to fix in conflicts management we need to detect if there are new conflicts between local and remote files and let the user choose to keep local remote or both issue refresh folder refreshing the folder should keep the folder updated with remote content and also remove any file locally that is not available on the remote server anymore issue pr file synchronization files should be synchronized updating or removing local data in scopedstorage and database depending on the remote changes issue pr qa qa reports fixed updates in server not reflected badge gone folder synchronization same as file synchronization but for folders recursively pr qa qa reports fixed relocation mechanism no error message after pulling down if no connection available whole account refreshed after pulling down in root all files synced after reopening request flooding uploads migration under qa issue pr qa qa reports uploads enqueued forever after upgrade sharing big file with oc causes a crash uploads replayed after upgrading uploads in progress do not end miscellaneous toolbar incorrect after browsing issues to have a look after this
| 1
|
2,337
| 7,681,311,125
|
IssuesEvent
|
2018-05-16 06:53:33
|
NEEOInc/neeo-sdk
|
https://api.github.com/repos/NEEOInc/neeo-sdk
|
closed
|
Subscribe/Unsubscribe Capabilities
|
scope Architecture status : in progress type : discussion
|
I really like that the SDK will subscribe/unsubscribe from the device. However, for complex devices there needs to be a way to subscribe/unsubscribe from each capability.
Example: let's say I have a complex device with, say, 50 capabilities. Of those 50, only 10 are general and the other 40 are 'advanced' use (i.e., think about the 'unpair' on Apple TV - it won't be used often). Now, some of those 40 may be resource-intensive and we don't want to start up those notifications unless we know they will be used.
What would be nice is if the capability is used (in a recipe or shortcut), the brain will 'subscribe' to those capabilities (like it does the device) and 'unsubscribe' if removed
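One way to picture this (a hypothetical sketch in Python, not the actual SDK API): reference-count each capability and only run the expensive notification source while at least one recipe or shortcut uses it.
```python
from collections import defaultdict
from typing import Callable


class CapabilitySubscriptions:
    """Ref-counted start/stop of per-capability notification sources."""

    def __init__(self, start: Callable[[str], None], stop: Callable[[str], None]):
        self._counts: dict = defaultdict(int)
        self._start, self._stop = start, stop

    def subscribe(self, capability: str) -> None:
        self._counts[capability] += 1
        if self._counts[capability] == 1:  # first user: spin it up
            self._start(capability)

    def unsubscribe(self, capability: str) -> None:
        if self._counts[capability] == 0:
            return
        self._counts[capability] -= 1
        if self._counts[capability] == 0:  # last user gone: shut it down
            self._stop(capability)
```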
|
1.0
|
Subscribe/Unsubscribe Capabilities - I really like that the SDK will subscribe/unsubscribe from the device. However, for complex devices there needs to be a way to subscribe/unsubscribe from each capability.
Example: let's say I have a complex device with, say, 50 capabilities. Of those 50, only 10 are general and the other 40 are 'advanced' use (i.e., think about the 'unpair' on Apple TV - it won't be used often). Now, some of those 40 may be resource-intensive and we don't want to start up those notifications unless we know they will be used.
What would be nice is if the capability is used (in a recipe or shortcut), the brain will 'subscribe' to those capabilities (like it does the device) and 'unsubscribe' if removed
|
architecture
|
subscribe unsubscribe capabilities i really like that the sdk will subscribe unsubscribe from the device however for complex devices there needs to be a way to subscribe unsubscribe from each capability example let s say i have a complex device with say capabilities of those only are general and the other are advanced use ie think about the unpair on apple tv won t be used often now some of those may be resource intensive and we don t want to start up those notifications unless we know they will be used what would be nice is if the capability is used in a recipe or shortcut the brain will subscribe to those capabilities like it does the device and unsubscribe if removed
| 1
|
11,667
| 32,047,776,696
|
IssuesEvent
|
2023-09-23 07:17:00
|
keephq/keep
|
https://api.github.com/repos/keephq/keep
|
closed
|
Improve the "foreach" mechanism to be more solid
|
help wanted architecture
|
# Current state
When running alert in "foreach" mode, the foreach context can be accessed via {{ foreach.value }}
For example, let's consider the next step:
```
- name: check-disk-defects
provider:
type: postgres
config: "{{ providers.postgres-server }}"
with:
query: "select * from disk"
foreach: "{{ steps.this.results }}"
condition:
- name: threshold-condition
type: threshold
value: " {{ foreach.value[13] }} " # disk defect is the 13th column
compare_to: 50, 40, 30
level: major, medium, minor
```
We can see that when we use ` foreach: "{{ steps.this.results }}"` every column can be accessed via " {{ foreach.value[i] }} "
But now we have a problem - the condition applies a level to each foreach - `level: major, medium, minor`, how can we store it as context so it'll be accessible via ` {{ foreach.level }}`?
So the current workaround for that is to add `**kwargs` to `set_condition_results` so every other context (e.g. `level`) will be stored in the additional context.
# What needs to be done
We need to design a better way for the `foreach` mechanism. I thought maybe of using a context manager (e.g. `__enter__` and `__exit__` every time you are in foreach mode) or even treating the foreach context as a dataclass.
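For instance, a minimal sketch of the context-manager idea (hypothetical names, not Keep's actual internals):
```python
from contextlib import contextmanager
from dataclasses import dataclass, field
from typing import Any


@dataclass
class ForeachContext:
    value: Any = None
    # Extra per-iteration context (e.g. the "level" a condition assigned),
    # instead of smuggling it through **kwargs.
    extra: dict = field(default_factory=dict)


@contextmanager
def foreach_scope(context: dict, item: Any):
    """Expose one iteration's item as {{ foreach.value }} and clean up on exit."""
    context["foreach"] = ForeachContext(value=item)
    try:
        yield context["foreach"]
    finally:
        del context["foreach"]


# Usage sketch: each iteration gets a fresh, well-defined foreach context.
ctx: dict = {}
for row in [("disk1", 55), ("disk2", 20)]:
    with foreach_scope(ctx, row) as fe:
        fe.extra["level"] = "major" if row[1] > 50 else "minor"
```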
|
1.0
|
Improve the "foreach" mechanism to be more solid - # Current state
When running alert in "foreach" mode, the foreach context can be accessed via {{ foreach.value }}
For example, let's consider the next step:
```
- name: check-disk-defects
provider:
type: postgres
config: "{{ providers.postgres-server }}"
with:
query: "select * from disk"
foreach: "{{ steps.this.results }}"
condition:
- name: threshold-condition
type: threshold
value: " {{ foreach.value[13] }} " # disk defect is the 13th column
compare_to: 50, 40, 30
level: major, medium, minor
```
We can see that when we use ` foreach: "{{ steps.this.results }}"` every column can be accessed via " {{ foreach.value[i] }} "
But now we have a problem - the condition applies a level to each foreach - `level: major, medium, minor`, how can we store it as context so it'll be accessible via ` {{ foreach.level }}`?
So the current workaround for that is to add `**kwargs` to `set_condition_results` so every other context (e.g. `level`) will be stored in the additional context.
# What needs to be done
We need to design a better way for the `foreach` mechanism. I thought maybe of using a context manager (e.g. `__enter__` and `__exit__` every time you are in foreach mode) or even treating the foreach context as a dataclass.
|
architecture
|
improve the foreach mechanism to be more solid current state when running alert in foreach mode the foreach context can be accessed via foreach value for example let s consider the next step name check disk defects provider type postgres config providers postgres server with query select from disk foreach steps this results condition name threshold condition type threshold value foreach value disk defect is the column compare to level major medium minor we can see that when we use foreach steps this results every column can be accessed via foreach value but now we have a problem the condition applies a level to each foreach level major medium minor how can we store it as context so it ll be accessible via foreach level so the current workaround for that is to add kwargs to set condition results so every other context e g level will be stored in the additional context what needs to be done we need to design a better way for the foreach mechanism i thought maybe using some contextmanager e g enter and exit every time you are in foreach mode or even treating the foreach context as a dataclass or something
| 1
|
6,876
| 15,705,943,254
|
IssuesEvent
|
2021-03-26 16:46:52
|
k3ntako/2do
|
https://api.github.com/repos/k3ntako/2do
|
closed
|
Validation for update todo endpoint
|
Automated Testing Deliver Working Software Service Oriented Architecture
|
Given: The server is running and API endpoints are available,
When: a PATCH request is made to `/api/todo/:id` without a body or a `isCompleted` value
Then: the endpoint should return a 400.
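A minimal sketch of that validation (Flask is used purely for illustration; the real project may use a different stack, and the handler name is hypothetical):
```python
from flask import Flask, jsonify, request

app = Flask(__name__)


@app.route("/api/todo/<todo_id>", methods=["PATCH"])
def update_todo(todo_id):
    body = request.get_json(silent=True)
    # Reject requests that have no body or no isCompleted value.
    if body is None or "isCompleted" not in body:
        return jsonify(error="isCompleted is required"), 400
    # ... apply the update here ...
    return jsonify(id=todo_id, isCompleted=body["isCompleted"]), 200
```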
|
1.0
|
Validation for update todo endpoint - Given: The server is running and API endpoints are available,
When: a PATCH request is made to `/api/todo/:id` without a body or a `isCompleted` value
Then: the endpoint should return a 400.
|
architecture
|
validation for update todo endpoint given the server is running and api endpoints are available when a patch request is made to api todo id without a body or a iscompleted value then the endpoint should return a
| 1
|
40,991
| 16,605,293,001
|
IssuesEvent
|
2021-06-02 02:28:53
|
terraform-providers/terraform-provider-azurerm
|
https://api.github.com/repos/terraform-providers/terraform-provider-azurerm
|
closed
|
`azurerm_storage_account` always flagged as changed when using `azure_files_authentication.directory_type = "AADDS"`
|
bug service/storage
|
<!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform (and AzureRM Provider) Version
```text
Terraform v0.15.3
on windows_amd64
+ provider registry.terraform.io/hashicorp/azurerm v2.60.0
```
### Affected Resource(s)
* `azurerm_storage_account`
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "azurerm_storage_account" "fslogix" {
name = "fslogix${random_id.random.dec}"
location = var.location
resource_group_name = azurerm_resource_group.fslogix.name
tags = var.tags
account_kind = "FileStorage"
account_tier = "Premium"
account_replication_type = "LRS"
azure_files_authentication {
directory_type = "AADDS"
}
}
```
### Expected Behaviour
There shouldn't be any changes
### Actual Behaviour
Every time `terraform apply` is executed, the storage account is flagged as changed:
```text
# azurerm_storage_account.fslogix will be updated in-place
~ resource "azurerm_storage_account" "fslogix" {
id = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/fslogix-rg/providers/Microsoft.Storage/storageAccounts/fslogix200"
name = "fslogix200"
# (20 unchanged attributes hidden)
~ azure_files_authentication {
# (1 unchanged attribute hidden)
- active_directory {
- domain_name = "aadds.example.com" -> null
}
}
# (1 unchanged block hidden)
}
```
### Steps to Reproduce
1. Use `azure_files_authentication.directory_type = "AADDS"`
2. `terraform apply`
|
1.0
|
`azurerm_storage_account` always flagged as changed when using `azure_files_authentication.directory_type = "AADDS"` - <!---
Please note the following potential times when an issue might be in Terraform core:
* [Configuration Language](https://www.terraform.io/docs/configuration/index.html) or resource ordering issues
* [State](https://www.terraform.io/docs/state/index.html) and [State Backend](https://www.terraform.io/docs/backends/index.html) issues
* [Provisioner](https://www.terraform.io/docs/provisioners/index.html) issues
* [Registry](https://registry.terraform.io/) issues
* Spans resources across multiple providers
If you are running into one of these scenarios, we recommend opening an issue in the [Terraform core repository](https://github.com/hashicorp/terraform/) instead.
--->
<!--- Please keep this note for the community --->
### Community Note
* Please vote on this issue by adding a 👍 [reaction](https://blog.github.com/2016-03-10-add-reactions-to-pull-requests-issues-and-comments/) to the original issue to help the community and maintainers prioritize this request
* Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
* If you are interested in working on this issue or have submitted a pull request, please leave a comment
<!--- Thank you for keeping this note for the community --->
### Terraform (and AzureRM Provider) Version
```text
Terraform v0.15.3
on windows_amd64
+ provider registry.terraform.io/hashicorp/azurerm v2.60.0
```
### Affected Resource(s)
* `azurerm_storage_account`
### Terraform Configuration Files
<!--- Information about code formatting: https://help.github.com/articles/basic-writing-and-formatting-syntax/#quoting-code --->
```hcl
resource "azurerm_storage_account" "fslogix" {
name = "fslogix${random_id.random.dec}"
location = var.location
resource_group_name = azurerm_resource_group.fslogix.name
tags = var.tags
account_kind = "FileStorage"
account_tier = "Premium"
account_replication_type = "LRS"
azure_files_authentication {
directory_type = "AADDS"
}
}
```
### Expected Behaviour
There shouldn't be any changes
### Actual Behaviour
Every time `terraform apply` is executed, the storage account is flagged as changed:
```text
# azurerm_storage_account.fslogix will be updated in-place
~ resource "azurerm_storage_account" "fslogix" {
id = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/fslogix-rg/providers/Microsoft.Storage/storageAccounts/fslogix200"
name = "fslogix200"
# (20 unchanged attributes hidden)
~ azure_files_authentication {
# (1 unchanged attribute hidden)
- active_directory {
- domain_name = "aadds.example.com" -> null
}
}
# (1 unchanged block hidden)
}
```
### Steps to Reproduce
1. Use `azure_files_authentication.directory_type = "AADDS"`
2. `terraform apply`
|
non_architecture
|
azurerm storage account always flagged as changed when using azure files authentication directory type aadds please note the following potential times when an issue might be in terraform core or resource ordering issues and issues issues issues spans resources across multiple providers if you are running into one of these scenarios we recommend opening an issue in the instead community note please vote on this issue by adding a 👍 to the original issue to help the community and maintainers prioritize this request please do not leave or me too comments they generate extra noise for issue followers and do not help prioritize the request if you are interested in working on this issue or have submitted a pull request please leave a comment terraform and azurerm provider version text terraform on windows provider registry terraform io hashicorp azurerm affected resource s azurerm storage account terraform configuration files hcl resource azurerm storage account fslogix name fslogix random id random dec location var location resource group name azurerm resource group fslogix name tags var tags account kind filestorage account tier premium account replication type lrs azure files authentication directory type aadds expected behaviour there shouldn t be any changes actual behaviour everytime terraform apply is executed the storage account is flagged as changed text azurerm storage account fslogix will be updated in place resource azurerm storage account fslogix id subscriptions resourcegroups fslogix rg providers microsoft storage storageaccounts name unchanged attributes hidden azure files authentication unchanged attribute hidden active directory domain name aadds example com null unchanged block hidden steps to reproduce use azure files authentication directory type aadds terraform apply
| 0
|
11,450
| 30,540,437,396
|
IssuesEvent
|
2023-07-19 20:53:12
|
opendatahub-io/opendatahub-operator
|
https://api.github.com/repos/opendatahub-io/opendatahub-operator
|
closed
|
Create a status field that lists installed components
|
rearchitecture
|
Create a status field that will update the DataScienceCluster status with a list of the components that are installed at any given moment.
This is useful for other components, such as the Dashboard, to reference.
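Conceptually, something like the following (a Python sketch of the status shape only — the operator itself is written in Go, and the field names here are assumptions):
```python
from dataclasses import dataclass, field


@dataclass
class DataScienceClusterStatus:
    phase: str = "Ready"
    # Names of the components currently installed, refreshed on every
    # reconcile so consumers such as the Dashboard can read them.
    installed_components: list = field(default_factory=list)


def reconcile_status(status: DataScienceClusterStatus, desired: dict) -> None:
    status.installed_components = sorted(
        name for name, enabled in desired.items() if enabled
    )


status = DataScienceClusterStatus()
reconcile_status(status, {"dashboard": True, "workbenches": True, "modelmesh": False})
# status.installed_components == ["dashboard", "workbenches"]
```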
|
1.0
|
Create a status field that lists installed components - Create a status field that will update the DataScienceCluster status with a list of the components that are installed at any given moment.
This is useful for other components, such as the Dashboard, to reference.
|
architecture
|
create a status field that lists installed components create a status field that will update the datasciencecluster status with list of components that are installed at any given moment this is useful to be referenced by other components like dashboard
| 1
|
2,696
| 8,202,418,656
|
IssuesEvent
|
2018-09-02 09:03:37
|
poanetwork/blockscout
|
https://api.github.com/repos/poanetwork/blockscout
|
opened
|
Cannot allocate 1318267840 bytes of memory
|
bug chain: ETH priority: urgent team: architecture
|
With the large number of deferred processes (internal transactions, balances, and tokens) we have run out of memory on the Ethereum Mainnet deployment, causing the Erlang VM to crash.
After a certain number of deferred processes, we should start deleting them from memory in order to stop the crashes.
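The mitigation amounts to a bounded buffer: once the backlog passes a cap, shed the oldest deferred items rather than let memory grow without limit. A sketch of the idea (Python for illustration only — Blockscout itself is Elixir, and the cap below is an arbitrary placeholder):
```python
from collections import deque

MAX_DEFERRED = 100_000  # placeholder cap, to be tuned for the deployment

# A deque with maxlen silently evicts the oldest entry once full,
# bounding memory instead of letting the backlog grow unchecked.
deferred = deque(maxlen=MAX_DEFERRED)

def enqueue(task):
    deferred.append(task)  # oldest task is dropped when at capacity
```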
### Environment
* Operating System: Linux
### Steps to reproduce
1. Deploy Ethereum Mainnet
2. View `erl_crash.dump` file after 12-14 hours
### Expected behaviour
Not crash.
### Actual behaviour
Crashes
|
1.0
|
Cannot allocate 1318267840 bytes of memory - With the large number of deferred processes (internal transactions, balances, and tokens) we have run out of memory on the Ethereum Mainnet deployment, causing the Erlang VM to crash.
After a certain number of deferred processes, we should start deleting them from memory in order to stop the crashes.
### Environment
* Operating System: Linux
### Steps to reproduce
1. Deploy Ethereum Mainnet
2. View `erl_crash.dump` file after 12-14 hours
### Expected behaviour
Not crash.
### Actual behaviour
Crashes
|
architecture
|
cannot allocate bytes of memory with the large number of deferred processes internal transactions balances and tokens we have run out of memory on ethereum mainnet deployment causing the erlang vm to crash after a certain number of deferred processes we should start deleting them from memory in order to stop the crashes environment operating system linux steps to reproduce deploy ethereum mainnnet view erl crash dump file after hours expected behaviour not crash actual behaviour crashes
| 1
|
9,682
| 25,031,942,885
|
IssuesEvent
|
2022-11-04 13:08:17
|
dotnet/docs
|
https://api.github.com/repos/dotnet/docs
|
closed
|
inefficient code example
|
:watch: Not Triaged Pri1 dotnet-architecture/prod modern-web-apps-azure/tech in-pr
|
in "Encapsulating data" section at https://learn.microsoft.com/en-us/dotnet/architecture/modern-web-apps-azure/work-with-data-in-asp-net-core-apps#encapsulating-data the code sample is inefficient since the update case would require TWO scans of the Items collection, so IMHO better to code the
var existingItem = Items.FirstOrDefault(i => i.CatalogItemId == catalogItemId);
and then test that nullable object
if (existingItem == null) { _items.Add(new BasketItem {..}); } else existingItem.Quantity += quantity;
thus eliminating the 2nd scan, the Any() call, and the early return statement.
The example used here of an in-memory database is perhaps not too bad, but devs should realise that multiple scans of a real remote relational db could incur 2 roundtrips. doh!
Actually, I also quarrel with using the public Items wrapper rather than the private underlying _items List:
surely this code should manipulate its private List to avoid the BCL having to brew up a wrapper [again]?
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 8e1a443c-c3dd-b865-58b2-0ff0902308ce
* Version Independent ID: d974261f-2544-dee9-f401-e02d715012cb
* Content: [Work with data in ASP.NET Core Apps](https://learn.microsoft.com/en-us/dotnet/architecture/modern-web-apps-azure/work-with-data-in-asp-net-core-apps)
* Content Source: [docs/architecture/modern-web-apps-azure/work-with-data-in-asp-net-core-apps.md](https://github.com/dotnet/docs/blob/main/docs/architecture/modern-web-apps-azure/work-with-data-in-asp-net-core-apps.md)
* Product: **dotnet-architecture**
* Technology: **modern-web-apps-azure**
* GitHub Login: @ardalis
* Microsoft Alias: **wiwagn**
|
1.0
|
inefficient code example - In the "Encapsulating data" section at https://learn.microsoft.com/en-us/dotnet/architecture/modern-web-apps-azure/work-with-data-in-asp-net-core-apps#encapsulating-data, the code sample is inefficient, since the update case would require TWO scans of the Items collection; IMHO it is better to code
var existingItem = Items.FirstOrDefault(i => i.CatalogItemId == catalogItemId);
and then test that nullable object
if (existingItem == null) { _items.Add(new BasketItem {..}); } else existingItem.Quantity += quantity;
thus eliminating the 2nd scan, the Any() call, and the early return statement.
The example used here of an in-memory database is perhaps not too bad, but devs should realise that multiple scans of a real remote relational db could incur 2 roundtrips. doh!
Actually, I also quarrel with using the public Items wrapper rather than the private underlying _items List:
surely this code should manipulate its private List to avoid the BCL having to brew up a wrapper [again]?
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: 8e1a443c-c3dd-b865-58b2-0ff0902308ce
* Version Independent ID: d974261f-2544-dee9-f401-e02d715012cb
* Content: [Work with data in ASP.NET Core Apps](https://learn.microsoft.com/en-us/dotnet/architecture/modern-web-apps-azure/work-with-data-in-asp-net-core-apps)
* Content Source: [docs/architecture/modern-web-apps-azure/work-with-data-in-asp-net-core-apps.md](https://github.com/dotnet/docs/blob/main/docs/architecture/modern-web-apps-azure/work-with-data-in-asp-net-core-apps.md)
* Product: **dotnet-architecture**
* Technology: **modern-web-apps-azure**
* GitHub Login: @ardalis
* Microsoft Alias: **wiwagn**
|
architecture
|
inefficient code example in encapsulating data section at the code sample is inefficient since the update case would require two scans of the items collection so imho better to code the var existingitem items firstordefault i i catalogitemid catalogitemid and then test that nullable object if existingitem null items add new basketitem else existingitem quantity quantity so eliminating scan the any and the return statement example used here of in memory database perhaps not too bad but devs should realise that multiple scans of a real remote relational db could incur roundtrips doh actually i also quarrel with using the public items wrapper rather than the private underlying items list surely this code should manipulate its private list to avoid list bcl having to brew up a wrapper document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source product dotnet architecture technology modern web apps azure github login ardalis microsoft alias wiwagn
| 1
|
64,561
| 7,816,295,562
|
IssuesEvent
|
2018-06-13 03:42:05
|
hackforla/spare
|
https://api.github.com/repos/hackforla/spare
|
closed
|
Design - Create About page
|
design help wanted
|
Design the "About" page. It should include information about:
- Why this project/website exists
- The need and homelessness in LA
- Hack for LA
- Who's on Our Team
|
1.0
|
Design - Create About page - Design the "About" page. It should include information about:
- Why this project/website exists
- The need and homelessness in LA
- Hack for LA
- Who's on Our Team
|
non_architecture
|
design create about page design the about page it should include information about why this project website exists the need and homelessness in la hack for la who s on our team
| 0
|
7,841
| 19,621,623,489
|
IssuesEvent
|
2022-01-07 07:36:28
|
Azure/azure-sdk
|
https://api.github.com/repos/Azure/azure-sdk
|
opened
|
Board Review: Introducing Spring Cloud Azure 4.0
|
architecture board-review
|
Thank you for starting the process for approval of the client library for your Azure service. Thorough review of your client library ensures that your APIs are consistent with the guidelines and the consumers of your client library have a consistently good experience when using Azure.
**The Architecture Board reviews [Track 2 libraries](https://azure.github.io/azure-sdk/general_introduction.html) only.** If your library does not meet this requirement, please reach out to [Architecture Board](adparch@microsoft.com) before creating the issue.
Please reference our [review process guidelines](https://azure.github.io/azure-sdk/policies_reviewprocess.html) to understand what is being asked for in the issue template.
**Before submitting, ensure you adjust the title of the issue appropriately.**
**Note that the required material must be included before a meeting can be scheduled.**
## Contacts and Timeline
* Main contacts:
* Strong Liu (@stliu, Spring Cloud Azure team lead)
* Sean Li (@seanli1988, Spring Cloud Azure PM)
* Xiaolu Dai (@saragluna)
* Expected GA date for this library:
* **mid-March, 2022**
## About the Service
Spring is the most popular application development framework for enterprise Java. There are many Spring projects in the Spring ecosystem to take care of the different needs in application development. The Spring Cloud Azure team is working on the integration between Azure and Spring. We provide Azure support for different Spring projects, such as Spring Boot, Spring Data, Spring Integration, Spring Security, and Spring Cloud Stream.
* Link to documentation introducing/describing the service:
- https://spring.io/
- https://docs.microsoft.com/en-us/azure/developer/java/spring-framework/
* Link to the service REST APIs:
N/A
* Is the goal to release a Public Preview, Private Preview, or GA?
The Spring Cloud Azure libraries exist for years, and this time we are going to GA a major version, the Spring Cloud Azure 4.0, which is a refactored new version.
## About the client library
* Name of client library:
Spring Cloud Azure is our project name. And we will ship ~40 artifacts this time:
- spring-cloud-azure-core
- spring-cloud-azure-service
- spring-cloud-azure-resourcemanager
- spring-cloud-azure-trace-sleuth
- spring-messaging-azure
- spring-messaging-azure-eventhubs
- spring-messaging-azure-servicebus
- spring-messaging-azure-storage-queue
- spring-integration-azure-core
- spring-integration-azure-eventhubs
- spring-integration-azure-servicebus
- spring-integration-azure-storage-queue
- spring-cloud-azure-stream-binder-eventhubs
- spring-cloud-azure-stream-binder-eventhubs-core
- spring-cloud-azure-stream-binder-servicebus
- spring-cloud-azure-stream-binder-servicebus-core
- spring-cloud-azure-autoconfigure
- spring-cloud-azure-actuator
- spring-cloud-azure-actuator-autoconfigure
- spring-cloud-azure-starter
- spring-cloud-azure-starter-activedirectory
- spring-cloud-azure-starter-activedirectory-b2c
- spring-cloud-azure-starter-actuator
- spring-cloud-azure-starter-appconfiguration
- spring-cloud-azure-starter-cosmos
- spring-cloud-azure-starter-data-cosmos
- spring-cloud-azure-starter-eventhubs
- spring-cloud-azure-starter-keyvault-secrets
- spring-cloud-azure-starter-servicebus
- spring-cloud-azure-starter-servicebus-jms
- spring-cloud-azure-starter-storage-blob
- spring-cloud-azure-starter-storage-file-share
- spring-cloud-azure-starter-storage-queue
- spring-cloud-azure-starter-integration-eventhubs
- spring-cloud-azure-starter-integration-servicebus
- spring-cloud-azure-starter-integration-storage-queue
- spring-cloud-azure-starter-stream-eventhubs
- spring-cloud-azure-starter-stream-servicebus
* Link to library reference documentation:
https://microsoft.github.io/spring-cloud-azure/4.0.0-beta.3/4.0.0-beta.3/reference/html/index.html
* Is there an existing SDK library? If yes, provide link:
Yes, we have shipped these libraries before, but the artifact names are changed in this version.
https://repo1.maven.org/maven2/com/azure/spring/
## Step 1: Champion Scenarios
Ultimately the library should be easy to use for common scenarios that developers want. Consider the following questions when thinking about champion scenarios:
1. What is the app the developer is building that uses your client library?
Typically a Spring or Spring Boot application.
2. Who is the end-user of the application (the developer's customer)?
The developer's customer.
3. What features of the API need to be explained in the sample so that someone could use this API in real app?
Will explain in the champion scenarios.
4. How does the **authentication** workflow look?
Build on top of azure sdks and azure-identity.
See Champion Scenario section [here](https://azure.github.io/azure-sdk/policies_reviewprocess.html).
Code is appreciated but optional. Pseudocode is fine.
- Use the auto-configuration provided by Spring Boot: https://gist.github.com/saragluna/7e1754a7d17d0820f223ddba8621e2c7.
- Use Resource Handling abstractions provided by Spring Framework: https://gist.github.com/backwind1233/0f5a5e11ed9856db141f4cdc1c285971.
- Use Enterprise Integration Patterns (EIP) supported by Spring Integration: https://gist.github.com/yiliuTo/352b071a717fd90c984bcf03ec5d0fb9.
## Step 2: Quickstart Samples (Optional)
Include samples demonstrating how to consume the client library if available:
We have a sample repo hosting all our sample projects: https://github.com/Azure-Samples/azure-spring-boot-samples/tree/spring-cloud-azure_4.0
## Thank you for your submission!
|
1.0
|
Board Review: Introducing Spring Cloud Azure 4.0 - Thank you for starting the process for approval of the client library for your Azure service. Thorough review of your client library ensures that your APIs are consistent with the guidelines and the consumers of your client library have a consistently good experience when using Azure.
**The Architecture Board reviews [Track 2 libraries](https://azure.github.io/azure-sdk/general_introduction.html) only.** If your library does not meet this requirement, please reach out to [Architecture Board](adparch@microsoft.com) before creating the issue.
Please reference our [review process guidelines](https://azure.github.io/azure-sdk/policies_reviewprocess.html) to understand what is being asked for in the issue template.
**Before submitting, ensure you adjust the title of the issue appropriately.**
**Note that the required material must be included before a meeting can be scheduled.**
## Contacts and Timeline
* Main contacts:
* Strong Liu (@stliu, Spring Cloud Azure team lead)
* Sean Li (@seanli1988, Spring Cloud Azure PM)
* Xiaolu Dai (@saragluna)
* Expected GA date for this library:
* **mid-March, 2022**
## About the Service
Spring is the most popular application development framework for enterprise Java. There are many Spring projects in the Spring ecosystem to take care of the different needs in application development. The Spring Cloud Azure team is working on the integration between Azure and Spring. We provide Azure support for different Spring projects, such as Spring Boot, Spring Data, Spring Integration, Spring Security, and Spring Cloud Stream.
* Link to documentation introducing/describing the service:
- https://spring.io/
- https://docs.microsoft.com/en-us/azure/developer/java/spring-framework/
* Link to the service REST APIs:
N/A
* Is the goal to release a Public Preview, Private Preview, or GA?
The Spring Cloud Azure libraries exist for years, and this time we are going to GA a major version, the Spring Cloud Azure 4.0, which is a refactored new version.
## About the client library
* Name of client library:
Spring Cloud Azure is our project name. And we will ship ~40 artifacts this time:
- spring-cloud-azure-core
- spring-cloud-azure-service
- spring-cloud-azure-resourcemanager
- spring-cloud-azure-trace-sleuth
- spring-messaging-azure
- spring-messaging-azure-eventhubs
- spring-messaging-azure-servicebus
- spring-messaging-azure-storage-queue
- spring-integration-azure-core
- spring-integration-azure-eventhubs
- spring-integration-azure-servicebus
- spring-integration-azure-storage-queue
- spring-cloud-azure-stream-binder-eventhubs
- spring-cloud-azure-stream-binder-eventhubs-core
- spring-cloud-azure-stream-binder-servicebus
- spring-cloud-azure-stream-binder-servicebus-core
- spring-cloud-azure-autoconfigure
- spring-cloud-azure-actuator
- spring-cloud-azure-actuator-autoconfigure
- spring-cloud-azure-starter
- spring-cloud-azure-starter-activedirectory
- spring-cloud-azure-starter-activedirectory-b2c
- spring-cloud-azure-starter-actuator
- spring-cloud-azure-starter-appconfiguration
- spring-cloud-azure-starter-cosmos
- spring-cloud-azure-starter-data-cosmos
- spring-cloud-azure-starter-eventhubs
- spring-cloud-azure-starter-keyvault-secrets
- spring-cloud-azure-starter-servicebus
- spring-cloud-azure-starter-servicebus-jms
- spring-cloud-azure-starter-storage-blob
- spring-cloud-azure-starter-storage-file-share
- spring-cloud-azure-starter-storage-queue
- spring-cloud-azure-starter-integration-eventhubs
- spring-cloud-azure-starter-integration-servicebus
- spring-cloud-azure-starter-integration-storage-queue
- spring-cloud-azure-starter-stream-eventhubs
- spring-cloud-azure-starter-stream-servicebus
* Link to library reference documentation:
https://microsoft.github.io/spring-cloud-azure/4.0.0-beta.3/4.0.0-beta.3/reference/html/index.html
* Is there an existing SDK library? If yes, provide link:
Yes, we have shipped these libraries before, but the artifact names are changed in this version.
https://repo1.maven.org/maven2/com/azure/spring/
## Step 1: Champion Scenarios
Ultimately the library should be easy to use for common scenarios that developers want. Consider the following questions when thinking about champion scenarios:
1. What is the app the developer is building that uses your client library?
Typically a Spring or Spring Boot application.
2. Who is the end-user of the application (the developer's customer)?
The developer's customer.
3. What features of the API need to be explained in the sample so that someone could use this API in real app?
Will explain in the champion scenarios.
4. How does the **authentication** workflow look?
Build on top of azure sdks and azure-identity.
See Champion Scenario section [here](https://azure.github.io/azure-sdk/policies_reviewprocess.html).
Code is appreciated but optional. Pseudocode is fine.
- Use the auto-configuration provided by Spring Boot: https://gist.github.com/saragluna/7e1754a7d17d0820f223ddba8621e2c7.
- Use Resource Handling abstractions provided by Spring Framework: https://gist.github.com/backwind1233/0f5a5e11ed9856db141f4cdc1c285971.
- Use Enterprise Integration Patterns (EIP) supported by Spring Integration: https://gist.github.com/yiliuTo/352b071a717fd90c984bcf03ec5d0fb9.
## Step 2: Quickstart Samples (Optional)
Include samples demonstrating how to consume the client library if available:
We have a sample repo hosting all our sample projects: https://github.com/Azure-Samples/azure-spring-boot-samples/tree/spring-cloud-azure_4.0
## Thank you for your submission!
|
architecture
|
board review introducing spring cloud azure thank you for starting the process for approval of the client library for your azure service thorough review of your client library ensures that your apis are consistent with the guidelines and the consumers of your client library have a consistently good experience when using azure the architecture board reviews only if your library does not meet this requirement please reach out to adparch microsoft com before creating the issue please reference our to understand what is being asked for in the issue template before submitting ensure you adjust the title of the issue appropriately note that the required material must be included before a meeting can be scheduled contacts and timeline main contacts strong liu stliu spring cloud azure team lead sean li spring cloud azure pm xiaolu dai saragluna expected ga date for this library mid march about the service spring is the most popular application development framework for enterprise java there are many spring projects in the spring ecosystem to take care of the different needs in application development the spring cloud azure team is working on the integration between azure and spring we provide azure support for different spring projects such as spring boot spring data spring integration spring security and spring cloud stream link to documentation introducing describing the service link to the service rest apis n a is the goal to release a public preview private preview or ga the spring cloud azure libraries exist for years and this time we are going to ga a major version the spring cloud azure which is a refactored new version about the client library name of client library spring cloud azure is our project name and we will ship artifacts this time spring cloud azure core spring cloud azure service spring cloud azure resourcemanger spring cloud azure trace sleuth spring messaging azure spring messaging azure eventhubs spring messaging azure servicebus spring messaging azure storage queue spring integration azure core spring integration azure eventhubs spring integration azure servicebus spring integration azure storage queue spring cloud azure stream binder eventhubs spring cloud azure stream binder eventhubs core spring cloud azure stream binder serviceubs spring cloud azure stream binder serviceubs core spring cloud azure autoconfigure spring cloud azure actuator spring cloud azure actuator autoconfigure spring cloud azure starter spring cloud azure starter activedirectory spring cloud azure starter activedirectory spring cloud azure starter actuator spring cloud azure starter appconfiguration spring cloud azure starter cosmos spring cloud azure starter data cosmos spring cloud azure starter eventhubs spring cloud azure starter keyvault secrets spring cloud azure starter servicebus spring cloud azure starter servicebus jms spring cloud azure starter storage blob spring cloud azure starter storage file share spring cloud azure starter storage queue spring cloud azure starter integration eventhubs spring cloud azure starter integration servicebus spring cloud azure starter integration storage queue spring cloud azure starter stream eventhubs spring cloud azure starter stream servicebus link to library reference documentation is there an existing sdk library if yes provide link yes we have shipped these libraries before but the artifact names are changed in this version step champion scenarios ultimately the library should be easy to use for common scenarios that developers want consider the 
following questions when thinking about champion scenarios what is the app the developer is building that uses your client library typically a spring or spring boot application who is the end user of the application the developer s customer the developer s customer what features of the api need to be explained in the sample so that someone could use this api in real app will explain in the champion scenarios how does the authentication workflow look build on top of azure sdks and azure identity see champion scenario section code is appreciated but optional pseudocode is fine use the auto configuration provided by spring boot use resource handling abstractions provided by spring framework use enterprise integration patterns eip supported by spring integration step quickstart samples optional include samples demonstrating how to consume the client library if available we have a sample repo hosting all our sample projects thank you for your submission
| 1
|
11,025
| 27,777,356,716
|
IssuesEvent
|
2023-03-16 18:10:52
|
Azure/azure-sdk
|
https://api.github.com/repos/Azure/azure-sdk
|
opened
|
Board Review: Supporting Http Trailers
|
architecture board-review
|
We are requesting that Azure.Core support HTTP trailers in .NET, Java, Python, C++, and Go as a Gallium ask.
More info - https://dev.azure.com/devdiv/DevDiv/_workitems/edit/1749252
|
1.0
|
Board Review: Supporting Http Trailers - We are requesting that Azure.Core support HTTP trailers in .NET, Java, Python, C++, and Go as a Gallium ask.
More info - https://dev.azure.com/devdiv/DevDiv/_workitems/edit/1749252
|
architecture
|
board review supporting http trailers we are requesting azure core support http trailers in net java python c and go as an gallium ask more info
| 1
|
10,059
| 26,164,021,224
|
IssuesEvent
|
2023-01-01 02:07:07
|
facebook/react-native
|
https://api.github.com/repos/facebook/react-native
|
closed
|
[0.68] redefinition of functions in Props.h generated by codegen
|
Stale Platform: Android Needs: Author Feedback Needs: Repro Tech: Codegen Type: New Architecture
|
### Description
I found that codegen generated a cpp file which has two functions with the same signature, which leads to a redefinition problem in C++:
```
In file included from /Users/x/source/qrn-lib/yReact/Android/QRNLib/build/generated/source/codegen/jni/react/renderer/components/my_appmodules/ShadowNodes.cpp:11:
In file included from /Users/x/source/qrn-lib/yReact/Android/QRNLib/build/generated/source/codegen/jni/react/renderer/components/my_appmodules/ShadowNodes.h:14:
/Users/x/source/qrn-lib/yReact/Android/QRNLib/build/generated/source/codegen/jni/react/renderer/components/my_appmodules/Props.h:619:20: error: redefinition of 'fromRawValue'
static inline void fromRawValue(const PropsParserContext& context, const RawValue &value, ArrayPropsNativeComponentViewSizesMask &result) {
^
/Users/x/source/qrn-lib/yReact/Android/QRNLib/build/generated/source/codegen/jni/react/renderer/components/my_appmodules/Props.h:524:20: note: previous definition is here
static inline void fromRawValue(const PropsParserContext& context, const RawValue &value, ModalHostViewSupportedOrientationsMask &result) {
^
/Users/x/source/qrn-lib/yReact/Android/QRNLib/build/generated/source/codegen/jni/react/renderer/components/my_appmodules/Props.h:634:27: error: redefinition of 'toString'
static inline std::string toString(const ArrayPropsNativeComponentViewSizesMask &value) {
^
/Users/x/source/qrn-lib/yReact/Android/QRNLib/build/generated/source/codegen/jni/react/renderer/components/my_appmodules/Props.h:551:27: note: previous definition is here
static inline std::string toString(const ModalHostViewSupportedOrientationsMask &value) {
^
2 errors generated.
```
In the C++ file, type `ModalHostViewSupportedOrientationsMask` and type `ArrayPropsNativeComponentViewSizesMask` are both `uint32_t`.
I followed this [tutorial](https://reactnative.dev/docs/new-architecture-app-intro)
### Version
0.68.1
### Output of `npx react-native info`
I downloaded the React Native source manually and didn't use npm.
### Steps to reproduce
I followed this [tutorial](https://reactnative.dev/docs/new-architecture-app-intro)
### Snack, code example, screenshot, or link to a repository
```
apply plugin: 'com.android.library'
apply plugin: "com.facebook.react"
react {
root = rootDir
reactNativeDir = rootDir
libraryName = "my_appmodules"
}
android {
compileSdkVersion rootProject.properties.compileSdkVersion as int
compileOptions {
sourceCompatibility JavaVersion.VERSION_1_8
targetCompatibility JavaVersion.VERSION_1_8
}
defaultConfig {
minSdkVersion rootProject.properties.minSdkVersion as int
targetSdkVersion rootProject.properties.targetSdkVersion as int
versionCode 1
versionName "1.0"
externalNativeBuild {
ndkBuild {
arguments "APP_PLATFORM=android-21",
"APP_STL=c++_shared",
"NDK_TOOLCHAIN_VERSION=clang",
"GENERATED_SRC_DIR=$buildDir/generated/source",
"PROJECT_BUILD_DIR=$buildDir",
"REACT_ANDROID_DIR=$rootDir/ReactAndroid",
"REACT_ANDROID_BUILD_DIR=$rootDir/ReactAndroid/build"
cFlags "-Wall", "-Werror", "-fexceptions", "-frtti", "-DWITH_INSPECTOR=1"
cppFlags "-std=c++17"
targets "my_appmodules"
}
}
}
externalNativeBuild {
ndkBuild {
path "$projectDir/src/main/jni/Android.mk"
}
}
def reactAndroidProjectDir = project(':ReactAndroid').projectDir
def packageReactNdkLibs = tasks.register("packageReactNdkLibs", Copy) {
dependsOn(":ReactAndroid:packageReactNdkLibsForBuck")
dependsOn("generateCodegenArtifactsFromSchema")
from("$reactAndroidProjectDir/src/main/jni/prebuilt/lib")
into("$buildDir/react-ndk/exported")
}
afterEvaluate {
preBuild.dependsOn(packageReactNdkLibs)
configureNdkBuildDebug.dependsOn(preBuild)
configureNdkBuildRelease.dependsOn(preBuild)
}
packagingOptions {
pickFirst '**/libhermes.so'
pickFirst '**/libjsc.so'
}
...
}
buildscript {
repositories {
maven {
url "https://repo1.maven.org/maven2"
}
}
}
allprojects {
repositories {
maven {
url "https://repo1.maven.org/maven2"
}
}
}
dependencies {
api fileTree(include: ['*.jar'], dir: 'libs')
implementation project(":ReactAndroid")
...
}
```
|
1.0
|
[0.68] redefinition of functions in Props.h generated by codegen - ### Description
I found that codegen generated a cpp file which has two functions with the same signature, which leads to a redefinition problem in C++:
```
In file included from /Users/x/source/qrn-lib/yReact/Android/QRNLib/build/generated/source/codegen/jni/react/renderer/components/my_appmodules/ShadowNodes.cpp:11:
In file included from /Users/x/source/qrn-lib/yReact/Android/QRNLib/build/generated/source/codegen/jni/react/renderer/components/my_appmodules/ShadowNodes.h:14:
/Users/x/source/qrn-lib/yReact/Android/QRNLib/build/generated/source/codegen/jni/react/renderer/components/my_appmodules/Props.h:619:20: error: redefinition of 'fromRawValue'
static inline void fromRawValue(const PropsParserContext& context, const RawValue &value, ArrayPropsNativeComponentViewSizesMask &result) {
^
/Users/x/source/qrn-lib/yReact/Android/QRNLib/build/generated/source/codegen/jni/react/renderer/components/my_appmodules/Props.h:524:20: note: previous definition is here
static inline void fromRawValue(const PropsParserContext& context, const RawValue &value, ModalHostViewSupportedOrientationsMask &result) {
^
/Users/x/source/qrn-lib/yReact/Android/QRNLib/build/generated/source/codegen/jni/react/renderer/components/my_appmodules/Props.h:634:27: error: redefinition of 'toString'
static inline std::string toString(const ArrayPropsNativeComponentViewSizesMask &value) {
^
/Users/x/source/qrn-lib/yReact/Android/QRNLib/build/generated/source/codegen/jni/react/renderer/components/my_appmodules/Props.h:551:27: note: previous definition is here
static inline std::string toString(const ModalHostViewSupportedOrientationsMask &value) {
^
2 errors generated.
```
In the C++ file, type `ModalHostViewSupportedOrientationsMask` and type `ArrayPropsNativeComponentViewSizesMask` are both `uint32_t`.
I followed this [tutorial](https://reactnative.dev/docs/new-architecture-app-intro)
### Version
0.68.1
### Output of `npx react-native info`
I downloaded the React Native source manually and didn't use npm.
### Steps to reproduce
I followed this [tutorial](https://reactnative.dev/docs/new-architecture-app-intro)
### Snack, code example, screenshot, or link to a repository
```
apply plugin: 'com.android.library'
apply plugin: "com.facebook.react"
react {
root = rootDir
reactNativeDir = rootDir
libraryName = "my_appmodules"
}
android {
compileSdkVersion rootProject.properties.compileSdkVersion as int
compileOptions {
sourceCompatibility JavaVersion.VERSION_1_8
targetCompatibility JavaVersion.VERSION_1_8
}
defaultConfig {
minSdkVersion rootProject.properties.minSdkVersion as int
targetSdkVersion rootProject.properties.targetSdkVersion as int
versionCode 1
versionName "1.0"
externalNativeBuild {
ndkBuild {
arguments "APP_PLATFORM=android-21",
"APP_STL=c++_shared",
"NDK_TOOLCHAIN_VERSION=clang",
"GENERATED_SRC_DIR=$buildDir/generated/source",
"PROJECT_BUILD_DIR=$buildDir",
"REACT_ANDROID_DIR=$rootDir/ReactAndroid",
"REACT_ANDROID_BUILD_DIR=$rootDir/ReactAndroid/build"
cFlags "-Wall", "-Werror", "-fexceptions", "-frtti", "-DWITH_INSPECTOR=1"
cppFlags "-std=c++17"
targets "my_appmodules"
}
}
}
externalNativeBuild {
ndkBuild {
path "$projectDir/src/main/jni/Android.mk"
}
}
def reactAndroidProjectDir = project(':ReactAndroid').projectDir
def packageReactNdkLibs = tasks.register("packageReactNdkLibs", Copy) {
dependsOn(":ReactAndroid:packageReactNdkLibsForBuck")
dependsOn("generateCodegenArtifactsFromSchema")
from("$reactAndroidProjectDir/src/main/jni/prebuilt/lib")
into("$buildDir/react-ndk/exported")
}
afterEvaluate {
preBuild.dependsOn(packageReactNdkLibs)
configureNdkBuildDebug.dependsOn(preBuild)
configureNdkBuildRelease.dependsOn(preBuild)
}
packagingOptions {
pickFirst '**/libhermes.so'
pickFirst '**/libjsc.so'
}
...
}
buildscript {
repositories {
maven {
url "https://repo1.maven.org/maven2"
}
}
}
allprojects {
repositories {
maven {
url "https://repo1.maven.org/maven2"
}
}
}
dependencies {
api fileTree(include: ['*.jar'], dir: 'libs')
implementation project(":ReactAndroid")
...
}
```
|
architecture
|
redefinition of funcitons in props h generated by codegen description i found that codegen genated a cpp file which has two functions with the same signature which will lead to redifine problem in c in file included from users x source qrn lib yreact android qrnlib build generated source codegen jni react renderer components my appmodules shadownodes cpp in file included from users x source qrn lib yreact android qrnlib build generated source codegen jni react renderer components my appmodules shadownodes h users x source qrn lib yreact android qrnlib build generated source codegen jni react renderer components my appmodules props h error redefinition of fromrawvalue static inline void fromrawvalue const propsparsercontext context const rawvalue value arraypropsnativecomponentviewsizesmask result users x source qrn lib yreact android qrnlib build generated source codegen jni react renderer components my appmodules props h note previous definition is here static inline void fromrawvalue const propsparsercontext context const rawvalue value modalhostviewsupportedorientationsmask result users x source qrn lib yreact android qrnlib build generated source codegen jni react renderer components my appmodules props h error redefinition of tostring static inline std string tostring const arraypropsnativecomponentviewsizesmask value users x source qrn lib yreact android qrnlib build generated source codegen jni react renderer components my appmodules props h note previous definition is here static inline std string tostring const modalhostviewsupportedorientationsmask value errors generated in cpp file type modalhostviewsupportedorientationsmask and type arraypropsnativecomponentviewsizesmask are both t i followed this version output of npx react native info i download react native source manually didn t use npm steps to reproduce i followed this snack code example screenshot or link to a repository apply plugin com android library apply plugin com facebook react react root rootdir reactnativedir rootdir libraryname my appmodules android compilesdkversion rootproject properties compilesdkversion as int compileoptions sourcecompatibility javaversion version targetcompatibility javaversion version defaultconfig minsdkversion rootproject properties minsdkversion as int targetsdkversion rootproject properties targetsdkversion as int versioncode versionname externalnativebuild ndkbuild arguments app platform android app stl c shared ndk toolchain version clang generated src dir builddir generated source project build dir builddir react android dir rootdir reactandroid react android build dir rootdir reactandroid build cflags wall werror fexceptions frtti dwith inspector cppflags std c targets my appmodules externalnativebuild ndkbuild path projectdir src main jni android mk def reactandroidprojectdir project reactandroid projectdir def packagereactndklibs tasks register packagereactndklibs copy dependson reactandroid packagereactndklibsforbuck dependson generatecodegenartifactsfromschema from reactandroidprojectdir src main jni prebuilt lib into builddir react ndk exported afterevaluate prebuild dependson packagereactndklibs configurendkbuilddebug dependson prebuild configurendkbuildrelease dependson prebuild packagingoptions pickfirst libhermes so pickfirst libjsc so buildscript repositories maven url allprojects repositories maven url dependencies api filetree include dir libs implementation project reactandroid
| 1
|
408,158
| 11,942,318,198
|
IssuesEvent
|
2020-04-02 20:04:26
|
kubernetes/release
|
https://api.github.com/repos/kubernetes/release
|
closed
|
Stopped publishing /bin folders of CI builds
|
area/release-eng kind/bug priority/critical-urgent sig/release
|
#### What happened:
"/bin" folder is not published in CI builds. This broke all the kubeadm master jobs.
https://testgrid.k8s.io/sig-release-master-informing#kubeadm-kinder-master
#### What you expected to happen:
The CI builds published should contain bin folder with the executables.
#### How to reproduce it (as minimally and precisely as possible):
Look for bin folder in
https://console.cloud.google.com/storage/browser/kubernetes-release-dev/ci/v1.19.0-alpha.1.247%2B2fd8debe9b913f
#### Anything else we need to know?:
#### Environment:
- Cloud provider or hardware configuration:
- OS (e.g: `cat /etc/os-release`):
- Kernel (e.g. `uname -a`):
- Others:
|
1.0
|
Stopped publishing /bin folders of CI builds - #### What happened:
"/bin" folder is not published in CI builds. This broke all the kubeadm master jobs.
https://testgrid.k8s.io/sig-release-master-informing#kubeadm-kinder-master
#### What you expected to happen:
The CI builds published should contain bin folder with the executables.
#### How to reproduce it (as minimally and precisely as possible):
Look for bin folder in
https://console.cloud.google.com/storage/browser/kubernetes-release-dev/ci/v1.19.0-alpha.1.247%2B2fd8debe9b913f
#### Anything else we need to know?:
#### Environment:
- Cloud provider or hardware configuration:
- OS (e.g: `cat /etc/os-release`):
- Kernel (e.g. `uname -a`):
- Others:
|
non_architecture
|
stopped publishing bin folders of ci builds what happened bin folder is not published in ci builds this broke all the kubeadm master jobs what you expected to happen the ci builds published should contain bin folder with the executables how to reproduce it as minimally and precisely as possible look for bin folder in anything else we need to know environment cloud provider or hardware configuration os e g cat etc os release kernel e g uname a others
| 0
|
2,001
| 7,137,471,995
|
IssuesEvent
|
2018-01-23 11:07:12
|
Tendrl/documentation
|
https://api.github.com/repos/Tendrl/documentation
|
closed
|
Object model for tendrl central store
|
architecture
|
It is not clear whether a generic data model would be followed by all the SDS systems to populate data in the central store, or whether SDS-specific objects would be maintained as-is. If SDS-specific objects are kept as-is, does that mean the tendrl App would need SDS-system-specific logic to process them and return them as the output of REST GET calls?
It would become clearer if the data models were published here, with a clear explanation of whether a generic model or an SDS-specific one is suggested and, if SDS-specific, how the App handles them.
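One way to picture the two options (purely illustrative; every name below is an assumption, not taken from Tendrl):
```python
# Purely illustrative sketch of the two modeling options discussed above.
# Option A: a single generic object shape shared by every SDS system.
generic_volume = {
    "object_type": "volume",
    "sds_name": "gluster",                        # which SDS produced the record
    "attributes": {"name": "vol1", "size_gb": 100},
}

# Option B: SDS-specific objects stored as-is; the tendrl App would then need
# per-SDS logic to translate each shape into a common REST GET response.
gluster_volume = {"vol_name": "vol1", "bricks": 4}
ceph_pool = {"pool_name": "rbd", "pg_num": 128}

def to_rest_output(obj):
    """Hypothetical per-SDS translation that Option B would force the App to carry."""
    if "vol_name" in obj:
        return {"object_type": "volume", "name": obj["vol_name"]}
    if "pool_name" in obj:
        return {"object_type": "pool", "name": obj["pool_name"]}
    raise ValueError("unknown SDS object shape")
```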
|
1.0
|
Object model for tendrl central store - It is not clear whether a generic data model would be followed by all the SDS systems to populate data in the central store, or whether SDS-specific objects would be maintained as-is. If SDS-specific objects are kept as-is, does that mean the tendrl App would need SDS-system-specific logic to process them and return them as the output of REST GET calls?
It would become clearer if the data models were published here, with a clear explanation of whether a generic model or an SDS-specific one is suggested and, if SDS-specific, how the App handles them.
|
architecture
|
object model for tendrl central store it is not clear if there would be a generic data model followed by all the sds systems to populate data in central store or sds specific objects as is would be maintained if as is sds specific objects are to be maintained does it mean that tendrl app would have sds system specific logic written to process and return as output of rest get calls it would get more clearer if data models are published here and clearly explained if a generic model is suggested or a sds specific and if sds specific how the app tackles them
| 1
|
819,195
| 30,723,461,450
|
IssuesEvent
|
2023-07-27 17:40:56
|
nck-2/test-rep
|
https://api.github.com/repos/nck-2/test-rep
|
closed
|
Implement JS scripting
|
enhancement priority::low
|
*Created by: glookka*
Implement JS scripting for the 1st release of Manticore
┆Issue is synchronized with this [Gitlab issue](https://gitlab.com/manticoresearch/dev/-/issues/17)
|
1.0
|
Implement JS scripting - *Created by: glookka*
Implement JS scripting for the 1st release of Manticore
┆Issue is synchronized with this [Gitlab issue](https://gitlab.com/manticoresearch/dev/-/issues/17)
|
non_architecture
|
implement js scripting created by glookka implement js scripting for the release of manticore ┆issue is synchronized with this
| 0
|
92,034
| 18,762,088,450
|
IssuesEvent
|
2021-11-05 17:42:35
|
microsoft/pxt-arcade
|
https://api.github.com/repos/microsoft/pxt-arcade
|
reopened
|
Tiktok logo on hoc2021 offcenter, doesn't show up in safari
|
browser compat hour of code
|
**Describe the bug**
on chrome the icon is slightly off center:

and on safari the icon is missing / there is just a circle
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://arcade.makecode.com/hour-of-code-2021
2. go down to bottom of page ('follow us on social media')
3. look at tiktok icon on far right
4. See error
**Expected behavior**
should be centered with other icons
on safari (mac and ipad), the icon is also missing (and is offcenter in the other direction :) )
|
1.0
|
Tiktok logo on hoc2021 offcenter, doesn't show up in safari - **Describe the bug**
on chrome the icon is slightly off center:

and on safari the icon is missing / there is just a circle
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://arcade.makecode.com/hour-of-code-2021
2. go down to bottom of page ('follow us on social media')
3. look at tiktok icon on far right
4. See error
**Expected behavior**
should be centered with other icons
on safari (mac and ipad), the icon is also missing (and is offcenter in the other direction :) )
|
non_architecture
|
tiktok logo on offcenter doesn t show up in safari describe the bug on chrome the icon is slightly off center and on safari the icon is missing there is just a circle to reproduce steps to reproduce the behavior go to go down to bottom of page follow us on social media look at tiktok icon on far right see error expected behavior should be centered with other icons on safari mac and ipad the icon is also missing and is offcenter in the other direction
| 0
|
77,009
| 26,717,950,451
|
IssuesEvent
|
2023-01-28 19:22:00
|
scipy/scipy
|
https://api.github.com/repos/scipy/scipy
|
opened
|
BUG: filtfilt doesn't work with complex coefficient filters
|
defect
|
### Describe your issue.
When given complex-coefficient filters, `filtfilt` behaves unexpectedly; specifically, for a bandpass filter from +45Hz to +55Hz, it zeros a signal at +50Hz. I think this is probably because, when the filter is run backward, the conjugate of the coefficients should be used.
See the code below, which reproduces this. I expect the `filtfilt` output to be similar to the `lfilter` output.

### Reproducing Code Example
```python
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt
# centre frequency for filter [Hz]
fcentre = 50
# filter passband width [Hz]
fwidth = 5
# sample rate [Hz]
fs = 1e3
z, p, k = signal.butter(2, 2*np.pi*fwidth/2, output='zpk', fs=fs)
z = z.astype(complex)
p = p.astype(complex)
# rotate filter zeros and poles to be centred about fcentre
z *= np.exp(2j * np.pi * fcentre/fs)
p *= np.exp(2j * np.pi * fcentre/fs)
b, a = signal.zpk2tf(z, p, k)
# ## complex frequency response ##
# f = np.linspace(-200,200,1001)
# gjw = signal.freqz(b, a, worN=f, fs=fs)[1]
# fig, ax = plt.subplots(2, sharex=True)
# ax[0].semilogy(f, abs(gjw))
# ax[1].plot(f, np.rad2deg(np.angle(gjw)))
## application ##
# generate signal with +50Hz and -50Hz
# time [s]
t = np.arange(100) / fs
# input
u = (np.exp(2j * np.pi * fcentre * t)
+ 0.5 * np.exp(-2j * np.pi * fcentre * t))
# lfilter output
y = signal.lfilter(b, a, u)
# filtfilt output
yy = signal.filtfilt(b, a, u)
fig, ax = plt.subplots(2, sharex=True)
ax[0].plot(t, u.real, label='input')
ax[0].plot(t, y.real, label='lfilter')
ax[0].plot(t, yy.real, label='filtfilt')
ax[0].legend(loc='upper right')
ax[0].set_ylabel('real part')
ax[1].plot(t, u.imag)
ax[1].plot(t, y.imag)
ax[1].plot(t, yy.imag)
ax[1].set_xlabel('time [s]')
ax[1].set_ylabel('imag part')
plt.show()
```
### Error message
```shell
No error; see description.
```
### SciPy/NumPy/Python version information
1.10.0 1.24.1 sys.version_info(major=3, minor=11, micro=0, releaselevel='final', serial=0)
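For what it's worth, a minimal sketch of the conjugate-on-the-backward-pass idea described above (an assumption, not a confirmed fix; it also skips the edge padding that `filtfilt` normally applies):
```python
import numpy as np
from scipy import signal

def filtfilt_complex(b, a, x):
    # Forward pass with the original coefficients.
    y = signal.lfilter(b, a, x)
    # Backward pass with conjugated coefficients: the reversed pass then has an
    # effective response of conj(H(w)), so the combined response is |H(w)|^2
    # (zero phase) even when the coefficients are complex.
    return signal.lfilter(np.conj(b), np.conj(a), y[::-1])[::-1]
```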
|
1.0
|
BUG: filtfilt doesn't work with complex coefficient filters - ### Describe your issue.
When given complex-coefficient filters, `filtfilt` behaves unexpectedly; specifically, for a bandpass filter from +45Hz to +55Hz, it zeros a signal at +50Hz. I think this is probably because, when the filter is run backward, the conjugate of the coefficients should be used.
See the code below, which reproduces this. I expect the `filtfilt` output to be similar to the `lfilter` output.

### Reproducing Code Example
```python
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt
# centre frequency for filter [Hz]
fcentre = 50
# filter passband width [Hz]
fwidth = 5
# sample rate [Hz]
fs = 1e3
z, p, k = signal.butter(2, 2*np.pi*fwidth/2, output='zpk', fs=fs)
z = z.astype(complex)
p = p.astype(complex)
# rotate filter zeros and poles to be centred about fcentre
z *= np.exp(2j * np.pi * fcentre/fs)
p *= np.exp(2j * np.pi * fcentre/fs)
b, a = signal.zpk2tf(z, p, k)
# ## complex frequency response ##
# f = np.linspace(-200,200,1001)
# gjw = signal.freqz(b, a, worN=f, fs=fs)[1]
# fig, ax = plt.subplots(2, sharex=True)
# ax[0].semilogy(f, abs(gjw))
# ax[1].plot(f, np.rad2deg(np.angle(gjw)))
## application ##
# generate signal with +50Hz and -50Hz
# time [s]
t = np.arange(100) / fs
# input
u = (np.exp(2j * np.pi * fcentre * t)
+ 0.5 * np.exp(-2j * np.pi * fcentre * t))
# lfilter output
y = signal.lfilter(b, a, u)
# filtfilt output
yy = signal.filtfilt(b, a, u)
fig, ax = plt.subplots(2, sharex=True)
ax[0].plot(t, u.real, label='input')
ax[0].plot(t, y.real, label='lfilter')
ax[0].plot(t, yy.real, label='filtfilt')
ax[0].legend(loc='upper right')
ax[0].set_ylabel('real part')
ax[1].plot(t, u.imag)
ax[1].plot(t, y.imag)
ax[1].plot(t, yy.imag)
ax[1].set_xlabel('time [s]')
ax[1].set_ylabel('imag part')
plt.show()
```
### Error message
```shell
No error; see description.
```
### SciPy/NumPy/Python version information
1.10.0 1.24.1 sys.version_info(major=3, minor=11, micro=0, releaselevel='final', serial=0)
|
non_architecture
|
bug filtfilt doesn t work with complex coefficient filters describe your issue when given complex coefficient filters filtfilt behaves unexpectedly specifically for a bandpass filter from to it zeros a signal at i think this is probably because when run backward the conjugate of the coefficients should be used see code below which generates this i expect the filtfilt output to be similar to the lfilter output reproducing code example python import numpy as np from scipy import signal import matplotlib pyplot as plt centre frequency for filter fcentre filter passband width fwidth sample rate fs z p k signal butter np pi fwidth output zpk fs fs z z astype complex p p astype complex rotate filter zeros and poles to be centred about fcentre z np exp np pi fcentre fs p np exp np pi fcentre fs b a signal z p k complex frequency response f np linspace gjw signal freqz b a worn f fs fs fig ax plt subplots sharex true ax semilogy f abs gjw ax plot f np np angle gjw application generate signal with and time t np arange fs input u np exp np pi fcentre t np exp np pi fcentre t lfilter output y signal lfilter b a u filtfilt output yy signal filtfilt b a u fig ax plt subplots sharex true ax plot t u real label input ax plot t y real label lfilter ax plot t yy real label filtfilt ax legend loc upper right ax set ylabel real part ax plot t u imag ax plot t y imag ax plot t yy imag ax set xlabel time ax set ylabel imag part plt show error message shell no error see description scipy numpy python version information sys version info major minor micro releaselevel final serial
| 0
|
660,909
| 22,035,831,513
|
IssuesEvent
|
2022-05-28 15:07:12
|
ApplETS/Notre-Dame
|
https://api.github.com/repos/ApplETS/Notre-Dame
|
closed
|
Downgrade minimum supported iOS version to 12
|
bug platform: ios ready to develop priority: high
|
**Describe the bug**
Some iPhone users have reported that the application crashes on boot on iOS 13.
**Smartphone (please complete the following information):**
- Device: iPhone
- OS: iOS13
- Version 4.6.2
**Additional context**
Look at the Firebase Crashlytics logs around 19h 48min 45s on 26 November 2021, when a user with the problem opened the application.
|
1.0
|
Downgrade minimum supported iOS version to 12 - **Describe the bug**
Some iPhone users have reported that the application crashes on boot on iOS 13.
**Smartphone (please complete the following information):**
- Device: iPhone
- OS: iOS13
- Version 4.6.2
**Additional context**
Look at the Firebase Crashlytics logs around 19h 48min 45s on 26 November 2021, when a user with the problem opened the application.
|
non_architecture
|
downgrade minimum supported ios version to describe the bug some iphone users have reported that the application crash on boot on ios smartphone please complete the following information device iphone os version additional context look at the firebase crashlytics around the november a user with the problem opened the application
| 0
|
4,757
| 11,671,000,842
|
IssuesEvent
|
2020-03-04 01:43:45
|
escobard/create-app
|
https://api.github.com/repos/escobard/create-app
|
opened
|
App - Docker Compose - Integration tests + e2e tests
|
API Architecture DB DevOps Docker Integration Tests Sequelize
|
foundation for the #31 EPIC.
1. create a `docker-compose` job that spins up the `PGDB` and the `API`.
1. run all integration tests vs the DB.
1. if passed, install UI + run e2e jobs (part of a future story).
|
1.0
|
App - Docker Compose - Integration tests + e2e tests - foundation for the #31 EPIC.
1. create a `docker-compose` job that spins up the `PGDB` and the `API`.
1. run all integration tests vs the DB.
1. if passed, install UI + run e2e jobs (part of a future story).
|
architecture
|
app docker compose integration tests tests foundation for the epic create a docker compose job that spins up the pgdb and the api run all integration tests vs the db if passed install ui run jobs part of a future story
| 1
|
10,389
| 26,895,492,364
|
IssuesEvent
|
2023-02-06 12:06:59
|
OasisLMF/OasisPlatform
|
https://api.github.com/repos/OasisLMF/OasisPlatform
|
closed
|
Fix Piwind testing on platform 2
|
bug scalable architecture build system
|
<!--- IMPORTANT: Please apply the relevant labels, for example if this issue is needed as a backported fix add the label `LTS fix` (Long term support fix) -->
## Issue Description
Piwind integration testing is failing on platform2 with different results https://github.com/OasisLMF/OasisPlatform/actions/runs/4075699753/jobs/7022589594. Investigate and fix
|
1.0
|
Fix Piwind testing on platform 2 - <!--- IMPORTANT: Please apply the relevant labels, for example if this issue is needed as a backported fix add the label `LTS fix` (Long term support fix) -->
## Issue Description
Piwind integration testing is failing on platform2 with different results https://github.com/OasisLMF/OasisPlatform/actions/runs/4075699753/jobs/7022589594. Investigate and fix
|
architecture
|
fix piwind testing on platform issue description piwind integration testing is failing on with different results investigate and fix
| 1
|
566,728
| 16,828,067,434
|
IssuesEvent
|
2021-06-17 21:39:29
|
canonical-web-and-design/ubuntu.com
|
https://api.github.com/repos/canonical-web-and-design/ubuntu.com
|
opened
|
Further improve by ignoring known URLs in the docs link checks
|
Priority: Low
|
## Summary
Can you filter out these links? They are in the footer, beyond my control:
https://www.openstack.org/projects/openstack-faq/
https://www.openstack.org/projects/
https://groups.openstack.org/
(to be clear, in charm-deployment-guide)
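A minimal sketch of the requested allow-list (the checker's internals are assumed; only the three URLs come from this report):
```python
# Hypothetical allow-list for the docs link checker.
IGNORED_URLS = {
    "https://www.openstack.org/projects/openstack-faq/",
    "https://www.openstack.org/projects/",
    "https://groups.openstack.org/",
}

def should_check(url: str) -> bool:
    """Skip links the project has no control over (e.g. upstream footers)."""
    return url not in IGNORED_URLS
```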
|
1.0
|
Further improve by ignoring known URLs in the docs link checks - ## Summary
Can you filter out these links? They are in the footer, beyond my control:
https://www.openstack.org/projects/openstack-faq/
https://www.openstack.org/projects/
https://groups.openstack.org/
(to be clear, in charm-deployment-guide)
|
non_architecture
|
further improve by ignoring known urls in the docs link checks summary can you filter out these links they are in the footer beyond my control to be clear in charm deployment guide
| 0
|
411,187
| 12,015,454,818
|
IssuesEvent
|
2020-04-10 14:01:38
|
AY1920S2-CS2103T-W17-2/main
|
https://api.github.com/repos/AY1920S2-CS2103T-W17-2/main
|
closed
|
Update suggestions generation to take in a list of corrections
|
priority.High status.Ongoing type.Enhancement
|
The `CorrectionEngine` now returns a list of `correctedItems` instead of just 1 `correctedItem`.
|
1.0
|
Update suggestions generation to take in a list of corrections - The `CorrectionEngine` now returns a list of `correctedItems` instead of just 1 `correctedItem`.
|
non_architecture
|
update suggestions generation to take in a list of corrections the correctionengine now returns a list of correcteditems instead of just correcteditem
| 0
|
582,364
| 17,359,823,814
|
IssuesEvent
|
2021-07-29 18:55:04
|
yugabyte/yugabyte-db
|
https://api.github.com/repos/yugabyte/yugabyte-db
|
closed
|
[Platform] Provider creation in platform using Yugabundle is failing
|
2.7.2 Backport Completed 2.7.2 Backport Required area/platform priority/high
|
Yugabundle installation went fine with 2.7.2-b194 but couldn't create any provider. Can you please check this http://10.9.141.69/tasks/e49d047f-2783-4506-bb43-46c0be10a72f (demo/Password#123). Use yb-dev-aws-2.pem to ssh. cc @Wesley Wang
Error:
2021-07-16 01:50:07,767 [DEBUG] from ShellProcessHandler in TaskPool-CloudBootstrap(14cb4c8c-81f2-41a8-8587-fb70d63eca58)-0 - ModuleNotFoundError: No module named '_cffi_backend'
2021-07-16 01:50:07,767 [INFO] from ShellProcessHandler in TaskPool-CloudBootstrap(14cb4c8c-81f2-41a8-8587-fb70d63eca58)-0 - Completed proc 'bin/ybcloud.sh aws network bootstrap {"errorString":null,"providerUUID":"14cb4c8c-81f2-41a8-8587-fb70d63eca58","perRegionMetadata":{"us-west-2":{"vpcId":null,"vpcCidr":null,"azToSubnetIds":null,"subnetId":null,"customImageId":null,"customSecurityGroupId":null}},"keyPairName":null,"sshPrivateKeyContent":null,"sshUser":null,"airGapInstall":false,"sshPort":54422,"hostVpcId":null,"hostVpcRegion":null,"customHostCidrs":[],"destVpcId":null}' status=failure code=1 [ 280 ms ]
2021-07-16 01:50:07,767 [ERROR] from DevopsBase in TaskPool-CloudBootstrap(14cb4c8c-81f2-41a8-8587-fb70d63eca58)-0 - Traceback (most recent call last):
File "/opt/yugabyte/packages/yugabyte-2.7.2.0-b194/devops/python3_installed_modules/bin/ybcloud.py", line 11, in <module>
from ybops.cloud.ybcloud import YbCloud
File "/opt/yugabyte/packages/yugabyte-2.7.2.0-b194/devops/python3_installed_modules/ybops/cloud/ybcloud.py", line 15, in <module>
from ybops.cloud.aws.cloud import AwsCloud
File "/opt/yugabyte/packages/yugabyte-2.7.2.0-b194/devops/python3_installed_modules/ybops/cloud/aws/cloud.py", line 23, in <module>
from ybops.cloud.aws.command import AwsInstanceCommand, AwsNetworkCommand, \
File "/opt/yugabyte/packages/yugabyte-2.7.2.0-b194/devops/python3_installed_modules/ybops/cloud/aws/command.py", line 11, in <module>
from ybops.cloud.aws.method import AwsProvisionInstancesMethod, AwsCreateInstancesMethod, \
File "/opt/yugabyte/packages/yugabyte-2.7.2.0-b194/devops/python3_installed_modules/ybops/cloud/aws/method.py", line 11, in <module>
from ybops.cloud.common.method import ListInstancesMethod, CreateInstancesMethod, \
File "/opt/yugabyte/packages/yugabyte-2.7.2.0-b194/devops/python3_installed_modules/ybops/cloud/common/method.py", line 20, in <module>
from ybops.utils import get_ssh_host_port, wait_for_ssh, get_path_from_yb, \
File "/opt/yugabyte/packages/yugabyte-2.7.2.0-b194/devops/python3_installed_modules/ybops/utils/__init__.py", line 18, in <module>
import paramiko
File "/opt/yugabyte/packages/yugabyte-2.7.2.0-b194/devops/python3_installed_modules/paramiko/__init__.py", line 22, in <module>
from paramiko.transport import SecurityOptions, Transport
File "/opt/yugabyte/packages/yugabyte-2.7.2.0-b194/devops/python3_installed_modules/paramiko/transport.py", line 90, in <module>
from paramiko.ed25519key import Ed25519Key
File "/opt/yugabyte/packages/yugabyte-2.7.2.0-b194/devops/python3_installed_modules/paramiko/ed25519key.py", line 17, in <module>
import bcrypt
File "/opt/yugabyte/packages/yugabyte-2.7.2.0-b194/devops/python3_installed_modules/bcrypt/__init__.py", line 25, in <module>
from . import _bcrypt # type: ignore
ModuleNotFoundError: No module named '_cffi_backend'
2021-07-16 01:50:07,768 [ERROR] from SubTaskGroup in TaskPool-0 - Failed to execute task type subtasks.cloud.CloudSetup UUID 6b4d8fe9-c98a-4f23-a5f3-637392463afd details {"errorString":null,"providerUUID":"14cb4c8c-81f2-41a8-8587-fb70d63eca58","customPayload":"{\"errorString\":null,\"providerUUID\":\"14cb4c8c-81f2-41a8-8587-fb70d63eca58\",\"perRegionMetadata\":{\"us-west-2\":{\"vpcId\":null,\"vpcCidr\":null,\"azToSubnetIds\":null,\"subnetId\":null,\"customImageId\":null,\"customSecurityGroupId\":null}},\"keyPairName\":null,\"sshPrivateKeyContent\":null,\"sshUser\":null,\"airGapInstall\":false,\"sshPort\":54422,\"hostVpcId\":null,\"hostVpcRegion\":null,\"customHostCidrs\":[],\"destVpcId\":null}"}, hit error.
java.util.concurrent.ExecutionException: java.lang.RuntimeException: YBCloud command network (bootstrap) failed to execute.
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at com.yugabyte.yw.commissioner.SubTaskGroup.waitFor(SubTaskGroup.java:181)
at com.yugabyte.yw.commissioner.SubTaskGroupQueue.run(SubTaskGroupQueue.java:39)
at com.yugabyte.yw.commissioner.tasks.CloudBootstrap.run(CloudBootstrap.java:172)
at com.yugabyte.yw.commissioner.TaskRunner.run(TaskRunner.java:145)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: YBCloud command network (bootstrap) failed to execute.
at com.yugabyte.yw.commissioner.tasks.subtasks.cloud.CloudSetup.run(CloudSetup.java:45)
... 5 common frames omitted
2021-07-16 01:50:07,770 [ERROR] from SubTaskGroupQueue in TaskPool-0 - SubTaskGroup 'Create Cloud setup task : completed 0 out of 1 tasks.' waitFor() returned failed status.
2021-07-16 01:50:07,773 [ERROR] from TaskRunner in TaskPool-0 - Error running task
java.lang.RuntimeException: Create Cloud setup task : completed 0 out of 1 tasks. failed.
at com.yugabyte.yw.commissioner.SubTaskGroupQueue.run(SubTaskGroupQueue.java:52)
at com.yugabyte.yw.commissioner.tasks.CloudBootstrap.run(CloudBootstrap.java:172)
at com.yugabyte.yw.commissioner.TaskRunner.run(TaskRunner.java:145)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2021-07-16 01:50:07,776 [INFO] from TaskRunner in TaskPool-0 - Updating task [taskType : CloudBootstrap, taskState: Running] to new state Failure
2021-07-16 01:50:07,996 [INFO] from Commissioner in TaskProgressMonitor - Task task-info {taskType : CloudBootstrap, taskState: Failure}, task {CloudBootstrap(14cb4c8c-81f2-41a8-8587-fb70d63eca58)} has failed.
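A quick, hypothetical diagnostic for this class of failure: check that the compiled cffi backend is importable by the same interpreter that runs the devops scripts.
```python
# Hypothetical diagnostic sketch; prints MISSING for any module the interpreter
# cannot locate (the traceback above suggests _cffi_backend would be missing).
import importlib.util

for mod in ("_cffi_backend", "bcrypt", "paramiko"):
    spec = importlib.util.find_spec(mod)
    print(mod, "OK" if spec is not None else "MISSING")
```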
|
1.0
|
[Platform] Provider creation in platform using Yugabundle is failing - Yugabundle installation went fine with 2.7.2-b194 but couldn't create any provider. Can you please check this http://10.9.141.69/tasks/e49d047f-2783-4506-bb43-46c0be10a72f (demo/Password#123). Use yb-dev-aws-2.pem to ssh. cc @Wesley Wang
Error:
2021-07-16 01:50:07,767 [DEBUG] from ShellProcessHandler in TaskPool-CloudBootstrap(14cb4c8c-81f2-41a8-8587-fb70d63eca58)-0 - ModuleNotFoundError: No module named '_cffi_backend'
2021-07-16 01:50:07,767 [INFO] from ShellProcessHandler in TaskPool-CloudBootstrap(14cb4c8c-81f2-41a8-8587-fb70d63eca58)-0 - Completed proc 'bin/ybcloud.sh aws network bootstrap {"errorString":null,"providerUUID":"14cb4c8c-81f2-41a8-8587-fb70d63eca58","perRegionMetadata":{"us-west-2":{"vpcId":null,"vpcCidr":null,"azToSubnetIds":null,"subnetId":null,"customImageId":null,"customSecurityGroupId":null}},"keyPairName":null,"sshPrivateKeyContent":null,"sshUser":null,"airGapInstall":false,"sshPort":54422,"hostVpcId":null,"hostVpcRegion":null,"customHostCidrs":[],"destVpcId":null}' status=failure code=1 [ 280 ms ]
2021-07-16 01:50:07,767 [ERROR] from DevopsBase in TaskPool-CloudBootstrap(14cb4c8c-81f2-41a8-8587-fb70d63eca58)-0 - Traceback (most recent call last):
File "/opt/yugabyte/packages/yugabyte-2.7.2.0-b194/devops/python3_installed_modules/bin/ybcloud.py", line 11, in <module>
from ybops.cloud.ybcloud import YbCloud
File "/opt/yugabyte/packages/yugabyte-2.7.2.0-b194/devops/python3_installed_modules/ybops/cloud/ybcloud.py", line 15, in <module>
from ybops.cloud.aws.cloud import AwsCloud
File "/opt/yugabyte/packages/yugabyte-2.7.2.0-b194/devops/python3_installed_modules/ybops/cloud/aws/cloud.py", line 23, in <module>
from ybops.cloud.aws.command import AwsInstanceCommand, AwsNetworkCommand, \
File "/opt/yugabyte/packages/yugabyte-2.7.2.0-b194/devops/python3_installed_modules/ybops/cloud/aws/command.py", line 11, in <module>
from ybops.cloud.aws.method import AwsProvisionInstancesMethod, AwsCreateInstancesMethod, \
File "/opt/yugabyte/packages/yugabyte-2.7.2.0-b194/devops/python3_installed_modules/ybops/cloud/aws/method.py", line 11, in <module>
from ybops.cloud.common.method import ListInstancesMethod, CreateInstancesMethod, \
File "/opt/yugabyte/packages/yugabyte-2.7.2.0-b194/devops/python3_installed_modules/ybops/cloud/common/method.py", line 20, in <module>
from ybops.utils import get_ssh_host_port, wait_for_ssh, get_path_from_yb, \
File "/opt/yugabyte/packages/yugabyte-2.7.2.0-b194/devops/python3_installed_modules/ybops/utils/__init__.py", line 18, in <module>
import paramiko
File "/opt/yugabyte/packages/yugabyte-2.7.2.0-b194/devops/python3_installed_modules/paramiko/__init__.py", line 22, in <module>
from paramiko.transport import SecurityOptions, Transport
File "/opt/yugabyte/packages/yugabyte-2.7.2.0-b194/devops/python3_installed_modules/paramiko/transport.py", line 90, in <module>
from paramiko.ed25519key import Ed25519Key
File "/opt/yugabyte/packages/yugabyte-2.7.2.0-b194/devops/python3_installed_modules/paramiko/ed25519key.py", line 17, in <module>
import bcrypt
File "/opt/yugabyte/packages/yugabyte-2.7.2.0-b194/devops/python3_installed_modules/bcrypt/__init__.py", line 25, in <module>
from . import _bcrypt # type: ignore
ModuleNotFoundError: No module named '_cffi_backend'
2021-07-16 01:50:07,768 [ERROR] from SubTaskGroup in TaskPool-0 - Failed to execute task type subtasks.cloud.CloudSetup UUID 6b4d8fe9-c98a-4f23-a5f3-637392463afd details {"errorString":null,"providerUUID":"14cb4c8c-81f2-41a8-8587-fb70d63eca58","customPayload":"{\"errorString\":null,\"providerUUID\":\"14cb4c8c-81f2-41a8-8587-fb70d63eca58\",\"perRegionMetadata\":{\"us-west-2\":{\"vpcId\":null,\"vpcCidr\":null,\"azToSubnetIds\":null,\"subnetId\":null,\"customImageId\":null,\"customSecurityGroupId\":null}},\"keyPairName\":null,\"sshPrivateKeyContent\":null,\"sshUser\":null,\"airGapInstall\":false,\"sshPort\":54422,\"hostVpcId\":null,\"hostVpcRegion\":null,\"customHostCidrs\":[],\"destVpcId\":null}"}, hit error.
java.util.concurrent.ExecutionException: java.lang.RuntimeException: YBCloud command network (bootstrap) failed to execute.
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:192)
at com.yugabyte.yw.commissioner.SubTaskGroup.waitFor(SubTaskGroup.java:181)
at com.yugabyte.yw.commissioner.SubTaskGroupQueue.run(SubTaskGroupQueue.java:39)
at com.yugabyte.yw.commissioner.tasks.CloudBootstrap.run(CloudBootstrap.java:172)
at com.yugabyte.yw.commissioner.TaskRunner.run(TaskRunner.java:145)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: YBCloud command network (bootstrap) failed to execute.
at com.yugabyte.yw.commissioner.tasks.subtasks.cloud.CloudSetup.run(CloudSetup.java:45)
... 5 common frames omitted
2021-07-16 01:50:07,770 [ERROR] from SubTaskGroupQueue in TaskPool-0 - SubTaskGroup 'Create Cloud setup task : completed 0 out of 1 tasks.' waitFor() returned failed status.
2021-07-16 01:50:07,773 [ERROR] from TaskRunner in TaskPool-0 - Error running task
java.lang.RuntimeException: Create Cloud setup task : completed 0 out of 1 tasks. failed.
at com.yugabyte.yw.commissioner.SubTaskGroupQueue.run(SubTaskGroupQueue.java:52)
at com.yugabyte.yw.commissioner.tasks.CloudBootstrap.run(CloudBootstrap.java:172)
at com.yugabyte.yw.commissioner.TaskRunner.run(TaskRunner.java:145)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2021-07-16 01:50:07,776 [INFO] from TaskRunner in TaskPool-0 - Updating task [taskType : CloudBootstrap, taskState: Running] to new state Failure
2021-07-16 01:50:07,996 [INFO] from Commissioner in TaskProgressMonitor - Task task-info {taskType : CloudBootstrap, taskState: Failure}, task {CloudBootstrap(14cb4c8c-81f2-41a8-8587-fb70d63eca58)} has failed.
|
non_architecture
|
provider creation in platform using yugabundle is failing yugabundle installation went fine with but couldn t create any provider can you please check this demo password use yb dev aws pem to ssh cc wesley wang error from shellprocesshandler in taskpool cloudbootstrap modulenotfounderror no module named cffi backend from shellprocesshandler in taskpool cloudbootstrap completed proc bin ybcloud sh aws network bootstrap errorstring null provideruuid perregionmetadata us west vpcid null vpccidr null aztosubnetids null subnetid null customimageid null customsecuritygroupid null keypairname null sshprivatekeycontent null sshuser null airgapinstall false sshport hostvpcid null hostvpcregion null customhostcidrs destvpcid null status failure code from devopsbase in taskpool cloudbootstrap traceback most recent call last file opt yugabyte packages yugabyte devops installed modules bin ybcloud py line in from ybops cloud ybcloud import ybcloud file opt yugabyte packages yugabyte devops installed modules ybops cloud ybcloud py line in from ybops cloud aws cloud import awscloud file opt yugabyte packages yugabyte devops installed modules ybops cloud aws cloud py line in from ybops cloud aws command import awsinstancecommand awsnetworkcommand file opt yugabyte packages yugabyte devops installed modules ybops cloud aws command py line in from ybops cloud aws method import awsprovisioninstancesmethod awscreateinstancesmethod file opt yugabyte packages yugabyte devops installed modules ybops cloud aws method py line in from ybops cloud common method import listinstancesmethod createinstancesmethod file opt yugabyte packages yugabyte devops installed modules ybops cloud common method py line in from ybops utils import get ssh host port wait for ssh get path from yb file opt yugabyte packages yugabyte devops installed modules ybops utils init py line in import paramiko file opt yugabyte packages yugabyte devops installed modules paramiko init py line in from paramiko transport import securityoptions transport file opt yugabyte packages yugabyte devops installed modules paramiko transport py line in from paramiko import file opt yugabyte packages yugabyte devops installed modules paramiko py line in import bcrypt file opt yugabyte packages yugabyte devops installed modules bcrypt init py line in from import bcrypt type ignore modulenotfounderror no module named cffi backend from subtaskgroup in taskpool failed to execute task type subtasks cloud cloudsetup uuid details errorstring null provideruuid custompayload errorstring null provideruuid perregionmetadata us west vpcid null vpccidr null aztosubnetids null subnetid null customimageid null customsecuritygroupid null keypairname null sshprivatekeycontent null sshuser null airgapinstall false sshport hostvpcid null hostvpcregion null customhostcidrs destvpcid null hit error java util concurrent executionexception java lang runtimeexception ybcloud command network bootstrap failed to execute at java util concurrent futuretask report futuretask java at java util concurrent futuretask get futuretask java at com yugabyte yw commissioner subtaskgroup waitfor subtaskgroup java at com yugabyte yw commissioner subtaskgroupqueue run subtaskgroupqueue java at com yugabyte yw commissioner tasks cloudbootstrap run cloudbootstrap java at com yugabyte yw commissioner taskrunner run taskrunner java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor 
runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java caused by java lang runtimeexception ybcloud command network bootstrap failed to execute at com yugabyte yw commissioner tasks subtasks cloud cloudsetup run cloudsetup java common frames omitted from subtaskgroupqueue in taskpool subtaskgroup create cloud setup task completed out of tasks waitfor returned failed status from taskrunner in taskpool error running task java lang runtimeexception create cloud setup task completed out of tasks failed at com yugabyte yw commissioner subtaskgroupqueue run subtaskgroupqueue java at com yugabyte yw commissioner tasks cloudbootstrap run cloudbootstrap java at com yugabyte yw commissioner taskrunner run taskrunner java at java util concurrent executors runnableadapter call executors java at java util concurrent futuretask run futuretask java at java util concurrent threadpoolexecutor runworker threadpoolexecutor java at java util concurrent threadpoolexecutor worker run threadpoolexecutor java at java lang thread run thread java from taskrunner in taskpool updating task to new state failure from commissioner in taskprogressmonitor task task info tasktype cloudbootstrap taskstate failure task cloudbootstrap has failed
| 0
|
734,052
| 25,336,711,560
|
IssuesEvent
|
2022-11-18 17:28:07
|
episphere/connect
|
https://api.github.com/repos/episphere/connect
|
closed
|
Participant Search should not require all fields
|
Biospecimen Dashboard Priority 1
|
On the participant search page, any field should be able to return a result.
ex1: if only First Name is entered, the search should return all PTs with that first name.
ex2: If only Last Name is entered, the search should return all PTs with that last name
ex3: If First and Last name are entered, search returns all PTs with that combo of first and last name
ex4: If only DOB is entered, all PTs with that DOB should be returned
etc. etc.
Here's the query "requiring" all three fields, and a null DOB being searched as "undefined"

|
1.0
|
Participant Search should not require all fields - On the participant search page, any field should be able to return a result.
ex1: if only First Name is entered, the search should return all PTs with that first name.
ex2: If only Last Name is entered, the search should return all PTs with that last name
ex3: If First and Last name are entered, search returns all PTs with that combo of first and last name
ex4: If only DOB is entered, all PTs with that DOB should be returned
etc. etc.
Here's the query "requiring" all three fields, and a null DOB being searched as "undefined"

|
non_architecture
|
participant search should not require all fields on the participant search page any field should be able to return a result if only first name is entered the search should return all pts with that first name if only last name is entered the search should return all pts with that last name if first and last name are entered search returns all pts with that combo of first and last name if only dob is entered all pts with that dob should be returned etc etc here s the query requiring all three fields and a null dob being searched as undefined
| 0
|
11,508
| 30,795,349,129
|
IssuesEvent
|
2023-07-31 19:24:49
|
opendatahub-io/opendatahub-operator
|
https://api.github.com/repos/opendatahub-io/opendatahub-operator
|
closed
|
Update distributed workloads to sub components
|
rearchitecture priority/high
|
Add `codeflare` and `kuberay` as separate components instead of one component of `distributedworkloads`.
|
1.0
|
Update distributed workloads to sub components - Add `codeflare` and `kuberay` as separate components instead of one component of `distributedworkloads`.
|
architecture
|
update distributed workloads to sub components add codeflare and kuberay as separate components instead of one component of distributedworkloads
| 1