Unnamed: 0
int64 0
832k
| id
float64 2.49B
32.1B
| type
stringclasses 1
value | created_at
stringlengths 19
19
| repo
stringlengths 7
112
| repo_url
stringlengths 36
141
| action
stringclasses 3
values | title
stringlengths 2
665
| labels
stringlengths 4
554
| body
stringlengths 3
235k
| index
stringclasses 6
values | text_combine
stringlengths 96
235k
| label
stringclasses 2
values | text
stringlengths 96
196k
| binary_label
int64 0
1
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
96,422
| 20,017,081,280
|
IssuesEvent
|
2022-02-01 13:10:23
|
Regalis11/Barotrauma
|
https://api.github.com/repos/Regalis11/Barotrauma
|
closed
|
Error while attempting to host campaign server
|
Bug Duplicate Code Crash
|
- [✓] I have searched the issue tracker to check if the issue has already been reported.
**Description**
Unable to host a campaign online server. I have tried verifying files as well as reinstalling.
**Steps To Reproduce**
- Create a server and click on the campaign mission type
- hourglass loading cursor comes up
- error message appears
- get sent to the server browser page
Happens every time
**Version**
v0.15.23.0
Windows 10 (can provide further specifications if needed)
**Additional information**
Every time this happens there are three files that fail to be validated.
**Log:**
Error while reading a message from server. {Object reference not set to an instance of an object.}
at Barotrauma.MultiPlayerCampaignSetupUI.UpdateLoadMenu(IEnumerable`1 saveFiles) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\Screens\CampaignSetupUI\MultiPlayerCampaignSetupUI.cs:line 267
at Barotrauma.MultiPlayerCampaignSetupUI..ctor(GUIComponent newGameContainer, GUIComponent loadGameContainer, IEnumerable`1 saveFiles) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\Screens\CampaignSetupUI\MultiPlayerCampaignSetupUI.cs:line 194
at Barotrauma.MultiPlayerCampaign.StartCampaignSetup(IEnumerable`1 saveFiles) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\GameSession\GameModes\MultiPlayerCampaign.cs:line 63
at Barotrauma.Networking.GameClient.ReadDataMessage(IReadMessage inc) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\Networking\GameClient.cs:line 918
at Barotrauma.Networking.SteamP2POwnerPeer.HandleDataMessage(IReadMessage inc) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\Networking\Primitives\Peers\SteamP2POwnerPeer.cs:line 0
at Barotrauma.Networking.SteamP2POwnerPeer.Update(Single deltaTime) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\Networking\Primitives\Peers\SteamP2POwnerPeer.cs:line 227
at Barotrauma.Networking.GameClient.Update(Single deltaTime) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\Networking\GameClient.cs:line 641
|
1.0
|
Error while attempting to host campaign server -
- [✓] I have searched the issue tracker to check if the issue has already been reported.
**Description**
Unable to host a campaign online server. I have tried verifying files as well as reinstalling.
**Steps To Reproduce**
- Create a server and click on the campaign mission type
- hourglass loading cursor comes up
- error message appears
- get sent to the server browser page
Happens every time
**Version**
v0.15.23.0
Windows 10 (can provide further specifications if needed)
**Additional information**
Every time this happens there are three files that fail to be validated.
**Log:**
Error while reading a message from server. {Object reference not set to an instance of an object.}
at Barotrauma.MultiPlayerCampaignSetupUI.UpdateLoadMenu(IEnumerable`1 saveFiles) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\Screens\CampaignSetupUI\MultiPlayerCampaignSetupUI.cs:line 267
at Barotrauma.MultiPlayerCampaignSetupUI..ctor(GUIComponent newGameContainer, GUIComponent loadGameContainer, IEnumerable`1 saveFiles) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\Screens\CampaignSetupUI\MultiPlayerCampaignSetupUI.cs:line 194
at Barotrauma.MultiPlayerCampaign.StartCampaignSetup(IEnumerable`1 saveFiles) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\GameSession\GameModes\MultiPlayerCampaign.cs:line 63
at Barotrauma.Networking.GameClient.ReadDataMessage(IReadMessage inc) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\Networking\GameClient.cs:line 918
at Barotrauma.Networking.SteamP2POwnerPeer.HandleDataMessage(IReadMessage inc) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\Networking\Primitives\Peers\SteamP2POwnerPeer.cs:line 0
at Barotrauma.Networking.SteamP2POwnerPeer.Update(Single deltaTime) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\Networking\Primitives\Peers\SteamP2POwnerPeer.cs:line 227
at Barotrauma.Networking.GameClient.Update(Single deltaTime) in <DEV>\Barotrauma\BarotraumaClient\ClientSource\Networking\GameClient.cs:line 641
|
non_infrastructure
|
error while attempting to host campaign server i have searched the issue tracker to check if the issue has already been reported description unable to host a campaign online server i have tried verifying files as well as reinstalling steps to reproduce create a server and click on the campaign mission type hourglass loading cursor comes up error message appears get sent to the server browser page happens every time version windows can provide further specifications if needed additional information every time this happens there are three files that fail to be validated log error while reading a message from server object reference not set to an instance of an object at barotrauma multiplayercampaignsetupui updateloadmenu ienumerable savefiles in barotrauma barotraumaclient clientsource screens campaignsetupui multiplayercampaignsetupui cs line at barotrauma multiplayercampaignsetupui ctor guicomponent newgamecontainer guicomponent loadgamecontainer ienumerable savefiles in barotrauma barotraumaclient clientsource screens campaignsetupui multiplayercampaignsetupui cs line at barotrauma multiplayercampaign startcampaignsetup ienumerable savefiles in barotrauma barotraumaclient clientsource gamesession gamemodes multiplayercampaign cs line at barotrauma networking gameclient readdatamessage ireadmessage inc in barotrauma barotraumaclient clientsource networking gameclient cs line at barotrauma networking handledatamessage ireadmessage inc in barotrauma barotraumaclient clientsource networking primitives peers cs line at barotrauma networking update single deltatime in barotrauma barotraumaclient clientsource networking primitives peers cs line at barotrauma networking gameclient update single deltatime in barotrauma barotraumaclient clientsource networking gameclient cs line
| 0
|
3,270
| 4,175,346,865
|
IssuesEvent
|
2016-06-21 16:34:40
|
jlongster/debugger.html
|
https://api.github.com/repos/jlongster/debugger.html
|
closed
|
Add component unit tests
|
infrastructure
|
It would be nice to be able to write unit tests against our components.
Tests would render components with fixture data, like storybook, and have assertions on the shape of the component and handler functions.
We did some of this investigation work on tuesday:
things to consider:
+ jsdom environment
+ shallow render
|
1.0
|
Add component unit tests - It would be nice to be able to write unit tests against our components.
Tests would render components with fixture data, like storybook, and have assertions on the shape of the component and handler functions.
We did some of this investigation work on tuesday:
things to consider:
+ jsdom environment
+ shallow render
|
infrastructure
|
add component unit tests it would be nice to be able to write unit tests against our components tests would render components with fixture data like storybook and have assertions on the shape of the component and handler functions we did some of this investigation work on tuesday things to consider jsdom environment shallow render
| 1
|
24,753
| 17,691,448,768
|
IssuesEvent
|
2021-08-24 10:26:35
|
wellcomecollection/platform
|
https://api.github.com/repos/wellcomecollection/platform
|
closed
|
Clean up the miro-migration VHS
|
🚧 Infrastructure
|
The data in that table is very out-of-date – it refers to objects in Miro buckets that no longer exist. We should archive the contents, then deleted the associated infrastructure.
|
1.0
|
Clean up the miro-migration VHS - The data in that table is very out-of-date – it refers to objects in Miro buckets that no longer exist. We should archive the contents, then deleted the associated infrastructure.
|
infrastructure
|
clean up the miro migration vhs the data in that table is very out of date – it refers to objects in miro buckets that no longer exist we should archive the contents then deleted the associated infrastructure
| 1
|
18,151
| 12,811,874,946
|
IssuesEvent
|
2020-07-04 01:59:41
|
CodeForBaltimore/Bmore-Responsive
|
https://api.github.com/repos/CodeForBaltimore/Bmore-Responsive
|
closed
|
Refactor Casbin db connection to remove SQL from logs
|
duplicate infrastructure
|
### Task
Currently, the Casbin db connection defaults to a robust logging. We do not need this level of detail in the logs for production.
### Acceptance Criteria
- [x] Casbin db connection implements options
|
1.0
|
Refactor Casbin db connection to remove SQL from logs - ### Task
Currently, the Casbin db connection defaults to a robust logging. We do not need this level of detail in the logs for production.
### Acceptance Criteria
- [x] Casbin db connection implements options
|
infrastructure
|
refactor casbin db connection to remove sql from logs task currently the casbin db connection defaults to a robust logging we do not need this level of detail in the logs for production acceptance criteria casbin db connection implements options
| 1
|
270,387
| 8,459,564,122
|
IssuesEvent
|
2018-10-22 16:16:49
|
aeternity/elixir-node
|
https://api.github.com/repos/aeternity/elixir-node
|
closed
|
Rox DB not working if Peristence GenServer crashes
|
bug discussion low-priority
|
So if the Persistence GenServer, crashes, it is being restarted by it's Supervisor.
Problem is that the way RoxDB is designed, at least in the library that we use, we cannot open it again until it is closed (at least that's what I have found).
But we cannot manually close the DB. By design it is made to be automatically closed when the BEAM VM garbage collects it:
Quoting:
`The database will automatically be closed when the BEAM VM releases it for garbage collection.`
So if we try to open the RoxDB again, we get the following message:
`{:error, "IO error: While lock file: /home/gspasov/1work/aeternity/elixir-node/test/LOCK: No locks available"}`
This means that we cannot get the reference of the DB, not of the families, until the VM garbage collects it, i.e. until we restart the project.
This faces us with the question of how do we deal with this issue. For me there are 2 options:
- Go around the problem and figure out a workaround (which is not a solution in my opinion). For me this will be to use another GenServer to keep the state of the DB in 2 places, so if one of them crashes we still have the db and families references;
- Maybe use another DB?
|
1.0
|
Rox DB not working if Peristence GenServer crashes - So if the Persistence GenServer, crashes, it is being restarted by it's Supervisor.
Problem is that the way RoxDB is designed, at least in the library that we use, we cannot open it again until it is closed (at least that's what I have found).
But we cannot manually close the DB. By design it is made to be automatically closed when the BEAM VM garbage collects it:
Quoting:
`The database will automatically be closed when the BEAM VM releases it for garbage collection.`
So if we try to open the RoxDB again, we get the following message:
`{:error, "IO error: While lock file: /home/gspasov/1work/aeternity/elixir-node/test/LOCK: No locks available"}`
This means that we cannot get the reference of the DB, not of the families, until the VM garbage collects it, i.e. until we restart the project.
This faces us with the question of how do we deal with this issue. For me there are 2 options:
- Go around the problem and figure out a workaround (which is not a solution in my opinion). For me this will be to use another GenServer to keep the state of the DB in 2 places, so if one of them crashes we still have the db and families references;
- Maybe use another DB?
|
non_infrastructure
|
rox db not working if peristence genserver crashes so if the persistence genserver crashes it is being restarted by it s supervisor problem is that the way roxdb is designed at least in the library that we use we cannot open it again until it is closed at least that s what i have found but we cannot manually close the db by design it is made to be automatically closed when the beam vm garbage collects it quoting the database will automatically be closed when the beam vm releases it for garbage collection so if we try to open the roxdb again we get the following message error io error while lock file home gspasov aeternity elixir node test lock no locks available this means that we cannot get the reference of the db not of the families until the vm garbage collects it i e until we restart the project this faces us with the question of how do we deal with this issue for me there are options go around the problem and figure out a workaround which is not a solution in my opinion for me this will be to use another genserver to keep the state of the db in places so if one of them crashes we still have the db and families references maybe use another db
| 0
|
172,303
| 13,299,988,935
|
IssuesEvent
|
2020-08-25 10:36:33
|
mattermost/mattermost-server
|
https://api.github.com/repos/mattermost/mattermost-server
|
closed
|
Write Cypress test: "MM-T385 Invite new user to closed team using email invite"
|
Area/E2E Tests Difficulty/1:Easy Hackfest Help Wanted
|
This is part of __Cypress Test Automation Hackfest 🚀__. Please read more at https://github.com/mattermost/mattermost-server/issues/15120.
See our [end-to-end testing documentation](https://developers.mattermost.com/contribute/webapp/end-to-end-tests/) for reference.
<article class="mb-32"><h1 class="text-6xl md:text-7xl lg:text-8xl font-bold tracking-tighter leading-tight md:leading-none mb-12 text-center md:text-left">MM-T385 Invite new user to closed team using email invite</h1><div class="max-w-2xl mx-auto"><div><h3>Steps </h3><ol><li>Ensure that Main Menu ➜ Team Settings ➜ Allow any user with an account on this server... is set to `No`</li><li>Ensure "Allow only users with a specific email domain to join this team" is blank (i.e. any email address can be invited)</li><li>Open Main Menu and click `Invite People`</li><li>Enter an email address you can access (test user may access email via inbucket)</li><li>Click `Invite Members`</li><li>Check your email, and open the email with subject line:</li><li>`[Mattermost] invited you to join Team</li><li>Open the `Join Team` link in a separate / incognito browser</li><li>Create a new account using the email address you sent the invite to</li></ol><h3>Test Data</h3><img src="https://smartbear-tm4j-prod-us-west-2-attachment-rich-text.s3.us-west-2.amazonaws.com/embedded-f3277290f945470c4add5d21ef3dc7ca7b74388fc7152bfb6b99ae58c66a95a8-1579118958795-2020-01-15_15-08-40.png" style="width: 175px;" class="fr-fil fr-dii"><img src="https://smartbear-tm4j-prod-us-west-2-attachment-rich-text.s3.us-west-2.amazonaws.com/embedded-f3277290f945470c4add5d21ef3dc7ca7b74388fc7152bfb6b99ae58c66a95a8-1579118985721-2020-01-15_15-07-48.png" style="width: 123px;" class="fr-fil fr-dii"><h3>Expected</h3>New user is viewing Town Square channel of that team and "Welcome to Mattermost" tutorial is displayed in the center channel<hr></div></div></article>
**Test Folder:** `/cypress/integration/team_settings`
**Test code arrangement:**
```
describe('Team Settings', () => {
it('MM-T385 Invite new user to closed team using email invite', () => {
// code
});
});
```
If you're interested, please comment here and come [join our "Contributors" community channel](https://community.mattermost.com/core/channels/tickets) on our daily build server, where you can discuss questions with community members and the Mattermost core team. For technical advice or questions, please [join our "Developers" community channel](https://community.mattermost.com/core/channels/developers).
New contributors please see our [Developer's Guide](https://developers.mattermost.com/contribute/getting-started/).
|
1.0
|
Write Cypress test: "MM-T385 Invite new user to closed team using email invite" - This is part of __Cypress Test Automation Hackfest 🚀__. Please read more at https://github.com/mattermost/mattermost-server/issues/15120.
See our [end-to-end testing documentation](https://developers.mattermost.com/contribute/webapp/end-to-end-tests/) for reference.
<article class="mb-32"><h1 class="text-6xl md:text-7xl lg:text-8xl font-bold tracking-tighter leading-tight md:leading-none mb-12 text-center md:text-left">MM-T385 Invite new user to closed team using email invite</h1><div class="max-w-2xl mx-auto"><div><h3>Steps </h3><ol><li>Ensure that Main Menu ➜ Team Settings ➜ Allow any user with an account on this server... is set to `No`</li><li>Ensure "Allow only users with a specific email domain to join this team" is blank (i.e. any email address can be invited)</li><li>Open Main Menu and click `Invite People`</li><li>Enter an email address you can access (test user may access email via inbucket)</li><li>Click `Invite Members`</li><li>Check your email, and open the email with subject line:</li><li>`[Mattermost] invited you to join Team</li><li>Open the `Join Team` link in a separate / incognito browser</li><li>Create a new account using the email address you sent the invite to</li></ol><h3>Test Data</h3><img src="https://smartbear-tm4j-prod-us-west-2-attachment-rich-text.s3.us-west-2.amazonaws.com/embedded-f3277290f945470c4add5d21ef3dc7ca7b74388fc7152bfb6b99ae58c66a95a8-1579118958795-2020-01-15_15-08-40.png" style="width: 175px;" class="fr-fil fr-dii"><img src="https://smartbear-tm4j-prod-us-west-2-attachment-rich-text.s3.us-west-2.amazonaws.com/embedded-f3277290f945470c4add5d21ef3dc7ca7b74388fc7152bfb6b99ae58c66a95a8-1579118985721-2020-01-15_15-07-48.png" style="width: 123px;" class="fr-fil fr-dii"><h3>Expected</h3>New user is viewing Town Square channel of that team and "Welcome to Mattermost" tutorial is displayed in the center channel<hr></div></div></article>
**Test Folder:** `/cypress/integration/team_settings`
**Test code arrangement:**
```
describe('Team Settings', () => {
it('MM-T385 Invite new user to closed team using email invite', () => {
// code
});
});
```
If you're interested, please comment here and come [join our "Contributors" community channel](https://community.mattermost.com/core/channels/tickets) on our daily build server, where you can discuss questions with community members and the Mattermost core team. For technical advice or questions, please [join our "Developers" community channel](https://community.mattermost.com/core/channels/developers).
New contributors please see our [Developer's Guide](https://developers.mattermost.com/contribute/getting-started/).
|
non_infrastructure
|
write cypress test mm invite new user to closed team using email invite this is part of cypress test automation hackfest 🚀 please read more at see our for reference mm invite new user to closed team using email invite steps ensure that main menu ➜ team settings ➜ allow any user with an account on this server is set to no ensure allow only users with a specific email domain to join this team is blank i e any email address can be invited open main menu and click invite people enter an email address you can access test user may access email via inbucket click invite members check your email and open the email with subject line invited you to join team open the join team link in a separate incognito browser create a new account using the email address you sent the invite to test data expected new user is viewing town square channel of that team and welcome to mattermost tutorial is displayed in the center channel test folder cypress integration team settings test code arrangement describe team settings it mm invite new user to closed team using email invite code if you re interested please comment here and come on our daily build server where you can discuss questions with community members and the mattermost core team for technical advice or questions please new contributors please see our
| 0
|
15,058
| 11,310,078,055
|
IssuesEvent
|
2020-01-19 17:12:59
|
vlsidlyarevich/ideal-shop
|
https://api.github.com/repos/vlsidlyarevich/ideal-shop
|
closed
|
Setup parent maven/gradle project
|
infrastructure
|
For the purposes of microservice development we need parent project which will hold our Spring cloud version and other libs/plugins. It can be placed in root of project BUT there should be no modules section because we want to have separated services to multiply all the advantages of using separated stuff.
|
1.0
|
Setup parent maven/gradle project - For the purposes of microservice development we need parent project which will hold our Spring cloud version and other libs/plugins. It can be placed in root of project BUT there should be no modules section because we want to have separated services to multiply all the advantages of using separated stuff.
|
infrastructure
|
setup parent maven gradle project for the purposes of microservice development we need parent project which will hold our spring cloud version and other libs plugins it can be placed in root of project but there should be no modules section because we want to have separated services to multiply all the advantages of using separated stuff
| 1
|
15,316
| 11,456,621,820
|
IssuesEvent
|
2020-02-06 21:38:18
|
enarx/enarx
|
https://api.github.com/repos/enarx/enarx
|
opened
|
pre-push tests run in the current working tree
|
infrastructure
|
This means we can get false positives and false negatives because we're evaluating code that isn't checked in.
|
1.0
|
pre-push tests run in the current working tree - This means we can get false positives and false negatives because we're evaluating code that isn't checked in.
|
infrastructure
|
pre push tests run in the current working tree this means we can get false positives and false negatives because we re evaluating code that isn t checked in
| 1
|
1,953
| 3,440,217,428
|
IssuesEvent
|
2015-12-14 13:38:50
|
hackndev/zinc
|
https://api.github.com/repos/hackndev/zinc
|
closed
|
Fix makefile to build cargoized examples
|
infrastructure nightly fallout ready
|
Makefile is currently broken from #318 and will not build examples as expected. The whole its existence is slightly questionable now, as it's basically pre and post-processing around cargo. Maybe we need to make a simple wrapper around cargo anyway (sounds like a reasonable option given how cargo isn't that much cross-build friendly)?
|
1.0
|
Fix makefile to build cargoized examples - Makefile is currently broken from #318 and will not build examples as expected. The whole its existence is slightly questionable now, as it's basically pre and post-processing around cargo. Maybe we need to make a simple wrapper around cargo anyway (sounds like a reasonable option given how cargo isn't that much cross-build friendly)?
|
infrastructure
|
fix makefile to build cargoized examples makefile is currently broken from and will not build examples as expected the whole its existence is slightly questionable now as it s basically pre and post processing around cargo maybe we need to make a simple wrapper around cargo anyway sounds like a reasonable option given how cargo isn t that much cross build friendly
| 1
|
13,313
| 10,199,053,276
|
IssuesEvent
|
2019-08-13 07:30:28
|
npgsql/npgsql
|
https://api.github.com/repos/npgsql/npgsql
|
closed
|
Move version prefix to directory build properties
|
infrastructure
|
All projects in the `src` directory should inherit `VersionPrefix` from the central place which is `Directory.Build.props`. The `bump.sh` script must be updated too.
|
1.0
|
Move version prefix to directory build properties - All projects in the `src` directory should inherit `VersionPrefix` from the central place which is `Directory.Build.props`. The `bump.sh` script must be updated too.
|
infrastructure
|
move version prefix to directory build properties all projects in the src directory should inherit versionprefix from the central place which is directory build props the bump sh script must be updated too
| 1
|
249,575
| 26,954,447,098
|
IssuesEvent
|
2023-02-08 14:01:58
|
simplycubed/terraform-google-static-assets
|
https://api.github.com/repos/simplycubed/terraform-google-static-assets
|
closed
|
CVE-2016-9123 (High) detected in github.com/docker/distribution-v2.8.1+incompatible - autoclosed
|
security vulnerability
|
## CVE-2016-9123 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/docker/distribution-v2.8.1+incompatible</b></p></summary>
<p></p>
<p>Library home page: <a href="https://proxy.golang.org/github.com/docker/distribution/@v/v2.8.1+incompatible.zip">https://proxy.golang.org/github.com/docker/distribution/@v/v2.8.1+incompatible.zip</a></p>
<p>
Dependency Hierarchy:
- github.com/gruntwork-io/terratest-v0.40.17 (Root Library)
- github.com/google/go-containerregistry-v0.9.0
- :x: **github.com/docker/distribution-v2.8.1+incompatible** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/simplycubed/terraform-google-static-assets/commit/e49e2f33b77657ce4ab7eac9abebafc4a1fd18ba">e49e2f33b77657ce4ab7eac9abebafc4a1fd18ba</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
go-jose before 1.0.5 suffers from a CBC-HMAC integer overflow on 32-bit architectures. An integer overflow could lead to authentication bypass for CBC-HMAC encrypted ciphertexts on 32-bit architectures.
<p>Publish Date: 2017-03-28
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-9123>CVE-2016-9123</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/GO-2020-0009">https://osv.dev/vulnerability/GO-2020-0009</a></p>
<p>Release Date: 2017-03-28</p>
<p>Fix Resolution: v1.0.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2016-9123 (High) detected in github.com/docker/distribution-v2.8.1+incompatible - autoclosed - ## CVE-2016-9123 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>github.com/docker/distribution-v2.8.1+incompatible</b></p></summary>
<p></p>
<p>Library home page: <a href="https://proxy.golang.org/github.com/docker/distribution/@v/v2.8.1+incompatible.zip">https://proxy.golang.org/github.com/docker/distribution/@v/v2.8.1+incompatible.zip</a></p>
<p>
Dependency Hierarchy:
- github.com/gruntwork-io/terratest-v0.40.17 (Root Library)
- github.com/google/go-containerregistry-v0.9.0
- :x: **github.com/docker/distribution-v2.8.1+incompatible** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/simplycubed/terraform-google-static-assets/commit/e49e2f33b77657ce4ab7eac9abebafc4a1fd18ba">e49e2f33b77657ce4ab7eac9abebafc4a1fd18ba</a></p>
<p>Found in base branch: <b>master</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
go-jose before 1.0.5 suffers from a CBC-HMAC integer overflow on 32-bit architectures. An integer overflow could lead to authentication bypass for CBC-HMAC encrypted ciphertexts on 32-bit architectures.
<p>Publish Date: 2017-03-28
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2016-9123>CVE-2016-9123</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: None
- Integrity Impact: High
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/GO-2020-0009">https://osv.dev/vulnerability/GO-2020-0009</a></p>
<p>Release Date: 2017-03-28</p>
<p>Fix Resolution: v1.0.5</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_infrastructure
|
cve high detected in github com docker distribution incompatible autoclosed cve high severity vulnerability vulnerable library github com docker distribution incompatible library home page a href dependency hierarchy github com gruntwork io terratest root library github com google go containerregistry x github com docker distribution incompatible vulnerable library found in head commit a href found in base branch master vulnerability details go jose before suffers from a cbc hmac integer overflow on bit architectures an integer overflow could lead to authentication bypass for cbc hmac encrypted ciphertexts on bit architectures publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
132,305
| 10,740,997,231
|
IssuesEvent
|
2019-10-29 19:18:53
|
OpenLiberty/open-liberty
|
https://api.github.com/repos/OpenLiberty/open-liberty
|
opened
|
Test Failure: com.ibm.ws.threading.policy.PolicyExecutorTest.testGroupedSubmits
|
team:Zombie Apocalypse test bug
|
```
testGroupedSubmits:junit.framework.AssertionFailedError: 2019-10-26-16:50:02:473 The response did not contain [SUCCESS]. Full output is:
ERROR: Caught exception attempting to call test method testGroupedSubmits on servlet web.PolicyExecutorServlet
java.util.concurrent.ExecutionException: java.lang.IllegalStateException: Attempted arrival of unregistered party for java.util.concurrent.Phaser@86a90676[phase = 3 parties = 8 arrived = 8]
at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:205)
at web.PolicyExecutorServlet.testGroupedSubmits(PolicyExecutorServlet.java:1751)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at componenttest.app.FATServlet.doGet(FATServlet.java:71)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1230)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:729)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:426)
at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1218)
at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1002)
at com.ibm.ws.webcontainer.servlet.CacheServletWrapper.handleRequest(CacheServletWrapper.java:75)
at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:938)
at com.ibm.ws.webcontainer.osgi.DynamicVirtualHost$2.run(DynamicVirtualHost.java:279)
at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink$TaskWrapper.run(HttpDispatcherLink.java:1136)
at com.ibm.ws.http.dispat
```
This test failure occurs due to a subtle behavior of java.util.concurrent.Phaser.
The test case relies upon phaser.arriveAndAwaitAdvance, which it falsely assumes to be atomic. JavaDoc, however, states that it is equivalent to awaitAdvance(arrive()). This is important because, with arrive being an independent operation from the advance, it becomes possible, upon reaching phase 3 for accumulating tasks from the previous group (the ones intended for phase 3) to overlap the arrive operations from those that are intended for phase 4. This means there is a timing window for more than 8 parties to attempt to arrive at phase 3, thus causing the failure:
```
java.lang.IllegalStateException: Attempted arrival of unregistered party for java.util.concurrent.Phaser@86a90676[phase = 3 parties = 8 arrived = 8]
```
Here is one way the problem can occur:
8 tasks from first group attempt to arriveAndWaitForAdvance at phase 0.
After 6 exit the method, 2 (from this first group) can remain in progress.
8 tasks from the second group attempt to arriveAndWaitForAdvance for phase 1.
The 2 from the first group and 4 from the second group exit the method,
leaving 4 (from the second group) in progress.
8 tasks from the third group attempt to arriveAndWaitForAdvance for phase 2.
The 4 from the second group and 2 from the third group exit the method,
leaving 6 (from the third group) in progress.
8 tasks from the fourth group attempt to arriveAndWaitForAdvance for phase 3.
The 6 from the third group exit the method,
leaving all 8 (from the fourth group) in progress.
8 tasks from the fifth group attempt to arriveAndWaitForAdvance for phase 4,
however, nothing has forced phase 3 to have ended at this point and so any number of these could attempt to arrive into phase 3 and fail due to extra unregistered parties.
The simplest correction to the test that otherwise preserves its logic would be to eliminate the final group of submits such that there is no group 5 to make an unreliable attempt at a fourth phase, instead making 3 the final phase.
|
1.0
|
Test Failure: com.ibm.ws.threading.policy.PolicyExecutorTest.testGroupedSubmits - ```
testGroupedSubmits:junit.framework.AssertionFailedError: 2019-10-26-16:50:02:473 The response did not contain [SUCCESS]. Full output is:
ERROR: Caught exception attempting to call test method testGroupedSubmits on servlet web.PolicyExecutorServlet
java.util.concurrent.ExecutionException: java.lang.IllegalStateException: Attempted arrival of unregistered party for java.util.concurrent.Phaser@86a90676[phase = 3 parties = 8 arrived = 8]
at java.base/java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.base/java.util.concurrent.FutureTask.get(FutureTask.java:205)
at web.PolicyExecutorServlet.testGroupedSubmits(PolicyExecutorServlet.java:1751)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at componenttest.app.FATServlet.doGet(FATServlet.java:71)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.service(ServletWrapper.java:1230)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:729)
at com.ibm.ws.webcontainer.servlet.ServletWrapper.handleRequest(ServletWrapper.java:426)
at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1218)
at com.ibm.ws.webcontainer.filter.WebAppFilterManager.invokeFilters(WebAppFilterManager.java:1002)
at com.ibm.ws.webcontainer.servlet.CacheServletWrapper.handleRequest(CacheServletWrapper.java:75)
at com.ibm.ws.webcontainer.WebContainer.handleRequest(WebContainer.java:938)
at com.ibm.ws.webcontainer.osgi.DynamicVirtualHost$2.run(DynamicVirtualHost.java:279)
at com.ibm.ws.http.dispatcher.internal.channel.HttpDispatcherLink$TaskWrapper.run(HttpDispatcherLink.java:1136)
at com.ibm.ws.http.dispat
```
This test failure occurs due to a subtle behavior of java.util.concurrent.Phaser.
The test case relies upon phaser.arriveAndAwaitAdvance, which it falsely assumes to be atomic. JavaDoc, however, states that it is equivalent to awaitAdvance(arrive()). This is important because, with arrive being an independent operation from the advance, it becomes possible, upon reaching phase 3 for accumulating tasks from the previous group (the ones intended for phase 3) to overlap the arrive operations from those that are intended for phase 4. This means there is a timing window for more than 8 parties to attempt to arrive at phase 3, thus causing the failure:
```
java.lang.IllegalStateException: Attempted arrival of unregistered party for java.util.concurrent.Phaser@86a90676[phase = 3 parties = 8 arrived = 8]
```
Here is one way the problem can occur:
8 tasks from first group attempt to arriveAndWaitForAdvance at phase 0.
After 6 exit the method, 2 (from this first group) can remain in progress.
8 tasks from the second group attempt to arriveAndWaitForAdvance for phase 1.
The 2 from the first group and 4 from the second group exit the method,
leaving 4 (from the second group) in progress.
8 tasks from the third group attempt to arriveAndWaitForAdvance for phase 2.
The 4 from the second group and 2 from the third group exit the method,
leaving 6 (from the third group) in progress.
8 tasks from the fourth group attempt to arriveAndWaitForAdvance for phase 3.
The 6 from the third group exit the method,
leaving all 8 (from the fourth group) in progress.
8 tasks from the fifth group attempt to arriveAndWaitForAdvance for phase 4,
however, nothing has forced phase 3 to have ended at this point and so any number of these could attempt to arrive into phase 3 and fail due to extra unregistered parties.
The simplest correction to the test that otherwise preserves its logic would be to eliminate the final group of submits such that there is no group 5 to make an unreliable attempt at a fourth phase, instead making 3 the final phase.
|
non_infrastructure
|
test failure com ibm ws threading policy policyexecutortest testgroupedsubmits testgroupedsubmits junit framework assertionfailederror the response did not contain full output is error caught exception attempting to call test method testgroupedsubmits on servlet web policyexecutorservlet java util concurrent executionexception java lang illegalstateexception attempted arrival of unregistered party for java util concurrent phaser at java base java util concurrent futuretask report futuretask java at java base java util concurrent futuretask get futuretask java at web policyexecutorservlet testgroupedsubmits policyexecutorservlet java at java base jdk internal reflect nativemethodaccessorimpl native method at java base jdk internal reflect nativemethodaccessorimpl invoke nativemethodaccessorimpl java at java base jdk internal reflect delegatingmethodaccessorimpl invoke delegatingmethodaccessorimpl java at componenttest app fatservlet doget fatservlet java at javax servlet http httpservlet service httpservlet java at javax servlet http httpservlet service httpservlet java at com ibm ws webcontainer servlet servletwrapper service servletwrapper java at com ibm ws webcontainer servlet servletwrapper handlerequest servletwrapper java at com ibm ws webcontainer servlet servletwrapper handlerequest servletwrapper java at com ibm ws webcontainer filter webappfiltermanager invokefilters webappfiltermanager java at com ibm ws webcontainer filter webappfiltermanager invokefilters webappfiltermanager java at com ibm ws webcontainer servlet cacheservletwrapper handlerequest cacheservletwrapper java at com ibm ws webcontainer webcontainer handlerequest webcontainer java at com ibm ws webcontainer osgi dynamicvirtualhost run dynamicvirtualhost java at com ibm ws http dispatcher internal channel httpdispatcherlink taskwrapper run httpdispatcherlink java at com ibm ws http dispat this test failure occurs due to a subtle behavior of java util concurrent phaser the test case relies upon phaser arriveandawaitadvance which it falsely assumes to be atomic javadoc however states that it is equivalent to awaitadvance arrive this is important because with arrive being an independent operation from the advance it becomes possible upon reaching phase for accumulating tasks from the previous group the ones intended for phase to overlap the arrive operations from those that are intended for phase this means there is a timing window for more than parties to attempt to arrive at phase thus causing the failure java lang illegalstateexception attempted arrival of unregistered party for java util concurrent phaser here is one way the problem can occur tasks from first group attempt to arriveandwaitforadvance at phase after exit the method from this first group can remain in progress tasks from the second group attempt to arriveandwaitforadvance for phase the from the first group and from the second group exit the method leaving from the second group in progress tasks from the third group attempt to arriveandwaitforadvance for phase the from the second group and from the third group exit the method leaving from the third group in progress tasks from the fourth group attempt to arriveandwaitforadvance for phase the from the third group exit the method leaving all from the fourth group in progress tasks from the fifth group attempt to arriveandwaitforadvance for phase however nothing has forced phase to have ended at this point and so any number of these could attempt to arrive into phase and fail due to extra unregistered parties the simplest correction to the test that otherwise preserves its logic would be to eliminate the final group of submits such that there is no group to make an unreliable attempt at a fourth phase instead making the final phase
| 0
|
33,330
| 27,392,187,434
|
IssuesEvent
|
2023-02-28 16:58:57
|
celestiaorg/test-infra
|
https://api.github.com/repos/celestiaorg/test-infra
|
closed
|
testground/app/infra: Piping metrics from validators into influxdb
|
enhancement testground infrastructure
|
ATM, celestia-app/core has all the metrics necessary to analyse network behaviour from a validator's perspective.
We need to find a way how to pipe all those emitted metrics into testground's influxDB for post-execution analysis
|
1.0
|
testground/app/infra: Piping metrics from validators into influxdb - ATM, celestia-app/core has all the metrics necessary to analyse network behaviour from a validator's perspective.
We need to find a way how to pipe all those emitted metrics into testground's influxDB for post-execution analysis
|
infrastructure
|
testground app infra piping metrics from validators into influxdb atm celestia app core has all the metrics necessary to analyse network behaviour from a validator s perspective we need to find a way how to pipe all those emitted metrics into testground s influxdb for post execution analysis
| 1
|
17,446
| 12,037,653,094
|
IssuesEvent
|
2020-04-13 22:21:41
|
geneontology/pipeline
|
https://api.github.com/repos/geneontology/pipeline
|
closed
|
Pipeline fails on ecocyc sanity check
|
bug (B: affects usability)
|
Currently, due to crossing the a date watershed, ecocyc fails the Sanity I category, halting the pipeline.
@pgaudet waiting for feedback from ecocyc about whether the dropped IEAs can be reduced.
In the interim, just so things like testing and ontology releases can go forward, I'll be easing the restrictions on ecosys in sanity checks.
|
True
|
Pipeline fails on ecocyc sanity check - Currently, due to crossing the a date watershed, ecocyc fails the Sanity I category, halting the pipeline.
@pgaudet waiting for feedback from ecocyc about whether the dropped IEAs can be reduced.
In the interim, just so things like testing and ontology releases can go forward, I'll be easing the restrictions on ecosys in sanity checks.
|
non_infrastructure
|
pipeline fails on ecocyc sanity check currently due to crossing the a date watershed ecocyc fails the sanity i category halting the pipeline pgaudet waiting for feedback from ecocyc about whether the dropped ieas can be reduced in the interim just so things like testing and ontology releases can go forward i ll be easing the restrictions on ecosys in sanity checks
| 0
|
11,356
| 9,115,954,562
|
IssuesEvent
|
2019-02-22 07:23:02
|
askmench/mench-web-app
|
https://api.github.com/repos/askmench/mench-web-app
|
closed
|
DB Time estimate in seconds
|
DB/Server/Infrastructure
|
Currently, time is stored in hours which causes some issues when rounding down. Need to convert all to seconds to remove rounding errors
|
1.0
|
DB Time estimate in seconds - Currently, time is stored in hours which causes some issues when rounding down. Need to convert all to seconds to remove rounding errors
|
infrastructure
|
db time estimate in seconds currently time is stored in hours which causes some issues when rounding down need to convert all to seconds to remove rounding errors
| 1
|
65,664
| 12,652,433,675
|
IssuesEvent
|
2020-06-17 03:34:26
|
microsoft/Azure-Kinect-Sensor-SDK
|
https://api.github.com/repos/microsoft/Azure-Kinect-Sensor-SDK
|
opened
|
Error E1696 cannot open source file "k4a/k4a.hpp" | green screen example
|
Bug Code Sample Triage Needed
|
When trying to build ALL_BUILD in the green screen project within Visual Studio 2019, I get the following error:
`Error (active) E1696 cannot open source file "k4a/k4a.hpp"`
I've tried:
- Installing the Kinect Azure libraries via NuGet
- Including a k4a folder in the project root with k4a.hpp inside,
- Right clicking _ALL_BUILD → Properties → Configuration Properties → VC++ Directories_ and adding the path to k4a.hpp under _Include Directories_.
**To Reproduce**
1. Use CMake GUI to configure and generate project files.
2. Open Project.sln
3. Right click ALL_BUILD in Solution Explorer
4. Click Build
5. Error appears in Error List
**Desktop (please complete the following information):**
- Windows 10 Version 1909 for x64
- Azure Kinect SDK v1.4.0
|
1.0
|
Error E1696 cannot open source file "k4a/k4a.hpp" | green screen example - When trying to build ALL_BUILD in the green screen project within Visual Studio 2019, I get the following error:
`Error (active) E1696 cannot open source file "k4a/k4a.hpp"`
I've tried:
- Installing the Kinect Azure libraries via NuGet
- Including a k4a folder in the project root with k4a.hpp inside,
- Right clicking _ALL_BUILD → Properties → Configuration Properties → VC++ Directories_ and adding the path to k4a.hpp under _Include Directories_.
**To Reproduce**
1. Use CMake GUI to configure and generate project files.
2. Open Project.sln
3. Right click ALL_BUILD in Solution Explorer
4. Click Build
5. Error appears in Error List
**Desktop (please complete the following information):**
- Windows 10 Version 1909 for x64
- Azure Kinect SDK v1.4.0
|
non_infrastructure
|
error cannot open source file hpp green screen example when trying to build all build in the green screen project within visual studio i get the following error error active cannot open source file hpp i ve tried installing the kinect azure libraries via nuget including a folder in the project root with hpp inside right clicking all build → properties → configuration properties → vc directories and adding the path to hpp under include directories to reproduce use cmake gui to configure and generate project files open project sln right click all build in solution explorer click build error appears in error list desktop please complete the following information windows version for azure kinect sdk
| 0
|
256,755
| 19,457,376,086
|
IssuesEvent
|
2021-12-23 01:47:10
|
JosephJamesCoop/your-portland-itinerary
|
https://api.github.com/repos/JosephJamesCoop/your-portland-itinerary
|
closed
|
Local Storage
|
documentation enhancement
|
incorporate client-side storage to store persistent data. Allow application to retain clients itinerary add ons or removals.
|
1.0
|
Local Storage - incorporate client-side storage to store persistent data. Allow application to retain clients itinerary add ons or removals.
|
non_infrastructure
|
local storage incorporate client side storage to store persistent data allow application to retain clients itinerary add ons or removals
| 0
|
245,530
| 26,549,261,612
|
IssuesEvent
|
2023-01-20 05:26:28
|
nidhi7598/linux-3.0.35_CVE-2022-45934
|
https://api.github.com/repos/nidhi7598/linux-3.0.35_CVE-2022-45934
|
opened
|
WS-2022-0018 (High) detected in linuxlinux-3.0.49
|
security vulnerability
|
## WS-2022-0018 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-3.0.49</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v3.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v3.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-3.0.35_CVE-2022-45934/commit/5e23b7f9d2dd0154edd54986754eecd5b5308571">5e23b7f9d2dd0154edd54986754eecd5b5308571</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/ipv4/af_inet.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
net: fix use-after-free in tw_timer_handler
<p>Publish Date: 2022-01-11
<p>URL: <a href=https://github.com/gregkh/linux/commit/08eacbd141e2495d2fcdde84358a06c4f95cbb13>WS-2022-0018</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/GSD-2022-1000053">https://osv.dev/vulnerability/GSD-2022-1000053</a></p>
<p>Release Date: 2022-01-11</p>
<p>Fix Resolution: v5.15.13</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
WS-2022-0018 (High) detected in linuxlinux-3.0.49 - ## WS-2022-0018 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-3.0.49</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v3.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v3.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/nidhi7598/linux-3.0.35_CVE-2022-45934/commit/5e23b7f9d2dd0154edd54986754eecd5b5308571">5e23b7f9d2dd0154edd54986754eecd5b5308571</a></p>
<p>Found in base branch: <b>master</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (1)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/net/ipv4/af_inet.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
net: fix use-after-free in tw_timer_handler
<p>Publish Date: 2022-01-11
<p>URL: <a href=https://github.com/gregkh/linux/commit/08eacbd141e2495d2fcdde84358a06c4f95cbb13>WS-2022-0018</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>7.8</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://osv.dev/vulnerability/GSD-2022-1000053">https://osv.dev/vulnerability/GSD-2022-1000053</a></p>
<p>Release Date: 2022-01-11</p>
<p>Fix Resolution: v5.15.13</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_infrastructure
|
ws high detected in linuxlinux ws high severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch master vulnerable source files net af inet c vulnerability details net fix use after free in tw timer handler publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend
| 0
|
463,378
| 13,264,268,879
|
IssuesEvent
|
2020-08-21 03:08:12
|
mikeshardmind/SinbadCogs
|
https://api.github.com/repos/mikeshardmind/SinbadCogs
|
closed
|
[V3 RSS] Multiple update improvement.
|
Low priority blocked enhancement
|
On high traffic feeds (ex. a reddit rss feed) there needs to be a way to collect multiple posts into one message from the bot. Should be optional to prevent it from being a formatting issue.
|
1.0
|
[V3 RSS] Multiple update improvement. - On high traffic feeds (ex. a reddit rss feed) there needs to be a way to collect multiple posts into one message from the bot. Should be optional to prevent it from being a formatting issue.
|
non_infrastructure
|
multiple update improvement on high traffic feeds ex a reddit rss feed there needs to be a way to collect multiple posts into one message from the bot should be optional to prevent it from being a formatting issue
| 0
|
35,579
| 14,749,831,448
|
IssuesEvent
|
2021-01-08 00:27:32
|
Azure/azure-sdk-for-net
|
https://api.github.com/repos/Azure/azure-sdk-for-net
|
closed
|
FrontDoor FrontendEndpoint update did report MetodNotAllowed
|
App Services Mgmt Service Attention customer-reported needs-team-attention question
|
We are working on a new platform. We wane use FrontDoor but for that, we need to automate the Certification Replace Process. I Downloaded the Preview SDK and try to implement that. Because it's all-new there is now Documentation or Examples available at the moment. I am not sure if I di something wrong or if it’s a bug or not implemented yet. I Would Really appreciate some help.
The following code illustrates what I try to do.
```
var sp = new ServicePrincipalLoginInformation
{
ClientId = "xxxxxxx-xxxxxxxxx-xxxxxx",
ClientSecret = "xxxxxxx-xxxxxxxxx-xxxxxx"
};
var credentials = new AzureCredentials(sp, context.Config.Azure.TenantId, AzureEnvironment.AzureGlobalCloud);
var client = new FrontDoorManagementClient(credentials)
{
SubscriptionId = "xxxxxxx-xxxxxxxxx-xxxxxx"
};
//Getting all FrontDoor instances
var list = await client.FrontDoors.ListWithHttpMessagesAsync();
//Select the FrontDoor instance
var ff = list.Body.Single(e => e.FriendlyName == frontDoorName);
//Select the FrontendEndpoint by Hostname
var root = ff.FrontendEndpoints.Single(e => e.HostName == domain);
//KeyVault ResourceID
var id = "/subscriptions/xxxxxxx-xxxxxxxxx-xxxxxx/resourceGroups/xxxxxxxxx/providers/Microsoft.KeyVault/vaults/xxxxxxxxx";
//Clone the Endpoint and add KeyVault Certificate config
var endpoint = new FrontendEndpoint(
id: root.Id,
hostName: root.HostName,
sessionAffinityEnabledState: root.SessionAffinityEnabledState,
webApplicationFirewallPolicyLink: root.WebApplicationFirewallPolicyLink,
name: root.Name,
sessionAffinityTtlSeconds: root.SessionAffinityTtlSeconds,
customHttpsConfiguration: new CustomHttpsConfiguration
{
CertificateSource = "AzureKeyVault",
Vault = new KeyVaultCertificateSourceParametersVault(id: id),
SecretName = "XXX",
SecretVersion = "XXX"
});
//Update -- Call failed: Operation returned an invalid status code 'MethodNotAllowed'
await client.FrontendEndpoints.CreateOrUpdateAsync(resourceGroup, frontDoorName, root.Name, endpoint);
```
|
2.0
|
FrontDoor FrontendEndpoint update did report MetodNotAllowed - We are working on a new platform. We wane use FrontDoor but for that, we need to automate the Certification Replace Process. I Downloaded the Preview SDK and try to implement that. Because it's all-new there is now Documentation or Examples available at the moment. I am not sure if I di something wrong or if it’s a bug or not implemented yet. I Would Really appreciate some help.
The following code illustrates what I try to do.
```
var sp = new ServicePrincipalLoginInformation
{
ClientId = "xxxxxxx-xxxxxxxxx-xxxxxx",
ClientSecret = "xxxxxxx-xxxxxxxxx-xxxxxx"
};
var credentials = new AzureCredentials(sp, context.Config.Azure.TenantId, AzureEnvironment.AzureGlobalCloud);
var client = new FrontDoorManagementClient(credentials)
{
SubscriptionId = "xxxxxxx-xxxxxxxxx-xxxxxx"
};
//Getting all FrontDoor instances
var list = await client.FrontDoors.ListWithHttpMessagesAsync();
//Select the FrontDoor instance
var ff = list.Body.Single(e => e.FriendlyName == frontDoorName);
//Select the FrontendEndpoint by Hostname
var root = ff.FrontendEndpoints.Single(e => e.HostName == domain);
//KeyVault ResourceID
var id = "/subscriptions/xxxxxxx-xxxxxxxxx-xxxxxx/resourceGroups/xxxxxxxxx/providers/Microsoft.KeyVault/vaults/xxxxxxxxx";
//Clone the Endpoint and add KeyVault Certificate config
var endpoint = new FrontendEndpoint(
id: root.Id,
hostName: root.HostName,
sessionAffinityEnabledState: root.SessionAffinityEnabledState,
webApplicationFirewallPolicyLink: root.WebApplicationFirewallPolicyLink,
name: root.Name,
sessionAffinityTtlSeconds: root.SessionAffinityTtlSeconds,
customHttpsConfiguration: new CustomHttpsConfiguration
{
CertificateSource = "AzureKeyVault",
Vault = new KeyVaultCertificateSourceParametersVault(id: id),
SecretName = "XXX",
SecretVersion = "XXX"
});
//Update -- Call failed: Operation returned an invalid status code 'MethodNotAllowed'
await client.FrontendEndpoints.CreateOrUpdateAsync(resourceGroup, frontDoorName, root.Name, endpoint);
```
|
non_infrastructure
|
frontdoor frontendendpoint update did report metodnotallowed we are working on a new platform we wane use frontdoor but for that we need to automate the certification replace process i downloaded the preview sdk and try to implement that because it s all new there is now documentation or examples available at the moment i am not sure if i di something wrong or if it’s a bug or not implemented yet i would really appreciate some help the following code illustrates what i try to do var sp new serviceprincipallogininformation clientid xxxxxxx xxxxxxxxx xxxxxx clientsecret xxxxxxx xxxxxxxxx xxxxxx var credentials new azurecredentials sp context config azure tenantid azureenvironment azureglobalcloud var client new frontdoormanagementclient credentials subscriptionid xxxxxxx xxxxxxxxx xxxxxx getting all frontdoor instances var list await client frontdoors listwithhttpmessagesasync select the frontdoor instance var ff list body single e e friendlyname frontdoorname select the frontendendpoint by hostname var root ff frontendendpoints single e e hostname domain keyvault resourceid var id subscriptions xxxxxxx xxxxxxxxx xxxxxx resourcegroups xxxxxxxxx providers microsoft keyvault vaults xxxxxxxxx clone the endpoint and add keyvault certificate config var endpoint new frontendendpoint id root id hostname root hostname sessionaffinityenabledstate root sessionaffinityenabledstate webapplicationfirewallpolicylink root webapplicationfirewallpolicylink name root name sessionaffinityttlseconds root sessionaffinityttlseconds customhttpsconfiguration new customhttpsconfiguration certificatesource azurekeyvault vault new keyvaultcertificatesourceparametersvault id id secretname xxx secretversion xxx update call failed operation returned an invalid status code methodnotallowed await client frontendendpoints createorupdateasync resourcegroup frontdoorname root name endpoint
| 0
|
19,770
| 5,932,256,796
|
IssuesEvent
|
2017-05-24 08:53:17
|
jtreml/f1ticker
|
https://api.github.com/repos/jtreml/f1ticker
|
opened
|
Visual Improvements
|
CodePlex
|
<b>juergentreml[CodePlex]</b> <br />Adjusted border colors for flyout window and gadget itself, Adjusted text size and inserted horizontal lines for spacing, Corrected bugs regarding content aligning and justifying in the gadget
|
1.0
|
Visual Improvements - <b>juergentreml[CodePlex]</b> <br />Adjusted border colors for flyout window and gadget itself, Adjusted text size and inserted horizontal lines for spacing, Corrected bugs regarding content aligning and justifying in the gadget
|
non_infrastructure
|
visual improvements juergentreml adjusted border colors for flyout window and gadget itself adjusted text size and inserted horizontal lines for spacing corrected bugs regarding content aligning and justifying in the gadget
| 0
|
230,157
| 18,508,006,794
|
IssuesEvent
|
2021-10-19 21:12:02
|
nbrugger-tgm/reactj
|
https://api.github.com/repos/nbrugger-tgm/reactj
|
closed
|
[CI] Add code-coverage with Codacy
|
testing
|
As Codacy analysis works better than code-climate i would like code-coverage reports to be sent to codacy
Ref : https://docs.codacy.com/coverage-reporter/#generating-coverage
Integrate the test reporting into `CircleCI` since the format there is easier than Github Actions
|
1.0
|
[CI] Add code-coverage with Codacy - As Codacy analysis works better than code-climate i would like code-coverage reports to be sent to codacy
Ref : https://docs.codacy.com/coverage-reporter/#generating-coverage
Integrate the test reporting into `CircleCI` since the format there is easier than Github Actions
|
non_infrastructure
|
add code coverage with codacy as codacy analysis works better than code climate i would like code coverage reports to be sent to codacy ref integrate the test reporting into circleci since the format there is easier than github actions
| 0
|
27,346
| 21,648,052,592
|
IssuesEvent
|
2022-05-06 06:01:28
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
closed
|
Reevaluate tests/tests*OutsideWindows.txt files for .NET Core 2.0
|
test-enhancement area-Infrastructure-coreclr no-recent-activity backlog-cleanup-candidate
|
Make sure these files accurately reflect the state of tests in the tree. Some may be passing now due to netstandard2.0 work for example.
|
1.0
|
Reevaluate tests/tests*OutsideWindows.txt files for .NET Core 2.0 - Make sure these files accurately reflect the state of tests in the tree. Some may be passing now due to netstandard2.0 work for example.
|
infrastructure
|
reevaluate tests tests outsidewindows txt files for net core make sure these files accurately reflect the state of tests in the tree some may be passing now due to work for example
| 1
|
112,440
| 9,574,617,838
|
IssuesEvent
|
2019-05-07 02:36:56
|
codice/ddf
|
https://api.github.com/repos/codice/ddf
|
closed
|
Add unit tests for map settings, info, and context menu
|
:microscope: Test Improvements
|
<!--
Have you read DDF's Code of Conduct? By filing an Issue, you are
expected to comply with it, including treating everyone with respect:
https://github.com/codice/ddf/blob/master/.github/CODE_OF_CONDUCT.md
Do you want to ask a question? Are you looking for support? The DDF
Developers group - https://groups.google.com/forum/#!forum/ddf-developers
is the best place for getting support.
-->
### Description
Add unit tests for map-settings/info/context-menu.
#### Expected behavior:
Unit tests for map-settings, map-info, map-context-menu run during build process.
### Version
N/A
### Additional Information
N/A
|
1.0
|
Add unit tests for map settings, info, and context menu - <!--
Have you read DDF's Code of Conduct? By filing an Issue, you are
expected to comply with it, including treating everyone with respect:
https://github.com/codice/ddf/blob/master/.github/CODE_OF_CONDUCT.md
Do you want to ask a question? Are you looking for support? The DDF
Developers group - https://groups.google.com/forum/#!forum/ddf-developers
is the best place for getting support.
-->
### Description
Add unit tests for map-settings/info/context-menu.
#### Expected behavior:
Unit tests for map-settings, map-info, map-context-menu run during build process.
### Version
N/A
### Additional Information
N/A
|
non_infrastructure
|
add unit tests for map settings info and context menu have you read ddf s code of conduct by filing an issue you are expected to comply with it including treating everyone with respect do you want to ask a question are you looking for support the ddf developers group is the best place for getting support description add unit tests for map settings info context menu expected behavior unit tests for map settings map info map context menu run during build process version n a additional information n a
| 0
|
4,049
| 4,788,692,599
|
IssuesEvent
|
2016-10-30 18:03:12
|
LOZORD/xanadu
|
https://api.github.com/repos/LOZORD/xanadu
|
closed
|
Add testing
|
hacktoberfest help wanted infrastructure
|
Testings should be ran as `npm run test`. I'm enforcing (:sunglasses:) that we use Mocha, Chai, and Sinon for testing (as `devDependencies`). The testing directory `test/` structure should mimic exactly the structure of `dist/` (which mimics `src/` via Babel).
|
1.0
|
Add testing - Testings should be ran as `npm run test`. I'm enforcing (:sunglasses:) that we use Mocha, Chai, and Sinon for testing (as `devDependencies`). The testing directory `test/` structure should mimic exactly the structure of `dist/` (which mimics `src/` via Babel).
|
infrastructure
|
add testing testings should be ran as npm run test i m enforcing sunglasses that we use mocha chai and sinon for testing as devdependencies the testing directory test structure should mimic exactly the structure of dist which mimics src via babel
| 1
|
8,515
| 7,463,571,918
|
IssuesEvent
|
2018-04-01 07:30:22
|
RITlug/TigerOS
|
https://api.github.com/repos/RITlug/TigerOS
|
opened
|
Remove dash-to-dock repackaging from RITlug repositories and mirrors website
|
duplicate easyfix infrastructure priority:low
|
<!--
Thanks for filing a new issue on TigerOS! To help us help you, please use
this template for filing your bug, feature request, or other topic.
If you use this template, it helps the developers review your ticket and
figure out the problem. If you don't use this template, we may close your
issue as not enough information.
-->
# Summary
Since repackaging dash-to-dock, an official RPM has been created for this package. Thus, our repackage is now no longer necessary.
<!--
Choose the type of issue you are filing. You can choose one by typing [X]
in one of the fields. For example, if a bug report, change the line below
to…
[X] Bug report
-->
* This issue is a…
* [ ] Bug report
* [ ] Feature request
* [X] Other issue
* [ ] Question <!-- Please read the wiki first! -->
* **Describe the issue / feature in 1-2 sentences**:
# Details
The builder.ritlug.com website also currently hosts the repkg dash-to-dock. This can be removed due to the new [dash-to-dock](https://github.com/RITlug/tigeros-dash-to-dock "TigerOS dash-to-dock") package no longer needing this repackage.
<!--
If you have other details to include, like screenshots, stacktraces, or
something more detailed, please include it here!
If you have a long stacktrace, DO NOT PASTE IT HERE! Please use Pastebin
and add a link here.
-->
<!--
Phew, all done! Thank you so much for filing a new issue! We'll try to get
back to you soon.
-->
|
1.0
|
Remove dash-to-dock repackaging from RITlug repositories and mirrors website - <!--
Thanks for filing a new issue on TigerOS! To help us help you, please use
this template for filing your bug, feature request, or other topic.
If you use this template, it helps the developers review your ticket and
figure out the problem. If you don't use this template, we may close your
issue as not enough information.
-->
# Summary
Since repackaging dash-to-dock, an official RPM has been created for this package. Thus, our repackage is now no longer necessary.
<!--
Choose the type of issue you are filing. You can choose one by typing [X]
in one of the fields. For example, if a bug report, change the line below
to…
[X] Bug report
-->
* This issue is a…
* [ ] Bug report
* [ ] Feature request
* [X] Other issue
* [ ] Question <!-- Please read the wiki first! -->
* **Describe the issue / feature in 1-2 sentences**:
# Details
The builder.ritlug.com website also currently hosts the repkg dash-to-dock. This can be removed due to the new [dash-to-dock](https://github.com/RITlug/tigeros-dash-to-dock "TigerOS dash-to-dock") package no longer needing this repackage.
<!--
If you have other details to include, like screenshots, stacktraces, or
something more detailed, please include it here!
If you have a long stacktrace, DO NOT PASTE IT HERE! Please use Pastebin
and add a link here.
-->
<!--
Phew, all done! Thank you so much for filing a new issue! We'll try to get
back to you soon.
-->
|
infrastructure
|
remove dash to dock repackaging from ritlug repositories and mirrors website thanks for filing a new issue on tigeros to help us help you please use this template for filing your bug feature request or other topic if you use this template it helps the developers review your ticket and figure out the problem if you don t use this template we may close your issue as not enough information summary since repackaging dash to dock an official rpm has been created for this package thus our repackage is now no longer necessary choose the type of issue you are filing you can choose one by typing in one of the fields for example if a bug report change the line below to… bug report this issue is a… bug report feature request other issue question describe the issue feature in sentences details the builder ritlug com website also currently hosts the repkg dash to dock this can be removed due to the new tigeros dash to dock package no longer needing this repackage if you have other details to include like screenshots stacktraces or something more detailed please include it here if you have a long stacktrace do not paste it here please use pastebin and add a link here phew all done thank you so much for filing a new issue we ll try to get back to you soon
| 1
|
280,015
| 8,677,001,773
|
IssuesEvent
|
2018-11-30 15:38:16
|
DemocraciaEnRed/leyesabiertas-web
|
https://api.github.com/repos/DemocraciaEnRed/leyesabiertas-web
|
closed
|
Cambiar titulo, bajada y mail oficial
|
priority: high
|
- [x] El nombre de la plataforma debe ser Portal de Leyes Abiertas
- [x] Bajada (texto debajo del titulo): Plataforma de intervención ciudadana en propuestas de ley
- [x] Agregar el mail oficial en contacto e info estática
|
1.0
|
Cambiar titulo, bajada y mail oficial - - [x] El nombre de la plataforma debe ser Portal de Leyes Abiertas
- [x] Bajada (texto debajo del titulo): Plataforma de intervención ciudadana en propuestas de ley
- [x] Agregar el mail oficial en contacto e info estática
|
non_infrastructure
|
cambiar titulo bajada y mail oficial el nombre de la plataforma debe ser portal de leyes abiertas bajada texto debajo del titulo plataforma de intervención ciudadana en propuestas de ley agregar el mail oficial en contacto e info estática
| 0
|
770
| 2,891,875,529
|
IssuesEvent
|
2015-06-15 09:14:52
|
insieme/insieme
|
https://api.github.com/repos/insieme/insieme
|
opened
|
iPic3D integration tests
|
enhancement infrastructure
|
Make the iPic3D code ready for the integration testing framework, create separate task for it on the continuous integration server.
|
1.0
|
iPic3D integration tests - Make the iPic3D code ready for the integration testing framework, create separate task for it on the continuous integration server.
|
infrastructure
|
integration tests make the code ready for the integration testing framework create separate task for it on the continuous integration server
| 1
|
9,939
| 8,257,876,052
|
IssuesEvent
|
2018-09-13 07:19:26
|
raiden-network/raiden
|
https://api.github.com/repos/raiden-network/raiden
|
closed
|
Fix automatic deployment
|
P2 infrastructure
|
## Problem Definition
During the last release we noticed that the automated release system doesn't work correctly.
Needs to be fixed.
Details in the [travis build](https://travis-ci.org/raiden-network/raiden/builds/405474632)
|
1.0
|
Fix automatic deployment - ## Problem Definition
During the last release we noticed that the automated release system doesn't work correctly.
Needs to be fixed.
Details in the [travis build](https://travis-ci.org/raiden-network/raiden/builds/405474632)
|
infrastructure
|
fix automatic deployment problem definition during the last release we noticed that the automated release system doesn t work correctly needs to be fixed details in the
| 1
|
607,428
| 18,782,335,068
|
IssuesEvent
|
2021-11-08 08:29:24
|
code-ready/crc
|
https://api.github.com/repos/code-ready/crc
|
closed
|
[BUG] Unable to upgrade according the documentation with windows tray enabled
|
kind/bug priority/minor status/stale
|
### General information
Tested on downstream environments
## CRC version
```bash
CodeReady Containers version: 1.25.0+0e5748c8
OpenShift version: 4.7.5 (embedded in executable)
```
## CRC config
```bash
- consent-telemetry : no
- enable-experimental-features : true
```
## Host Operating System
```bash
OS Name: Microsoft Windows 10 Pro
OS Version: 10.0.19042 N/A Build 19042
```
### Steps to reproduce
1. crc config set enable-experimental-features true
2. crc setup
2. crc delete
3. trying to update crc binary
### Expected
Binary can be updated with newer version according to the defined steps on [documentation](https://code-ready.github.io/crc/#upgrading-codeready-containers_gsg)
### Actual
Can not copy the new binary (can not delete the previous one due to file lock)

In this scenario a cleanup command is required to destroy the dangling proces
```bash
crc cleanup
```
### Logs
```bash
PS C:\Users\crcqe> crc setup
INFO Checking if podman remote executable is cached
INFO Checking if admin-helper executable is cached
INFO Checking minimum RAM requirements
INFO Checking if running in a shell with administrator rights
INFO Checking Windows 10 release
INFO Checking Windows edition
INFO Checking if Hyper-V is installed and operational
INFO Checking if user is a member of the Hyper-V Administrators group
INFO Checking if Hyper-V service is enabled
INFO Checking if the Hyper-V virtual switch exist
INFO Found Virtual Switch to use: Default Switch
INFO Checking if tray executable is present
INFO Checking if CodeReady Containers daemon is installed
INFO Installing CodeReady Containers daemon
INFO Will run as admin: Create symlink to daemon batch file in start-up folder
INFO Checking if tray is installed
INFO Installing CodeReady Containers tray
INFO Will run as admin: Create symlink to tray in start-up folder
INFO Checking if CRC bundle is extracted in '$HOME/.crc'
INFO Checking if C:\Users\crcqe\.crc\cache\crc_hyperv_4.7.5.crcbundle exists
Your system is correctly setup for using CodeReady Containers, you can now run 'crc start' to start the OpenShift cluster
PS C:\Users\crcqe> crc delete --log-level debug
DEBU CodeReady Containers version: 1.25.0+0e5748c8
DEBU OpenShift version: 4.7.5 (embedded in executable)
DEBU Running 'crc delete'
DEBU Checking file: C:\Users\crcqe\.crc\machines\crc\.crc-exist
Machine does not exist. Use 'crc start' to create it
```
|
1.0
|
[BUG] Unable to upgrade according the documentation with windows tray enabled - ### General information
Tested on downstream environments
## CRC version
```bash
CodeReady Containers version: 1.25.0+0e5748c8
OpenShift version: 4.7.5 (embedded in executable)
```
## CRC config
```bash
- consent-telemetry : no
- enable-experimental-features : true
```
## Host Operating System
```bash
OS Name: Microsoft Windows 10 Pro
OS Version: 10.0.19042 N/A Build 19042
```
### Steps to reproduce
1. crc config set enable-experimental-features true
2. crc setup
2. crc delete
3. trying to update crc binary
### Expected
Binary can be updated with newer version according to the defined steps on [documentation](https://code-ready.github.io/crc/#upgrading-codeready-containers_gsg)
### Actual
Can not copy the new binary (can not delete the previous one due to file lock)

In this scenario a cleanup command is required to destroy the dangling proces
```bash
crc cleanup
```
### Logs
```bash
PS C:\Users\crcqe> crc setup
INFO Checking if podman remote executable is cached
INFO Checking if admin-helper executable is cached
INFO Checking minimum RAM requirements
INFO Checking if running in a shell with administrator rights
INFO Checking Windows 10 release
INFO Checking Windows edition
INFO Checking if Hyper-V is installed and operational
INFO Checking if user is a member of the Hyper-V Administrators group
INFO Checking if Hyper-V service is enabled
INFO Checking if the Hyper-V virtual switch exist
INFO Found Virtual Switch to use: Default Switch
INFO Checking if tray executable is present
INFO Checking if CodeReady Containers daemon is installed
INFO Installing CodeReady Containers daemon
INFO Will run as admin: Create symlink to daemon batch file in start-up folder
INFO Checking if tray is installed
INFO Installing CodeReady Containers tray
INFO Will run as admin: Create symlink to tray in start-up folder
INFO Checking if CRC bundle is extracted in '$HOME/.crc'
INFO Checking if C:\Users\crcqe\.crc\cache\crc_hyperv_4.7.5.crcbundle exists
Your system is correctly setup for using CodeReady Containers, you can now run 'crc start' to start the OpenShift cluster
PS C:\Users\crcqe> crc delete --log-level debug
DEBU CodeReady Containers version: 1.25.0+0e5748c8
DEBU OpenShift version: 4.7.5 (embedded in executable)
DEBU Running 'crc delete'
DEBU Checking file: C:\Users\crcqe\.crc\machines\crc\.crc-exist
Machine does not exist. Use 'crc start' to create it
```
|
non_infrastructure
|
unable to upgrade according the documentation with windows tray enabled general information tested on downstream environments crc version bash codeready containers version openshift version embedded in executable crc config bash consent telemetry no enable experimental features true host operating system bash os name microsoft windows pro os version n a build steps to reproduce crc config set enable experimental features true crc setup crc delete trying to update crc binary expected binary can be updated with newer version according to the defined steps on actual can not copy the new binary can not delete the previous one due to file lock in this scenario a cleanup command is required to destroy the dangling proces bash crc cleanup logs bash ps c users crcqe crc setup info checking if podman remote executable is cached info checking if admin helper executable is cached info checking minimum ram requirements info checking if running in a shell with administrator rights info checking windows release info checking windows edition info checking if hyper v is installed and operational info checking if user is a member of the hyper v administrators group info checking if hyper v service is enabled info checking if the hyper v virtual switch exist info found virtual switch to use default switch info checking if tray executable is present info checking if codeready containers daemon is installed info installing codeready containers daemon info will run as admin create symlink to daemon batch file in start up folder info checking if tray is installed info installing codeready containers tray info will run as admin create symlink to tray in start up folder info checking if crc bundle is extracted in home crc info checking if c users crcqe crc cache crc hyperv crcbundle exists your system is correctly setup for using codeready containers you can now run crc start to start the openshift cluster ps c users crcqe crc delete log level debug debu codeready containers version debu openshift version embedded in executable debu running crc delete debu checking file c users crcqe crc machines crc crc exist machine does not exist use crc start to create it
| 0
|
874
| 2,984,923,265
|
IssuesEvent
|
2015-07-18 13:48:04
|
hackndev/zinc
|
https://api.github.com/repos/hackndev/zinc
|
closed
|
Modify examples to be dedicated crates
|
cleanup infrastructure nightly fallout
|
As a followup to #330, we need to refactor all the examples to be dedicated crates. This also shows how hard zinc is to use for external users. I'd expect a zinc app to be just one more crate. It is unreasonable to expect the users to download zinc source and add a new "example" entry.
* [x] [blink](https://github.com/farcaller/zinc/commit/23ba2d49d214d4f45e7ae2a14e2280072bab441d) in #318
* [x] blink_k20
* [x] blink_k20_isr
* [x] blink_lpc17xx
* [x] blink_pt
* [x] blink_stm32f4
* [x] blink_stm32l1
* [x] blink_tiva_c
* [x] bluenrg_stm32l1
* [x] dht22
* [x] empty
* [x] lcd_tiva_c
* [x] uart
* [x] uart_tiva_c
* [x] usart_stm32l1
|
1.0
|
Modify examples to be dedicated crates - As a followup to #330, we need to refactor all the examples to be dedicated crates. This also shows how hard zinc is to use for external users. I'd expect a zinc app to be just one more crate. It is unreasonable to expect the users to download zinc source and add a new "example" entry.
* [x] [blink](https://github.com/farcaller/zinc/commit/23ba2d49d214d4f45e7ae2a14e2280072bab441d) in #318
* [x] blink_k20
* [x] blink_k20_isr
* [x] blink_lpc17xx
* [x] blink_pt
* [x] blink_stm32f4
* [x] blink_stm32l1
* [x] blink_tiva_c
* [x] bluenrg_stm32l1
* [x] dht22
* [x] empty
* [x] lcd_tiva_c
* [x] uart
* [x] uart_tiva_c
* [x] usart_stm32l1
|
infrastructure
|
modify examples to be dedicated crates as a followup to we need to refactor all the examples to be dedicated crates this also shows how hard zinc is to use for external users i d expect a zinc app to be just one more crate it is unreasonable to expect the users to download zinc source and add a new example entry in blink blink isr blink blink pt blink blink blink tiva c bluenrg empty lcd tiva c uart uart tiva c usart
| 1
|
14,952
| 3,907,998,169
|
IssuesEvent
|
2016-04-19 14:37:49
|
plk/biblatex
|
https://api.github.com/repos/plk/biblatex
|
closed
|
Add a "quick start" guide to the manual
|
documentation enhancement
|
Just documenting another item on the to-do list. Any suggestions for the format or content would be welcome here.
|
1.0
|
Add a "quick start" guide to the manual - Just documenting another item on the to-do list. Any suggestions for the format or content would be welcome here.
|
non_infrastructure
|
add a quick start guide to the manual just documenting another item on the to do list any suggestions for the format or content would be welcome here
| 0
|
4,133
| 4,836,653,200
|
IssuesEvent
|
2016-11-08 20:13:26
|
devtools-html/debugger.html
|
https://api.github.com/repos/devtools-html/debugger.html
|
closed
|
`npm run firefox` is not starting firefox with --start-debugger-server
|
infrastructure
|
When we upgraded selenium + geckodriver, we stopped passing _--start-debugger-server_ into the firefox command.
I created this issue with `geckodriver` to follow up earlier today and will look into it tomorrow https://github.com/mozilla/geckodriver/issues/260. The solution should be a fairly simple api change.
|
1.0
|
`npm run firefox` is not starting firefox with --start-debugger-server - When we upgraded selenium + geckodriver, we stopped passing _--start-debugger-server_ into the firefox command.
I created this issue with `geckodriver` to follow up earlier today and will look into it tomorrow https://github.com/mozilla/geckodriver/issues/260. The solution should be a fairly simple api change.
|
infrastructure
|
npm run firefox is not starting firefox with start debugger server when we upgraded selenium geckodriver we stopped passing start debugger server into the firefox command i created this issue with geckodriver to follow up earlier today and will look into it tomorrow the solution should be a fairly simple api change
| 1
|
7,864
| 7,114,538,065
|
IssuesEvent
|
2018-01-18 01:17:26
|
dotnet/corefx
|
https://api.github.com/repos/dotnet/corefx
|
closed
|
init-tools.cmd hangs when running out of disk space
|
area-Infrastructure enhancement
|
After open the cmd file,I just get the following info.
Installing dotnet cli...
I have checked the log file
> Running init-tools.cmd
> Installing 'https://dotnetcli.azureedge.net/dotnet/Sdk/2.0.0/dotnet-sdk-2.0.0-win-x64.zip' to 'D:\ChuckLu\Git\GitHub\dotnet\corefx\Tools\dotnetcli\dotnet-sdk-2.0.0-win-x64.zip'
>
|
1.0
|
init-tools.cmd hangs when running out of disk space - After open the cmd file,I just get the following info.
Installing dotnet cli...
I have checked the log file
> Running init-tools.cmd
> Installing 'https://dotnetcli.azureedge.net/dotnet/Sdk/2.0.0/dotnet-sdk-2.0.0-win-x64.zip' to 'D:\ChuckLu\Git\GitHub\dotnet\corefx\Tools\dotnetcli\dotnet-sdk-2.0.0-win-x64.zip'
>
|
infrastructure
|
init tools cmd hangs when running out of disk space after open the cmd file i just get the following info installing dotnet cli i have checked the log file running init tools cmd installing to d chucklu git github dotnet corefx tools dotnetcli dotnet sdk win zip
| 1
|
35,699
| 32,050,369,892
|
IssuesEvent
|
2023-09-23 13:30:30
|
IntelPython/dpctl
|
https://api.github.com/repos/IntelPython/dpctl
|
opened
|
Implement GH action to purge dppy/label/dev of old artifacts
|
infrastructure
|
The `dppy/label/dev` channel often runs out of space.
It would be useful to also have a cron-scheduled GH action to purge old artifacts from the channel.
|
1.0
|
Implement GH action to purge dppy/label/dev of old artifacts - The `dppy/label/dev` channel often runs out of space.
It would be useful to also have a cron-scheduled GH action to purge old artifacts from the channel.
|
infrastructure
|
implement gh action to purge dppy label dev of old artifacts the dppy label dev channel often runs out of space it would be useful to also have a cron scheduled gh action to purge old artifacts from the channel
| 1
|
6,523
| 6,495,665,193
|
IssuesEvent
|
2017-08-22 06:48:41
|
SatelliteQE/robottelo
|
https://api.github.com/repos/SatelliteQE/robottelo
|
closed
|
Turn RHEL image name constants into robottelo.properties' settings
|
6.1 6.2 6.3 enhancement Infrastructure RFE
|
We need to have it because of Vault Requests and we dont want to change the code seamlessly with evry RHEL dot release.
Settings is the right way.
Related changes in robottelo-ci are covered by https://github.com/SatelliteQE/robottelo-ci/issues/497
|
1.0
|
Turn RHEL image name constants into robottelo.properties' settings - We need to have it because of Vault Requests and we dont want to change the code seamlessly with evry RHEL dot release.
Settings is the right way.
Related changes in robottelo-ci are covered by https://github.com/SatelliteQE/robottelo-ci/issues/497
|
infrastructure
|
turn rhel image name constants into robottelo properties settings we need to have it because of vault requests and we dont want to change the code seamlessly with evry rhel dot release settings is the right way related changes in robottelo ci are covered by
| 1
|
8,346
| 7,349,200,502
|
IssuesEvent
|
2018-03-08 09:50:55
|
outcobra/outstanding-cobra
|
https://api.github.com/repos/outcobra/outstanding-cobra
|
opened
|
Security audit
|
M-C-backend M-C-infrastructure P-3-medium T-task
|
We should perform a quick security audit for our application. Including manual and automated testing (e.g. [Vega Report](https://subgraph.com/vega/index.en.html)).
The servers are already being scanned weekly by OpenVAS/Greenbone and issues fixed accordingly.
|
1.0
|
Security audit - We should perform a quick security audit for our application. Including manual and automated testing (e.g. [Vega Report](https://subgraph.com/vega/index.en.html)).
The servers are already being scanned weekly by OpenVAS/Greenbone and issues fixed accordingly.
|
infrastructure
|
security audit we should perform a quick security audit for our application including manual and automated testing e g the servers are already being scanned weekly by openvas greenbone and issues fixed accordingly
| 1
|
70,320
| 3,322,382,569
|
IssuesEvent
|
2015-11-09 14:17:16
|
ow2-proactive/studio
|
https://api.github.com/repos/ow2-proactive/studio
|
opened
|
Drag&Drop fails sometime if make it slowly.
|
priority:minor
|
Drag&Drop fails sometime if make it slowly.
The tasks dropdown (id="task-menu") in the studio is closed every time the function isConnected (studio-client.js) is triggered.
We should prevent this behaviour because when you don’t know the interface, you make it slowly and this is disturbing.
|
1.0
|
Drag&Drop fails sometime if make it slowly. - Drag&Drop fails sometime if make it slowly.
The tasks dropdown (id="task-menu") in the studio is closed every time the function isConnected (studio-client.js) is triggered.
We should prevent this behaviour because when you don’t know the interface, you make it slowly and this is disturbing.
|
non_infrastructure
|
drag drop fails sometime if make it slowly drag drop fails sometime if make it slowly the tasks dropdown id task menu in the studio is closed every time the function isconnected studio client js is triggered we should prevent this behaviour because when you don’t know the interface you make it slowly and this is disturbing
| 0
|
822,523
| 30,876,241,248
|
IssuesEvent
|
2023-08-03 14:27:41
|
etro-js/etro
|
https://api.github.com/repos/etro-js/etro
|
opened
|
Add `onDraw` option to `Movie.record()`
|
type:feature priority:medium
|
This optional user-provided callback should run at the end of every call to `Movie._render()`
|
1.0
|
Add `onDraw` option to `Movie.record()` - This optional user-provided callback should run at the end of every call to `Movie._render()`
|
non_infrastructure
|
add ondraw option to movie record this optional user provided callback should run at the end of every call to movie render
| 0
|
92,437
| 8,364,005,818
|
IssuesEvent
|
2018-10-03 21:20:37
|
bokeh/bokeh
|
https://api.github.com/repos/bokeh/bokeh
|
closed
|
verify_all() doesn't give information what failed
|
tag: component: tests type: bug
|
This is the output from `py.test`:
```
================================================================= FAILURES ==================================================================
_________________________________________________________ Test___all__.test___all__ _________________________________________________________
self = <bokeh._testing.util.api.verify_all.<locals>.Test___all__ object at 0x7f2107dbab70>
def test___all__(self):
if isinstance(module, string_types):
mod = importlib.import_module(module)
else:
mod = module
assert hasattr(mod, "__all__")
> assert mod.__all__ == ALL
E AssertionError
bokeh/_testing/util/api.py:52: AssertionError
```
I don't know what's the origin of failure and what's the difference. Running py.test with `-vv` helps to establish the offending file. To fix this, either `test__all__` has to be implemented, so that it reports the use-site (not the implementation site), or assertions should have informative error messages.
|
1.0
|
verify_all() doesn't give information what failed - This is the output from `py.test`:
```
================================================================= FAILURES ==================================================================
_________________________________________________________ Test___all__.test___all__ _________________________________________________________
self = <bokeh._testing.util.api.verify_all.<locals>.Test___all__ object at 0x7f2107dbab70>
def test___all__(self):
if isinstance(module, string_types):
mod = importlib.import_module(module)
else:
mod = module
assert hasattr(mod, "__all__")
> assert mod.__all__ == ALL
E AssertionError
bokeh/_testing/util/api.py:52: AssertionError
```
I don't know what's the origin of failure and what's the difference. Running py.test with `-vv` helps to establish the offending file. To fix this, either `test__all__` has to be implemented, so that it reports the use-site (not the implementation site), or assertions should have informative error messages.
|
non_infrastructure
|
verify all doesn t give information what failed this is the output from py test failures test all test all self test all object at def test all self if isinstance module string types mod importlib import module module else mod module assert hasattr mod all assert mod all all e assertionerror bokeh testing util api py assertionerror i don t know what s the origin of failure and what s the difference running py test with vv helps to establish the offending file to fix this either test all has to be implemented so that it reports the use site not the implementation site or assertions should have informative error messages
| 0
|
343
| 2,652,902,403
|
IssuesEvent
|
2015-03-16 19:58:03
|
mroth/emojitrack-web
|
https://api.github.com/repos/mroth/emojitrack-web
|
opened
|
admin pages bootstrap 3 transition
|
infrastructure
|
_From @mroth on March 27, 2014 0:4_
and redesign a little to be more legible on mobile, so i can check up on things remotely more effectively
_Copied from original issue: mroth/emojitrack#28_
|
1.0
|
admin pages bootstrap 3 transition - _From @mroth on March 27, 2014 0:4_
and redesign a little to be more legible on mobile, so i can check up on things remotely more effectively
_Copied from original issue: mroth/emojitrack#28_
|
infrastructure
|
admin pages bootstrap transition from mroth on march and redesign a little to be more legible on mobile so i can check up on things remotely more effectively copied from original issue mroth emojitrack
| 1
|
4,224
| 3,003,352,068
|
IssuesEvent
|
2015-07-24 23:05:23
|
ash-lang/ash
|
https://api.github.com/repos/ash-lang/ash
|
opened
|
Default constructor body and super-class constructor calls.
|
analysis code-gen grammar proposal
|
If a class uses a default constructor and its superclass has a non-empty constructor, one of the superclass constructors must be called.
```
class Person(name : String, age : int)
class Student(name : String, age : int, year : int) : Person(name, age)
```
Add a `construct` keyword that allows a class with a default constructor to execute code when the default constructor is called and after the fields have been assigned.
```
class Person(name : String, age : int) {
construct {
println("My default constructor was called!")
}
}
|
1.0
|
Default constructor body and super-class constructor calls. - If a class uses a default constructor and its superclass has a non-empty constructor, one of the superclass constructors must be called.
```
class Person(name : String, age : int)
class Student(name : String, age : int, year : int) : Person(name, age)
```
Add a `construct` keyword that allows a class with a default constructor to execute code when the default constructor is called and after the fields have been assigned.
```
class Person(name : String, age : int) {
construct {
println("My default constructor was called!")
}
}
|
non_infrastructure
|
default constructor body and super class constructor calls if a class uses a default constructor and its superclass has a non empty constructor one of the superclass constructors must be called class person name string age int class student name string age int year int person name age add a construct keyword that allows a class with a default constructor to execute code when the default constructor is called and after the fields have been assigned class person name string age int construct println my default constructor was called
| 0
|
398,551
| 27,200,762,735
|
IssuesEvent
|
2023-02-20 09:33:13
|
acikkaynak/afetharita-roadmap
|
https://api.github.com/repos/acikkaynak/afetharita-roadmap
|
opened
|
[ACT]: Documenting achievements, could have been better and problems sections
|
documentation action
|
## Description
According to the decision that has been made at the [meeting](https://github.com/acikkaynak/afetharita-roadmap/blob/main/Notes/Meetings/20230219.md) documentation for the below three sections should have been completed.
1. In the short term - What did we achieve?
2. What could’ve done better?
3. What kind of problems we had?
## Items to Complete
- [x] In the short term - What did we achieve?
- [x] What could’ve done better?
- [x] What kind of problems we had?
## Supporting Information (Optional)
https://github.com/acikkaynak/afetharita-roadmap/wiki/Mapping-the-Disaster:-The-Story-of-Afet-Harita
|
1.0
|
[ACT]: Documenting achievements, could have been better and problems sections - ## Description
According to the decision that has been made at the [meeting](https://github.com/acikkaynak/afetharita-roadmap/blob/main/Notes/Meetings/20230219.md) documentation for the below three sections should have been completed.
1. In the short term - What did we achieve?
2. What could’ve done better?
3. What kind of problems we had?
## Items to Complete
- [x] In the short term - What did we achieve?
- [x] What could’ve done better?
- [x] What kind of problems we had?
## Supporting Information (Optional)
https://github.com/acikkaynak/afetharita-roadmap/wiki/Mapping-the-Disaster:-The-Story-of-Afet-Harita
|
non_infrastructure
|
documenting achievements could have been better and problems sections description according to the decision that has been made at the documentation for the below three sections should have been completed in the short term what did we achieve what could’ve done better what kind of problems we had items to complete in the short term what did we achieve what could’ve done better what kind of problems we had supporting information optional
| 0
|
126,209
| 4,974,148,686
|
IssuesEvent
|
2016-12-06 04:50:21
|
kduske/TrenchBroom
|
https://api.github.com/repos/kduske/TrenchBroom
|
reopened
|
Copy Paste Operation Causes Grid Misalignment
|
bug Platform:All Priority:Medium
|
Steps to reproduce:
1) New map.
2) Create a 16 unit cube at the edge of the starter brush.
3) Copy the 16 unit cube.
4) Paste the 16 unit cube.
The pasted cube will be misaligned and you will need to lower the grid size to position it flush against the starter brush. If you use the duplication operation, the new brush aligns just fine.
TrenchBroom 2.0.0 Beta Build 2f3c498 RelWithDebInfo
As always, ignore if already reported.
|
1.0
|
Copy Paste Operation Causes Grid Misalignment - Steps to reproduce:
1) New map.
2) Create a 16 unit cube at the edge of the starter brush.
3) Copy the 16 unit cube.
4) Paste the 16 unit cube.
The pasted cube will be misaligned and you will need to lower the grid size to position it flush against the starter brush. If you use the duplication operation, the new brush aligns just fine.
TrenchBroom 2.0.0 Beta Build 2f3c498 RelWithDebInfo
As always, ignore if already reported.
|
non_infrastructure
|
copy paste operation causes grid misalignment steps to reproduce new map create a unit cube at the edge of the starter brush copy the unit cube paste the unit cube the pasted cube will be misaligned and you will need to lower the grid size to position it flush against the starter brush if you use the duplication operation the new brush aligns just fine trenchbroom beta build relwithdebinfo as always ignore if already reported
| 0
|
101,082
| 30,863,061,675
|
IssuesEvent
|
2023-08-03 05:44:56
|
vuejs/vitepress
|
https://api.github.com/repos/vuejs/vitepress
|
closed
|
outDir logic is too confusing now
|
bug build
|
### Describe the bug
I'm trying to build a site in a custom folder and noticed several issues.
My site is located in the folder `sites/mysite.com`.
When I run following command in the root of my project:
```
npx vitepress build sites/mysite.com --outDir public
```
Instead of writing to ${workplaceFolder}/public it actually still resolves outDir relatively to sites/mySite.com so to make it working I need to use currently `../../public` or `$(pwd)/public` which are both too confusing because from CLI call it looks like i write to something above.
My suggestion is that relative path needs to be resolved relatively to cwd, not a docs folder.
But even like that what I find even more strange - this setting only impacts assets, while actual html pages are still located in the .vitepress/dist folder. Do you know how to fix that too? THanks!
### Reproduction
Just create a nested project like sites/test.site and try to build it to a public/test.site folder in your root.
### Expected behavior
- command like `vitepress build path/to/my/site --outDir public` resolves to a public folder in your root - not in the package.
- html pages should be also built respectively to outDir parameter
### System Info
```sh
System:
OS: Linux 5.15 Debian GNU/Linux 11 (bullseye) 11 (bullseye)
CPU: (12) x64 12th Gen Intel(R) Core(TM) i7-1265U
Memory: 11.75 GB / 15.34 GB
Container: Yes
Shell: 5.1.4 - /bin/bash
Binaries:
Node: 20.3.1 - /usr/local/bin/node
Yarn: 1.22.19 - /usr/local/bin/yarn
npm: 9.6.7 - /usr/local/bin/npm
pnpm: 8.6.6 - /usr/local/share/npm-global/bin/pnpm
npmPackages:
vitepress: ^1.0.0-beta.6 => 1.0.0-beta.6
```
### Additional context
_No response_
### Validations
- [X] Check if you're on the [latest VitePress version](https://github.com/vuejs/vitepress/releases/latest).
- [X] Follow our [Code of Conduct](https://vuejs.org/about/coc.html)
- [X] Read the [docs](https://vitepress.dev).
- [X] Check that there isn't already an issue that reports the same bug to avoid creating a duplicate.
|
1.0
|
outDir logic is too confusing now - ### Describe the bug
I'm trying to build a site in a custom folder and noticed several issues.
My site is located in the folder `sites/mysite.com`.
When I run following command in the root of my project:
```
npx vitepress build sites/mysite.com --outDir public
```
Instead of writing to ${workplaceFolder}/public it actually still resolves outDir relatively to sites/mySite.com so to make it working I need to use currently `../../public` or `$(pwd)/public` which are both too confusing because from CLI call it looks like i write to something above.
My suggestion is that relative path needs to be resolved relatively to cwd, not a docs folder.
But even like that what I find even more strange - this setting only impacts assets, while actual html pages are still located in the .vitepress/dist folder. Do you know how to fix that too? THanks!
### Reproduction
Just create a nested project like sites/test.site and try to build it to a public/test.site folder in your root.
### Expected behavior
- command like `vitepress build path/to/my/site --outDir public` resolves to a public folder in your root - not in the package.
- html pages should be also built respectively to outDir parameter
### System Info
```sh
System:
OS: Linux 5.15 Debian GNU/Linux 11 (bullseye) 11 (bullseye)
CPU: (12) x64 12th Gen Intel(R) Core(TM) i7-1265U
Memory: 11.75 GB / 15.34 GB
Container: Yes
Shell: 5.1.4 - /bin/bash
Binaries:
Node: 20.3.1 - /usr/local/bin/node
Yarn: 1.22.19 - /usr/local/bin/yarn
npm: 9.6.7 - /usr/local/bin/npm
pnpm: 8.6.6 - /usr/local/share/npm-global/bin/pnpm
npmPackages:
vitepress: ^1.0.0-beta.6 => 1.0.0-beta.6
```
### Additional context
_No response_
### Validations
- [X] Check if you're on the [latest VitePress version](https://github.com/vuejs/vitepress/releases/latest).
- [X] Follow our [Code of Conduct](https://vuejs.org/about/coc.html)
- [X] Read the [docs](https://vitepress.dev).
- [X] Check that there isn't already an issue that reports the same bug to avoid creating a duplicate.
|
non_infrastructure
|
outdir logic is too confusing now describe the bug i m trying to build a site in a custom folder and noticed several issues my site is located in the folder sites mysite com when i run following command in the root of my project npx vitepress build sites mysite com outdir public instead of writing to workplacefolder public it actually still resolves outdir relatively to sites mysite com so to make it working i need to use currently public or pwd public which are both too confusing because from cli call it looks like i write to something above my suggestion is that relative path needs to be resolved relatively to cwd not a docs folder but even like that what i find even more strange this setting only impacts assets while actual html pages are still located in the vitepress dist folder do you know how to fix that too thanks reproduction just create a nested project like sites test site and try to build it to a public test site folder in your root expected behavior command like vitepress build path to my site outdir public resolves to a public folder in your root not in the package html pages should be also built respectively to outdir parameter system info sh system os linux debian gnu linux bullseye bullseye cpu gen intel r core tm memory gb gb container yes shell bin bash binaries node usr local bin node yarn usr local bin yarn npm usr local bin npm pnpm usr local share npm global bin pnpm npmpackages vitepress beta beta additional context no response validations check if you re on the follow our read the check that there isn t already an issue that reports the same bug to avoid creating a duplicate
| 0
|
43,781
| 7,064,997,277
|
IssuesEvent
|
2018-01-06 14:43:28
|
jekyll/jekyll
|
https://api.github.com/repos/jekyll/jekyll
|
closed
|
Header on jekyllrb.com doesn't link to new release 3.7.0
|
documentation
|
Hey,
just noticed that the header still shows and links to the previous release `3.6.2`. Locally everything works fine, i assume the docs site just has to be regenerated? Is there a way to start a rebuild without a commit?
<img width="1504" alt="screen shot 2018-01-06 at 11 23 28" src="https://user-images.githubusercontent.com/570608/34639234-1034d166-f2d4-11e7-8319-b7f526d053fe.png">
cc: @jekyll/documentation
|
1.0
|
Header on jekyllrb.com doesn't link to new release 3.7.0 - Hey,
just noticed that the header still shows and links to the previous release `3.6.2`. Locally everything works fine, i assume the docs site just has to be regenerated? Is there a way to start a rebuild without a commit?
<img width="1504" alt="screen shot 2018-01-06 at 11 23 28" src="https://user-images.githubusercontent.com/570608/34639234-1034d166-f2d4-11e7-8319-b7f526d053fe.png">
cc: @jekyll/documentation
|
non_infrastructure
|
header on jekyllrb com doesn t link to new release hey just noticed that the header still shows and links to the previous release locally everything works fine i assume the docs site just has to be regenerated is there a way to start a rebuild without a commit img width alt screen shot at src cc jekyll documentation
| 0
|
261,821
| 8,246,381,973
|
IssuesEvent
|
2018-09-11 12:47:14
|
dojot/dojot
|
https://api.github.com/repos/dojot/dojot
|
opened
|
GUI - Usability problem when creating a new flow
|
Priority:Medium Team:Frontend Type:Bug
|
The scroll bar does not reach the bottom of the screen. Some nodes are not shown (eg geofence).

maximized window:

**Affected Version**: v0.3.0-beta1 (0.3.0-nightly_20180807)
|
1.0
|
GUI - Usability problem when creating a new flow - The scroll bar does not reach the bottom of the screen. Some nodes are not shown (eg geofence).

maximized window:

**Affected Version**: v0.3.0-beta1 (0.3.0-nightly_20180807)
|
non_infrastructure
|
gui usability problem when creating a new flow the scroll bar does not reach the bottom of the screen some nodes are not shown eg geofence maximized window affected version nightly
| 0
|
30,109
| 24,546,214,076
|
IssuesEvent
|
2022-10-12 08:59:52
|
nf-core/tools
|
https://api.github.com/repos/nf-core/tools
|
closed
|
Make `check_up_to_date()` to check for subworkflows also.
|
enhancement infrastructure
|
### Description of feature
The `check_up_to_date()` function in [modules_json.py](https://github.com/nf-core/tools/blob/dec66abe1c36a8975a952e1f80f045cab65bbf72/nf_core/modules/modules_json.py#L439) is only checking for modules. We need to update the function so it also checks `subworkflows`.
|
1.0
|
Make `check_up_to_date()` to check for subworkflows also. - ### Description of feature
The `check_up_to_date()` function in [modules_json.py](https://github.com/nf-core/tools/blob/dec66abe1c36a8975a952e1f80f045cab65bbf72/nf_core/modules/modules_json.py#L439) is only checking for modules. We need to update the function so it also checks `subworkflows`.
|
infrastructure
|
make check up to date to check for subworkflows also description of feature the check up to date function in is only checking for modules we need to update the function so it also checks subworkflows
| 1
|
29,443
| 24,015,048,138
|
IssuesEvent
|
2022-09-14 23:12:16
|
dotnet/runtime
|
https://api.github.com/repos/dotnet/runtime
|
opened
|
[Mono][Codespace] Add mono desktop build optioins
|
area-Infrastructure-mono
|
Scenarios to add:
- mono+libs
- mono+libs /p:MonoEnableLlvm=true
|
1.0
|
[Mono][Codespace] Add mono desktop build optioins - Scenarios to add:
- mono+libs
- mono+libs /p:MonoEnableLlvm=true
|
infrastructure
|
add mono desktop build optioins scenarios to add mono libs mono libs p monoenablellvm true
| 1
|
34,542
| 30,114,621,282
|
IssuesEvent
|
2023-06-30 10:25:38
|
kuznia-rdzeni/coreblocks
|
https://api.github.com/repos/kuznia-rdzeni/coreblocks
|
opened
|
Synthesis benchmark for full core
|
infrastructure
|
Currently, only the basic core is measured. We should do this for full core also, maybe synthesize both.
|
1.0
|
Synthesis benchmark for full core - Currently, only the basic core is measured. We should do this for full core also, maybe synthesize both.
|
infrastructure
|
synthesis benchmark for full core currently only the basic core is measured we should do this for full core also maybe synthesize both
| 1
|
35,517
| 31,780,864,790
|
IssuesEvent
|
2023-09-12 17:19:59
|
finos/FDC3
|
https://api.github.com/repos/finos/FDC3
|
opened
|
Update tsdx version in repo to resolve 17 moderate vulnerabilities
|
help wanted good first issue api project infrastructure
|
### Area of Issue
[x] API
Upgrading tsdx to 0.13.3 would resolve 17 moderate vulnerabilities in the FDC3 repo - but is a breaking change. I'm not sure what upgrade steps are required.
### npm audit report
```
jsdom <=16.5.3
Severity: moderate
Insufficient Granularity of Access Control in JSDom - https://github.com/advisories/GHSA-f4c9-cqv8-9v98
Depends on vulnerable versions of request
Depends on vulnerable versions of request-promise-native
Depends on vulnerable versions of tough-cookie
fix available via `npm audit fix --force`
Will install tsdx@0.13.3, which is a breaking change
node_modules/jsdom
jest-environment-jsdom 10.0.2 - 25.5.0
Depends on vulnerable versions of jsdom
node_modules/jest-environment-jsdom
jest-config 12.1.1-alpha.2935e14d - 25.5.4
Depends on vulnerable versions of @jest/test-sequencer
Depends on vulnerable versions of jest-environment-jsdom
Depends on vulnerable versions of jest-jasmine2
node_modules/jest-config
jest-cli 12.1.1-alpha.2935e14d || 12.1.2-alpha.6230044c - 25.5.4
Depends on vulnerable versions of @jest/core
Depends on vulnerable versions of jest-config
node_modules/jest-cli
jest 12.1.2-alpha.6230044c - 25.5.4
Depends on vulnerable versions of @jest/core
Depends on vulnerable versions of jest-cli
node_modules/jest
tsdx >=0.14.0
Depends on vulnerable versions of jest
node_modules/tsdx
jest-runner 21.0.0-alpha.1 - 25.5.4
Depends on vulnerable versions of jest-config
Depends on vulnerable versions of jest-jasmine2
Depends on vulnerable versions of jest-runtime
node_modules/jest-runner
@jest/test-sequencer <=25.5.4
Depends on vulnerable versions of jest-runner
Depends on vulnerable versions of jest-runtime
node_modules/@jest/test-sequencer
jest-runtime 12.1.1-alpha.2935e14d - 25.5.4
Depends on vulnerable versions of jest-config
node_modules/jest-runtime
jest-jasmine2 24.2.0-alpha.0 - 25.5.4
Depends on vulnerable versions of jest-runtime
node_modules/jest-jasmine2
node-notifier <8.0.1
Severity: moderate
OS Command Injection in node-notifier - https://github.com/advisories/GHSA-5fw9-fq32-wv5p
fix available via `npm audit fix --force`
Will install tsdx@0.13.3, which is a breaking change
node_modules/node-notifier
@jest/reporters <=26.4.0
Depends on vulnerable versions of node-notifier
node_modules/@jest/reporters
@jest/core <=25.5.4
Depends on vulnerable versions of @jest/reporters
Depends on vulnerable versions of jest-config
Depends on vulnerable versions of jest-runner
Depends on vulnerable versions of jest-runtime
node_modules/@jest/core
request *
Severity: moderate
Server-Side Request Forgery in Request - https://github.com/advisories/GHSA-p8p7-x288-28g6
Depends on vulnerable versions of tough-cookie
fix available via `npm audit fix --force`
Will install tsdx@0.13.3, which is a breaking change
node_modules/request
request-promise-core *
Depends on vulnerable versions of request
node_modules/request-promise-core
request-promise-native >=1.0.0
Depends on vulnerable versions of request
Depends on vulnerable versions of request-promise-core
Depends on vulnerable versions of tough-cookie
node_modules/request-promise-native
tough-cookie <4.1.3
Severity: moderate
tough-cookie Prototype Pollution vulnerability - https://github.com/advisories/GHSA-72xf-g2v4-qvf3
fix available via `npm audit fix --force`
Will install tsdx@0.13.3, which is a breaking change
node_modules/request-promise-native/node_modules/tough-cookie
node_modules/request/node_modules/tough-cookie
node_modules/tough-cookie
17 moderate severity vulnerabilities
```
|
1.0
|
Update tsdx version in repo to resolve 17 moderate vulnerabilities - ### Area of Issue
[x] API
Upgrading tsdx to 0.13.3 would resolve 17 moderate vulnerabilities in the FDC3 repo - but is a breaking change. I'm not sure what upgrade steps are required.
### npm audit report
```
jsdom <=16.5.3
Severity: moderate
Insufficient Granularity of Access Control in JSDom - https://github.com/advisories/GHSA-f4c9-cqv8-9v98
Depends on vulnerable versions of request
Depends on vulnerable versions of request-promise-native
Depends on vulnerable versions of tough-cookie
fix available via `npm audit fix --force`
Will install tsdx@0.13.3, which is a breaking change
node_modules/jsdom
jest-environment-jsdom 10.0.2 - 25.5.0
Depends on vulnerable versions of jsdom
node_modules/jest-environment-jsdom
jest-config 12.1.1-alpha.2935e14d - 25.5.4
Depends on vulnerable versions of @jest/test-sequencer
Depends on vulnerable versions of jest-environment-jsdom
Depends on vulnerable versions of jest-jasmine2
node_modules/jest-config
jest-cli 12.1.1-alpha.2935e14d || 12.1.2-alpha.6230044c - 25.5.4
Depends on vulnerable versions of @jest/core
Depends on vulnerable versions of jest-config
node_modules/jest-cli
jest 12.1.2-alpha.6230044c - 25.5.4
Depends on vulnerable versions of @jest/core
Depends on vulnerable versions of jest-cli
node_modules/jest
tsdx >=0.14.0
Depends on vulnerable versions of jest
node_modules/tsdx
jest-runner 21.0.0-alpha.1 - 25.5.4
Depends on vulnerable versions of jest-config
Depends on vulnerable versions of jest-jasmine2
Depends on vulnerable versions of jest-runtime
node_modules/jest-runner
@jest/test-sequencer <=25.5.4
Depends on vulnerable versions of jest-runner
Depends on vulnerable versions of jest-runtime
node_modules/@jest/test-sequencer
jest-runtime 12.1.1-alpha.2935e14d - 25.5.4
Depends on vulnerable versions of jest-config
node_modules/jest-runtime
jest-jasmine2 24.2.0-alpha.0 - 25.5.4
Depends on vulnerable versions of jest-runtime
node_modules/jest-jasmine2
node-notifier <8.0.1
Severity: moderate
OS Command Injection in node-notifier - https://github.com/advisories/GHSA-5fw9-fq32-wv5p
fix available via `npm audit fix --force`
Will install tsdx@0.13.3, which is a breaking change
node_modules/node-notifier
@jest/reporters <=26.4.0
Depends on vulnerable versions of node-notifier
node_modules/@jest/reporters
@jest/core <=25.5.4
Depends on vulnerable versions of @jest/reporters
Depends on vulnerable versions of jest-config
Depends on vulnerable versions of jest-runner
Depends on vulnerable versions of jest-runtime
node_modules/@jest/core
request *
Severity: moderate
Server-Side Request Forgery in Request - https://github.com/advisories/GHSA-p8p7-x288-28g6
Depends on vulnerable versions of tough-cookie
fix available via `npm audit fix --force`
Will install tsdx@0.13.3, which is a breaking change
node_modules/request
request-promise-core *
Depends on vulnerable versions of request
node_modules/request-promise-core
request-promise-native >=1.0.0
Depends on vulnerable versions of request
Depends on vulnerable versions of request-promise-core
Depends on vulnerable versions of tough-cookie
node_modules/request-promise-native
tough-cookie <4.1.3
Severity: moderate
tough-cookie Prototype Pollution vulnerability - https://github.com/advisories/GHSA-72xf-g2v4-qvf3
fix available via `npm audit fix --force`
Will install tsdx@0.13.3, which is a breaking change
node_modules/request-promise-native/node_modules/tough-cookie
node_modules/request/node_modules/tough-cookie
node_modules/tough-cookie
17 moderate severity vulnerabilities
```
|
infrastructure
|
update tsdx version in repo to resolve moderate vulnerabilities area of issue api upgrading tsdx to would resolve moderate vulnerabilities in the repo but is a breaking change i m not sure what upgrade steps are required npm audit report jsdom severity moderate insufficient granularity of access control in jsdom depends on vulnerable versions of request depends on vulnerable versions of request promise native depends on vulnerable versions of tough cookie fix available via npm audit fix force will install tsdx which is a breaking change node modules jsdom jest environment jsdom depends on vulnerable versions of jsdom node modules jest environment jsdom jest config alpha depends on vulnerable versions of jest test sequencer depends on vulnerable versions of jest environment jsdom depends on vulnerable versions of jest node modules jest config jest cli alpha alpha depends on vulnerable versions of jest core depends on vulnerable versions of jest config node modules jest cli jest alpha depends on vulnerable versions of jest core depends on vulnerable versions of jest cli node modules jest tsdx depends on vulnerable versions of jest node modules tsdx jest runner alpha depends on vulnerable versions of jest config depends on vulnerable versions of jest depends on vulnerable versions of jest runtime node modules jest runner jest test sequencer depends on vulnerable versions of jest runner depends on vulnerable versions of jest runtime node modules jest test sequencer jest runtime alpha depends on vulnerable versions of jest config node modules jest runtime jest alpha depends on vulnerable versions of jest runtime node modules jest node notifier severity moderate os command injection in node notifier fix available via npm audit fix force will install tsdx which is a breaking change node modules node notifier jest reporters depends on vulnerable versions of node notifier node modules jest reporters jest core depends on vulnerable versions of jest reporters depends on vulnerable versions of jest config depends on vulnerable versions of jest runner depends on vulnerable versions of jest runtime node modules jest core request severity moderate server side request forgery in request depends on vulnerable versions of tough cookie fix available via npm audit fix force will install tsdx which is a breaking change node modules request request promise core depends on vulnerable versions of request node modules request promise core request promise native depends on vulnerable versions of request depends on vulnerable versions of request promise core depends on vulnerable versions of tough cookie node modules request promise native tough cookie severity moderate tough cookie prototype pollution vulnerability fix available via npm audit fix force will install tsdx which is a breaking change node modules request promise native node modules tough cookie node modules request node modules tough cookie node modules tough cookie moderate severity vulnerabilities
| 1
|
422,869
| 12,287,490,746
|
IssuesEvent
|
2020-05-09 12:27:18
|
googleapis/elixir-google-api
|
https://api.github.com/repos/googleapis/elixir-google-api
|
opened
|
Synthesis failed for Vision
|
api: vision autosynth failure priority: p1 type: bug
|
Hello! Autosynth couldn't regenerate Vision. :broken_heart:
Here's the output from running `synth.py`:
```
2020-05-09 05:22:11 [INFO] logs will be written to: /tmpfs/src/github/synthtool/logs/googleapis/elixir-google-api
2020-05-09 05:22:11,441 autosynth > logs will be written to: /tmpfs/src/github/synthtool/logs/googleapis/elixir-google-api
Switched to branch 'autosynth-vision'
2020-05-09 05:22:13 [INFO] Running synthtool
2020-05-09 05:22:13,103 autosynth > Running synthtool
2020-05-09 05:22:13 [INFO] ['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/vision/synth.metadata', 'synth.py', '--']
2020-05-09 05:22:13,104 autosynth > ['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/vision/synth.metadata', 'synth.py', '--']
2020-05-09 05:22:13,314 synthtool > Executing /home/kbuilder/.cache/synthtool/elixir-google-api/synth.py.
On branch autosynth-vision
nothing to commit, working tree clean
2020-05-09 05:22:13,657 synthtool > Cloning https://github.com/googleapis/elixir-google-api.git.
2020-05-09 05:22:14,106 synthtool > Running: docker run --rm -v/home/kbuilder/.cache/synthtool/elixir-google-api:/workspace -v/var/run/docker.sock:/var/run/docker.sock -e USER_GROUP=1000:1000 -w /workspace gcr.io/cloud-devrel-public-resources/elixir19 scripts/generate_client.sh Vision
2020-05-09 05:22:18,091 synthtool > No files in sources /home/kbuilder/.cache/synthtool/elixir-google-api/clients were copied. Does the source contain files?
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module>
main()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main
spec.loader.exec_module(synth_module) # type: ignore
File "/tmpfs/src/github/synthtool/synthtool/metadata.py", line 180, in __exit__
write(self.metadata_file_path)
File "/tmpfs/src/github/synthtool/synthtool/metadata.py", line 112, in write
with open(outfile, "w") as fh:
FileNotFoundError: [Errno 2] No such file or directory: 'clients/vision/synth.metadata'
2020-05-09 05:22:18 [ERROR] Synthesis failed
2020-05-09 05:22:18,120 autosynth > Synthesis failed
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 599, in <module>
main()
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 471, in main
return _inner_main(temp_dir)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 549, in _inner_main
).synthesize(base_synth_log_path)
File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 118, in synthesize
synth_proc.check_returncode() # Raise an exception.
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode
self.stderr)
subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/vision/synth.metadata', 'synth.py', '--', 'Vision']' returned non-zero exit status 1.
```
Google internal developers can see the full log [here](https://sponge/11ff3741-9158-4831-8681-fff828f77e1a).
|
1.0
|
Synthesis failed for Vision - Hello! Autosynth couldn't regenerate Vision. :broken_heart:
Here's the output from running `synth.py`:
```
2020-05-09 05:22:11 [INFO] logs will be written to: /tmpfs/src/github/synthtool/logs/googleapis/elixir-google-api
2020-05-09 05:22:11,441 autosynth > logs will be written to: /tmpfs/src/github/synthtool/logs/googleapis/elixir-google-api
Switched to branch 'autosynth-vision'
2020-05-09 05:22:13 [INFO] Running synthtool
2020-05-09 05:22:13,103 autosynth > Running synthtool
2020-05-09 05:22:13 [INFO] ['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/vision/synth.metadata', 'synth.py', '--']
2020-05-09 05:22:13,104 autosynth > ['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/vision/synth.metadata', 'synth.py', '--']
2020-05-09 05:22:13,314 synthtool > Executing /home/kbuilder/.cache/synthtool/elixir-google-api/synth.py.
On branch autosynth-vision
nothing to commit, working tree clean
2020-05-09 05:22:13,657 synthtool > Cloning https://github.com/googleapis/elixir-google-api.git.
2020-05-09 05:22:14,106 synthtool > Running: docker run --rm -v/home/kbuilder/.cache/synthtool/elixir-google-api:/workspace -v/var/run/docker.sock:/var/run/docker.sock -e USER_GROUP=1000:1000 -w /workspace gcr.io/cloud-devrel-public-resources/elixir19 scripts/generate_client.sh Vision
2020-05-09 05:22:18,091 synthtool > No files in sources /home/kbuilder/.cache/synthtool/elixir-google-api/clients were copied. Does the source contain files?
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 102, in <module>
main()
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/tmpfs/src/github/synthtool/env/lib/python3.6/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/tmpfs/src/github/synthtool/synthtool/__main__.py", line 94, in main
spec.loader.exec_module(synth_module) # type: ignore
File "/tmpfs/src/github/synthtool/synthtool/metadata.py", line 180, in __exit__
write(self.metadata_file_path)
File "/tmpfs/src/github/synthtool/synthtool/metadata.py", line 112, in write
with open(outfile, "w") as fh:
FileNotFoundError: [Errno 2] No such file or directory: 'clients/vision/synth.metadata'
2020-05-09 05:22:18 [ERROR] Synthesis failed
2020-05-09 05:22:18,120 autosynth > Synthesis failed
Traceback (most recent call last):
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 599, in <module>
main()
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 471, in main
return _inner_main(temp_dir)
File "/tmpfs/src/github/synthtool/autosynth/synth.py", line 549, in _inner_main
).synthesize(base_synth_log_path)
File "/tmpfs/src/github/synthtool/autosynth/synthesizer.py", line 118, in synthesize
synth_proc.check_returncode() # Raise an exception.
File "/home/kbuilder/.pyenv/versions/3.6.9/lib/python3.6/subprocess.py", line 389, in check_returncode
self.stderr)
subprocess.CalledProcessError: Command '['/tmpfs/src/github/synthtool/env/bin/python3', '-m', 'synthtool', '--metadata', 'clients/vision/synth.metadata', 'synth.py', '--', 'Vision']' returned non-zero exit status 1.
```
Google internal developers can see the full log [here](https://sponge/11ff3741-9158-4831-8681-fff828f77e1a).
|
non_infrastructure
|
synthesis failed for vision hello autosynth couldn t regenerate vision broken heart here s the output from running synth py logs will be written to tmpfs src github synthtool logs googleapis elixir google api autosynth logs will be written to tmpfs src github synthtool logs googleapis elixir google api switched to branch autosynth vision running synthtool autosynth running synthtool autosynth synthtool executing home kbuilder cache synthtool elixir google api synth py on branch autosynth vision nothing to commit working tree clean synthtool cloning synthtool running docker run rm v home kbuilder cache synthtool elixir google api workspace v var run docker sock var run docker sock e user group w workspace gcr io cloud devrel public resources scripts generate client sh vision synthtool no files in sources home kbuilder cache synthtool elixir google api clients were copied does the source contain files traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src github synthtool synthtool main py line in main file tmpfs src github synthtool env lib site packages click core py line in call return self main args kwargs file tmpfs src github synthtool env lib site packages click core py line in main rv self invoke ctx file tmpfs src github synthtool env lib site packages click core py line in invoke return ctx invoke self callback ctx params file tmpfs src github synthtool env lib site packages click core py line in invoke return callback args kwargs file tmpfs src github synthtool synthtool main py line in main spec loader exec module synth module type ignore file tmpfs src github synthtool synthtool metadata py line in exit write self metadata file path file tmpfs src github synthtool synthtool metadata py line in write with open outfile w as fh filenotfounderror no such file or directory clients vision synth metadata synthesis failed autosynth synthesis failed traceback most recent call last file home kbuilder pyenv versions lib runpy py line in run module as main main mod spec file home kbuilder pyenv versions lib runpy py line in run code exec code run globals file tmpfs src github synthtool autosynth synth py line in main file tmpfs src github synthtool autosynth synth py line in main return inner main temp dir file tmpfs src github synthtool autosynth synth py line in inner main synthesize base synth log path file tmpfs src github synthtool autosynth synthesizer py line in synthesize synth proc check returncode raise an exception file home kbuilder pyenv versions lib subprocess py line in check returncode self stderr subprocess calledprocesserror command returned non zero exit status google internal developers can see the full log
| 0
|
160,282
| 6,085,827,064
|
IssuesEvent
|
2017-06-17 18:21:45
|
ReikaKalseki/Reika_Mods_Issues
|
https://api.github.com/repos/ReikaKalseki/Reika_Mods_Issues
|
closed
|
[Chromaticraft] 17c. Infinite Liquids via Teleportation Pump
|
Bug ChromatiCraft Exploit High Priority
|
Hi, i found an exploit for infinite fluids:
an example for Liquid Chroma:
Step 1: Place a Liquid Chroma Puddle in the world.
Step 2: Register them by the teleportation Pump.
Step 3: Collect the Chroma Puddle back in the Bucket.
Step 4: Pump the registered Chroma via teleportation pump.
Step 5: Profit
greets
|
1.0
|
[Chromaticraft] 17c. Infinite Liquids via Teleportation Pump - Hi, i found an exploit for infinite fluids:
an example for Liquid Chroma:
Step 1: Place a Liquid Chroma Puddle in the world.
Step 2: Register them by the teleportation Pump.
Step 3: Collect the Chroma Puddle back in the Bucket.
Step 4: Pump the registered Chroma via teleportation pump.
Step 5: Profit
greets
|
non_infrastructure
|
infinite liquids via teleportation pump hi i found an exploit for infinite fluids an example for liquid chroma step place a liquid chroma puddle in the world step register them by the teleportation pump step collect the chroma puddle back in the bucket step pump the registered chroma via teleportation pump step profit greets
| 0
|
19,796
| 13,458,495,178
|
IssuesEvent
|
2020-09-09 10:44:12
|
telerik/kendo-themes
|
https://api.github.com/repos/telerik/kendo-themes
|
closed
|
SUGGESTION: Add .browserlistrc for our apps to comply with
|
Enhancement infrastructure
|
Bootstrap provides a .browserlistrc at https://github.com/twbs/bootstrap/blob/v4.3.1/.browserslistrc
```
# https://github.com/browserslist/browserslist#readme
>= 1%
last 1 major version
not dead
Chrome >= 45
Firefox >= 38
Edge >= 12
Explorer >= 10
iOS >= 9
Safari >= 9
Android >= 4.4
Opera >= 30
```
This makes it very clear what browser versions Bootstrap supports at any time.
It would be nice to have a .browserlistrc to copy into our apps in order to make sure our apps do not pretend to support browsers that kendo-themes do not support.
|
1.0
|
SUGGESTION: Add .browserlistrc for our apps to comply with - Bootstrap provides a .browserlistrc at https://github.com/twbs/bootstrap/blob/v4.3.1/.browserslistrc
```
# https://github.com/browserslist/browserslist#readme
>= 1%
last 1 major version
not dead
Chrome >= 45
Firefox >= 38
Edge >= 12
Explorer >= 10
iOS >= 9
Safari >= 9
Android >= 4.4
Opera >= 30
```
This makes it very clear what browser versions Bootstrap supports at any time.
It would be nice to have a .browserlistrc to copy into our apps in order to make sure our apps do not pretend to support browsers that kendo-themes do not support.
|
infrastructure
|
suggestion add browserlistrc for our apps to comply with bootstrap provides a browserlistrc at last major version not dead chrome firefox edge explorer ios safari android opera this makes it very clear what browser versions bootstrap supports at any time it would be nice to have a browserlistrc to copy into our apps in order to make sure our apps do not pretend to support browsers that kendo themes do not support
| 1
|
449,974
| 31,879,500,444
|
IssuesEvent
|
2023-09-16 07:37:00
|
gak112/DearjobTesting2
|
https://api.github.com/repos/gak112/DearjobTesting2
|
closed
|
Bug ; DEAR JOB WEB ; Staffing Consultancy ; Home> Hot List >Add Hot List ;Error in Experience
|
documentation invalid
|
Action :- In experience place holder it is accepting more than 100
Expected Output :- It should accept more than 100 years
Actual Output :- Accepting more than 100 yrs

|
1.0
|
Bug ; DEAR JOB WEB ; Staffing Consultancy ; Home> Hot List >Add Hot List ;Error in Experience - Action :- In experience place holder it is accepting more than 100
Expected Output :- It should accept more than 100 years
Actual Output :- Accepting more than 100 yrs

|
non_infrastructure
|
bug dear job web staffing consultancy home hot list add hot list error in experience action in experience place holder it is accepting more than expected output it should accept more than years actual output accepting more than yrs
| 0
|
42,187
| 17,081,900,625
|
IssuesEvent
|
2021-07-08 06:50:20
|
ctripcorp/apollo
|
https://api.github.com/repos/ctripcorp/apollo
|
closed
|
当apollo服务端宕机后,不影响应用正常使用
|
area/client area/configservice kind/question stale
|
**你的特性请求和某个问题有关吗?请描述**
我们公司的apollo支持很多业务使用,有个业务访问量太大,数据量也大,导致server 宕机
这样全公司的net java服务都出错了,影响太大了
**清晰简洁地描述一下你希望的解决方案**
希望当server宕机后,因为服务器下载了服务端缓存(放在opt文件夹下的),这样可以优先使用本地的缓存运行,仅当服务端跟客户端通信连接正常
才进行更新本地缓存的操作
|
1.0
|
当apollo服务端宕机后,不影响应用正常使用 - **你的特性请求和某个问题有关吗?请描述**
我们公司的apollo支持很多业务使用,有个业务访问量太大,数据量也大,导致server 宕机
这样全公司的net java服务都出错了,影响太大了
**清晰简洁地描述一下你希望的解决方案**
希望当server宕机后,因为服务器下载了服务端缓存(放在opt文件夹下的),这样可以优先使用本地的缓存运行,仅当服务端跟客户端通信连接正常
才进行更新本地缓存的操作
|
non_infrastructure
|
当apollo服务端宕机后,不影响应用正常使用 你的特性请求和某个问题有关吗?请描述 我们公司的apollo支持很多业务使用,有个业务访问量太大,数据量也大,导致server 宕机 这样全公司的net java服务都出错了,影响太大了 清晰简洁地描述一下你希望的解决方案 希望当server宕机后,因为服务器下载了服务端缓存(放在opt文件夹下的),这样可以优先使用本地的缓存运行,仅当服务端跟客户端通信连接正常 才进行更新本地缓存的操作
| 0
|
21,366
| 14,541,224,116
|
IssuesEvent
|
2020-12-15 14:19:22
|
google/web-stories-wp
|
https://api.github.com/repos/google/web-stories-wp
|
closed
|
Karma: ensure Google Fonts are loaded for tests and snapshots
|
P2 Pod: WP & Infra Type: Infrastructure Type: Task
|
<!-- NOTE: For help requests, support questions, or general feedback, please use the WordPress.org forums instead: https://wordpress.org/support/plugin/web-stories/ -->
## Task Description
Percy often fails because the web fonts haven't been loaded completely, showing everything in Times New Roman. Let's make this more robust.
|
1.0
|
Karma: ensure Google Fonts are loaded for tests and snapshots - <!-- NOTE: For help requests, support questions, or general feedback, please use the WordPress.org forums instead: https://wordpress.org/support/plugin/web-stories/ -->
## Task Description
Percy often fails because the web fonts haven't been loaded completely, showing everything in Times New Roman. Let's make this more robust.
|
infrastructure
|
karma ensure google fonts are loaded for tests and snapshots task description percy often fails because the web fonts haven t been loaded completely showing everything in times new roman let s make this more robust
| 1
|
243,318
| 20,377,963,768
|
IssuesEvent
|
2022-02-21 17:34:48
|
weaveworks/tf-controller
|
https://api.github.com/repos/weaveworks/tf-controller
|
closed
|
Add a test case for writing output with dots in its name to a secret
|
kind/enhancement area/testing
|
Need a test case to make sure that outputs containing dots, like the following, in their names are allowed:
```yaml
apiVersion: infra.contrib.fluxcd.io/v1alpha1
kind: Terraform
metadata:
name: master-key-tf
namespace: app-01
spec:
interval: 1h
path: ./_artifacts/10-zz-terraform
writeOutputsToSecret:
name: age
outputs:
- age.agekey
```
|
1.0
|
Add a test case for writing output with dots in its name to a secret - Need a test case to make sure that outputs containing dots, like the following, in their names are allowed:
```yaml
apiVersion: infra.contrib.fluxcd.io/v1alpha1
kind: Terraform
metadata:
name: master-key-tf
namespace: app-01
spec:
interval: 1h
path: ./_artifacts/10-zz-terraform
writeOutputsToSecret:
name: age
outputs:
- age.agekey
```
|
non_infrastructure
|
add a test case for writing output with dots in its name to a secret need a test case to make sure that outputs containing dots like the following in their names are allowed yaml apiversion infra contrib fluxcd io kind terraform metadata name master key tf namespace app spec interval path artifacts zz terraform writeoutputstosecret name age outputs age agekey
| 0
|
232,196
| 25,565,421,526
|
IssuesEvent
|
2022-11-30 13:59:00
|
hygieia/hygieia-whitesource-collector
|
https://api.github.com/repos/hygieia/hygieia-whitesource-collector
|
closed
|
CVE-2020-14062 (High) detected in jackson-databind-2.8.11.3.jar - autoclosed
|
wontfix security vulnerability
|
## CVE-2020-14062 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.11.3.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.11.3/jackson-databind-2.8.11.3.jar</p>
<p>
Dependency Hierarchy:
- core-3.15.42.jar (Root Library)
- spring-boot-starter-web-1.5.22.RELEASE.jar
- :x: **jackson-databind-2.8.11.3.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/hygieia/hygieia-whitesource-collector/commit/4b5ed1d2f3030d721692ff4f980e8d2467fde19b">4b5ed1d2f3030d721692ff4f980e8d2467fde19b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.5 mishandles the interaction between serialization gadgets and typing, related to com.sun.org.apache.xalan.internal.lib.sql.JNDIConnectionPool (aka xalan2).
<p>Publish Date: 2020-06-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-14062>CVE-2020-14062</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14062">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14062</a></p>
<p>Release Date: 2020-06-14</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.10.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2020-14062 (High) detected in jackson-databind-2.8.11.3.jar - autoclosed - ## CVE-2020-14062 - High Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>jackson-databind-2.8.11.3.jar</b></p></summary>
<p>General data-binding functionality for Jackson: works on core streaming API</p>
<p>Library home page: <a href="http://github.com/FasterXML/jackson">http://github.com/FasterXML/jackson</a></p>
<p>Path to dependency file: /pom.xml</p>
<p>Path to vulnerable library: /home/wss-scanner/.m2/repository/com/fasterxml/jackson/core/jackson-databind/2.8.11.3/jackson-databind-2.8.11.3.jar</p>
<p>
Dependency Hierarchy:
- core-3.15.42.jar (Root Library)
- spring-boot-starter-web-1.5.22.RELEASE.jar
- :x: **jackson-databind-2.8.11.3.jar** (Vulnerable Library)
<p>Found in HEAD commit: <a href="https://github.com/hygieia/hygieia-whitesource-collector/commit/4b5ed1d2f3030d721692ff4f980e8d2467fde19b">4b5ed1d2f3030d721692ff4f980e8d2467fde19b</a></p>
<p>Found in base branch: <b>main</b></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/high_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
FasterXML jackson-databind 2.x before 2.9.10.5 mishandles the interaction between serialization gadgets and typing, related to com.sun.org.apache.xalan.internal.lib.sql.JNDIConnectionPool (aka xalan2).
<p>Publish Date: 2020-06-14
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2020-14062>CVE-2020-14062</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>8.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: High
- Privileges Required: None
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: High
- Availability Impact: High
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14062">https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-14062</a></p>
<p>Release Date: 2020-06-14</p>
<p>Fix Resolution: com.fasterxml.jackson.core:jackson-databind:2.10.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_infrastructure
|
cve high detected in jackson databind jar autoclosed cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy core jar root library spring boot starter web release jar x jackson databind jar vulnerable library found in head commit a href found in base branch main vulnerability details fasterxml jackson databind x before mishandles the interaction between serialization gadgets and typing related to com sun org apache xalan internal lib sql jndiconnectionpool aka publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution com fasterxml jackson core jackson databind step up your open source security game with mend
| 0
|
4,126
| 4,822,665,965
|
IssuesEvent
|
2016-11-05 23:52:14
|
LOZORD/xanadu
|
https://api.github.com/repos/LOZORD/xanadu
|
opened
|
No slow tests!
|
enhancement infrastructure
|
None of the Mocha tests should be slow (yellow). We should try to remove any explicit `this.slow` calls, if possible.
|
1.0
|
No slow tests! - None of the Mocha tests should be slow (yellow). We should try to remove any explicit `this.slow` calls, if possible.
|
infrastructure
|
no slow tests none of the mocha tests should be slow yellow we should try to remove any explicit this slow calls if possible
| 1
|
28,541
| 23,323,408,254
|
IssuesEvent
|
2022-08-08 18:38:12
|
opensearch-project/k-NN
|
https://api.github.com/repos/opensearch-project/k-NN
|
opened
|
Add OSB workload that can run indexing and querying in parallel
|
Infrastructure
|
## Description
One common benchmark question that arises for the plugin is how does indexing impact querying performance. OpenSearch benchmarks has the ability to run a workload that executes 2 tasks in parallel.
We should add a new workload to [our extensions](https://github.com/opensearch-project/k-NN/tree/main/benchmarks/osb) that will allow users to benchmark plugin performance for a configurable indexing and querying throughput.
The workload should be broken up into the following operations:
1. Create a configurable k-NN index. We should be able to create an index from a model or not.
2. Ingest a base set of documents into the index
3. Warmup the index for querying workload
4. In parallel, index a set of documents at a configurable throughput and run a set of queries at a configurable throughput.
Further, we should be able to compare the numbers against existing benchmarks when there are no parallel operations going on.
## Links
1. [OSB Workload Schema](https://github.com/opensearch-project/opensearch-benchmark/blob/main/osbenchmark/resources/workload-schema.json)
|
1.0
|
Add OSB workload that can run indexing and querying in parallel - ## Description
One common benchmark question that arises for the plugin is how does indexing impact querying performance. OpenSearch benchmarks has the ability to run a workload that executes 2 tasks in parallel.
We should add a new workload to [our extensions](https://github.com/opensearch-project/k-NN/tree/main/benchmarks/osb) that will allow users to benchmark plugin performance for a configurable indexing and querying throughput.
The workload should be broken up into the following operations:
1. Create a configurable k-NN index. We should be able to create an index from a model or not.
2. Ingest a base set of documents into the index
3. Warmup the index for querying workload
4. In parallel, index a set of documents at a configurable throughput and run a set of queries at a configurable throughput.
Further, we should be able to compare the numbers against existing benchmarks when there are no parallel operations going on.
## Links
1. [OSB Workload Schema](https://github.com/opensearch-project/opensearch-benchmark/blob/main/osbenchmark/resources/workload-schema.json)
|
infrastructure
|
add osb workload that can run indexing and querying in parallel description one common benchmark question that arises for the plugin is how does indexing impact querying performance opensearch benchmarks has the ability to run a workload that executes tasks in parallel we should add a new workload to that will allow users to benchmark plugin performance for a configurable indexing and querying throughput the workload should be broken up into the following operations create a configurable k nn index we should be able to create an index from a model or not ingest a base set of documents into the index warmup the index for querying workload in parallel index a set of documents at a configurable throughput and run a set of queries at a configurable throughput further we should be able to compare the numbers against existing benchmarks when there are no parallel operations going on links
| 1
|
630,450
| 20,109,538,108
|
IssuesEvent
|
2022-02-07 13:54:33
|
googleapis/python-spanner-django
|
https://api.github.com/repos/googleapis/python-spanner-django
|
closed
|
Incompatible with google-cloud-spanner 3.12 ->> UserWarning: The `rowcount` property is non-operational | Cannot update/delete model objects
|
type: bug priority: p2 api: spanner
|
#### Environment details
- Programming language: Python
- OS: MacOS Big Sur 11.4
- Language runtime version: 3.8.9
- Package version: Django 3.2.2 and 3.2.9 with django-cloud-spanner 3.0.0 and google-cloud spanner 3.12.0
#### Steps to reproduce
1. Follow the "from scratch" documentation as outlined in the readme to the letter
2. Instantiate and save a model, e.g.: `a = MyModel.objects.create(name="testname")`
3. Try to delete the model with: `a.delete()`
4. This will trigger the following error:
```
/Users/me/.pyenv/versions/3.8.9/envs/spannertest/lib/python3.8/site-packages/django/db/backends/utils.py:22: UserWarning: The `rowcount` property is non-operational. Request resulting rows are streamed by the `fetch*()` methods and can't be counted before they are all streamed.
cursor_attr = getattr(self.cursor, attr)
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/Users/me/.pyenv/versions/3.8.9/envs/spannertest/lib/python3.8/site-packages/django/db/models/base.py", line 954, in delete
return collector.delete()
File "/Users/me/.pyenv/versions/3.8.9/envs/spannertest/lib/python3.8/site-packages/django/db/models/deletion.py", line 396, in delete
count = sql.DeleteQuery(model).delete_batch([instance.pk], self.using)
File "/Users/me/.pyenv/versions/3.8.9/envs/spannertest/lib/python3.8/site-packages/django/db/models/sql/subqueries.py", line 43, in delete_batch
num_deleted += self.do_query(self.get_meta().db_table, self.where, using=using)
TypeError: unsupported operand type(s) for +=: 'int' and 'NoneType'
```
A similar error will be raised when trying to update the model: `a.name = "new name" & a.save()`.
|
1.0
|
Incompatible with google-cloud-spanner 3.12 ->> UserWarning: The `rowcount` property is non-operational | Cannot update/delete model objects - #### Environment details
- Programming language: Python
- OS: MacOS Big Sur 11.4
- Language runtime version: 3.8.9
- Package version: Django 3.2.2 and 3.2.9 with django-cloud-spanner 3.0.0 and google-cloud spanner 3.12.0
#### Steps to reproduce
1. Follow the "from scratch" documentation as outlined in the readme to the letter
2. Instantiate and save a model, e.g.: `a = MyModel.objects.create(name="testname")`
3. Try to delete the model with: `a.delete()`
4. This will trigger the following error:
```
/Users/me/.pyenv/versions/3.8.9/envs/spannertest/lib/python3.8/site-packages/django/db/backends/utils.py:22: UserWarning: The `rowcount` property is non-operational. Request resulting rows are streamed by the `fetch*()` methods and can't be counted before they are all streamed.
cursor_attr = getattr(self.cursor, attr)
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/Users/me/.pyenv/versions/3.8.9/envs/spannertest/lib/python3.8/site-packages/django/db/models/base.py", line 954, in delete
return collector.delete()
File "/Users/me/.pyenv/versions/3.8.9/envs/spannertest/lib/python3.8/site-packages/django/db/models/deletion.py", line 396, in delete
count = sql.DeleteQuery(model).delete_batch([instance.pk], self.using)
File "/Users/me/.pyenv/versions/3.8.9/envs/spannertest/lib/python3.8/site-packages/django/db/models/sql/subqueries.py", line 43, in delete_batch
num_deleted += self.do_query(self.get_meta().db_table, self.where, using=using)
TypeError: unsupported operand type(s) for +=: 'int' and 'NoneType'
```
A similar error will be raised when trying to update the model: `a.name = "new name" & a.save()`.
|
non_infrastructure
|
incompatible with google cloud spanner userwarning the rowcount property is non operational cannot update delete model objects environment details programming language python os macos big sur language runtime version package version django and with django cloud spanner and google cloud spanner steps to reproduce follow the from scratch documentation as outlined in the readme to the letter instantiate and save a model e g a mymodel objects create name testname try to delete the model with a delete this will trigger the following error users me pyenv versions envs spannertest lib site packages django db backends utils py userwarning the rowcount property is non operational request resulting rows are streamed by the fetch methods and can t be counted before they are all streamed cursor attr getattr self cursor attr traceback most recent call last file line in file users me pyenv versions envs spannertest lib site packages django db models base py line in delete return collector delete file users me pyenv versions envs spannertest lib site packages django db models deletion py line in delete count sql deletequery model delete batch self using file users me pyenv versions envs spannertest lib site packages django db models sql subqueries py line in delete batch num deleted self do query self get meta db table self where using using typeerror unsupported operand type s for int and nonetype a similar error will be raised when trying to update the model a name new name a save
| 0
|
26,212
| 19,726,077,215
|
IssuesEvent
|
2022-01-13 20:06:16
|
department-of-veterans-affairs/va.gov-team
|
https://api.github.com/repos/department-of-veterans-affairs/va.gov-team
|
opened
|
[Vets-API] Perform load testing in EKS dev cluster
|
operations devops infrastructure eks
|
## Description
Run load testing to make sure that the Vets-API is stable and performant in the EKS dev cluster.
## Technical notes
- This may have been done already with EKS in general... but, it should now be done on the dev-api.va.gov application in the EKS
- The load test should be equivalent to real-world production traffic volumes
## Tasks
- [ ] _What things need to happen?_
## Acceptance Criteria
- [ ] Load test has been performed against dev-api.va.gov and results recorded
|
1.0
|
[Vets-API] Perform load testing in EKS dev cluster - ## Description
Run load testing to make sure that the Vets-API is stable and performant in the EKS dev cluster.
## Technical notes
- This may have been done already with EKS in general... but, it should now be done on the dev-api.va.gov application in the EKS
- The load test should be equivalent to real-world production traffic volumes
## Tasks
- [ ] _What things need to happen?_
## Acceptance Criteria
- [ ] Load test has been performed against dev-api.va.gov and results recorded
|
infrastructure
|
perform load testing in eks dev cluster description run load testing to make sure that the vets api is stable and performant in the eks dev cluster technical notes this may have been done already with eks in general but it should now be done on the dev api va gov application in the eks the load test should be equivalent to real world production traffic volumes tasks what things need to happen acceptance criteria load test has been performed against dev api va gov and results recorded
| 1
|
21,378
| 14,542,245,288
|
IssuesEvent
|
2020-12-15 15:29:34
|
robotology/QA
|
https://api.github.com/repos/robotology/QA
|
closed
|
Qt & Yarp + iCub failed to link on Windows 10
|
infrastructure software
|
@gregoire-pointeau commented on [Tue Jun 21 2016](https://github.com/robotology/yarp/issues/808)
Hello,
My installation was working on windows 10 until an automatic update last Friday (17 June).
Since I have the following problem when I launch any GUI (`yarpmanager`, `iCubGui`...)
I have the following error message:
for `yarpmanager`:
> `?setSelectionModel@QListWidget@@UAEXPAVQItemSelectionModel@@@Z process entry point not found in the library link library C:\Robot\yarp\build\bin\Release\yarpmanager.exe`
for `iCubGui`:
> `?getProcAddress@QOpenGLContext@@QBEP6AXXZPBD@Z process entry point not found in the library link library C:\Robot\robotology\Qt\5.7\msvc2013\bin\Qt5OpenGL.dll`
I reinstalled Qt, re-pull everything, and rebuilt several times everything.
My configuration:
- Windows 10
- MVS 12 2013
- Cmake 3.5.2
- Qt 5.7
`Qt5_DIR: C:\Robot\robotology\Qt\5.7\msvc2013\lib\cmake`
INCLUDE has:
`C:\Robot\robotology\Qt\5.7\msvc2013\include`
PATH has:
`C:\Robot\robotology\Qt\5.7\msvc2013\lib`
`C:\Robot\robotology\Qt\5.7\msvc2013\bin`
Does anyone had a similar problem using windows 10?
Thanks
---
@gregoire-pointeau commented on [Tue Jul 26 2016](https://github.com/robotology/yarp/issues/808#issuecomment-235257339)
Any update on it anyone ?
---
@drdanz commented on [Tue Jul 26 2016](https://github.com/robotology/yarp/issues/808#issuecomment-235403730)
@gregoire-pointeau I'm sorry, I'm not a Windows user, perhaps @randaz81, @pattacini or @mbrunettini saw something similar? Are you sure that you don't have more than one qt5 installation in your path? I've seen strange behaviours on windows with recent versions of CMake that include qt5 dlls in its path.
Anyway it looks to me something related to your setup, not a bug in yarp, therefore I'm closing this, please reopen it if you find out that the bug is actually in YARP, or open a new one in the robotology/QA if you need more support with this issue.
|
1.0
|
Qt & Yarp + iCub failed to link on Windows 10 - @gregoire-pointeau commented on [Tue Jun 21 2016](https://github.com/robotology/yarp/issues/808)
Hello,
My installation was working on windows 10 until an automatic update last Friday (17 June).
Since I have the following problem when I launch any GUI (`yarpmanager`, `iCubGui`...)
I have the following error message:
for `yarpmanager`:
> `?setSelectionModel@QListWidget@@UAEXPAVQItemSelectionModel@@@Z process entry point not found in the library link library C:\Robot\yarp\build\bin\Release\yarpmanager.exe`
for `iCubGui`:
> `?getProcAddress@QOpenGLContext@@QBEP6AXXZPBD@Z process entry point not found in the library link library C:\Robot\robotology\Qt\5.7\msvc2013\bin\Qt5OpenGL.dll`
I reinstalled Qt, re-pull everything, and rebuilt several times everything.
My configuration:
- Windows 10
- MVS 12 2013
- Cmake 3.5.2
- Qt 5.7
`Qt5_DIR: C:\Robot\robotology\Qt\5.7\msvc2013\lib\cmake`
INCLUDE has:
`C:\Robot\robotology\Qt\5.7\msvc2013\include`
PATH has:
`C:\Robot\robotology\Qt\5.7\msvc2013\lib`
`C:\Robot\robotology\Qt\5.7\msvc2013\bin`
Does anyone had a similar problem using windows 10?
Thanks
---
@gregoire-pointeau commented on [Tue Jul 26 2016](https://github.com/robotology/yarp/issues/808#issuecomment-235257339)
Any update on it anyone ?
---
@drdanz commented on [Tue Jul 26 2016](https://github.com/robotology/yarp/issues/808#issuecomment-235403730)
@gregoire-pointeau I'm sorry, I'm not a Windows user, perhaps @randaz81, @pattacini or @mbrunettini saw something similar? Are you sure that you don't have more than one qt5 installation in your path? I've seen strange behaviours on windows with recent versions of CMake that include qt5 dlls in its path.
Anyway it looks to me something related to your setup, not a bug in yarp, therefore I'm closing this, please reopen it if you find out that the bug is actually in YARP, or open a new one in the robotology/QA if you need more support with this issue.
|
infrastructure
|
qt yarp icub failed to link on windows gregoire pointeau commented on hello my installation was working on windows until an automatic update last friday june since i have the following problem when i launch any gui yarpmanager icubgui i have the following error message for yarpmanager setselectionmodel qlistwidget uaexpavqitemselectionmodel z process entry point not found in the library link library c robot yarp build bin release yarpmanager exe for icubgui getprocaddress qopenglcontext z process entry point not found in the library link library c robot robotology qt bin dll i reinstalled qt re pull everything and rebuilt several times everything my configuration windows mvs cmake qt dir c robot robotology qt lib cmake include has c robot robotology qt include path has c robot robotology qt lib c robot robotology qt bin does anyone had a similar problem using windows thanks gregoire pointeau commented on any update on it anyone drdanz commented on gregoire pointeau i m sorry i m not a windows user perhaps pattacini or mbrunettini saw something similar are you sure that you don t have more than one installation in your path i ve seen strange behaviours on windows with recent versions of cmake that include dlls in its path anyway it looks to me something related to your setup not a bug in yarp therefore i m closing this please reopen it if you find out that the bug is actually in yarp or open a new one in the robotology qa if you need more support with this issue
| 1
|
37,039
| 9,942,177,933
|
IssuesEvent
|
2019-07-03 13:22:07
|
gpac/gpac
|
https://api.github.com/repos/gpac/gpac
|
closed
|
Support OpenJPEG 2
|
build feature-request player (mp4client/osmo)
|
Debian bug: https://bugs.debian.org/826814
OpenJPEG 1 is about to be removed from Debian so the OpenJPEG code in GPAC needs to be ported to OpenJPEG 2, or the JPEG2000 reader will have to be disabled in Debian (and probably other downstreams when they start removing OpenJPEG 1).
|
1.0
|
Support OpenJPEG 2 - Debian bug: https://bugs.debian.org/826814
OpenJPEG 1 is about to be removed from Debian so the OpenJPEG code in GPAC needs to be ported to OpenJPEG 2, or the JPEG2000 reader will have to be disabled in Debian (and probably other downstreams when they start removing OpenJPEG 1).
|
non_infrastructure
|
support openjpeg debian bug openjpeg is about to be removed from debian so the openjpeg code in gpac needs to be ported to openjpeg or the reader will have to be disabled in debian and probably other downstreams when they start removing openjpeg
| 0
|
25,072
| 18,075,603,801
|
IssuesEvent
|
2021-09-21 09:31:46
|
fremtind/jokul
|
https://api.github.com/repos/fremtind/jokul
|
closed
|
Bygddtiden på portalen kryper mot 2 minutter
|
👷♂️ CI and deployment 🚇 infrastructure 👽portal github_actions
|
**Feilbeskrivelse**
Siden portalen er grunnsteinen i alt vi lager, så begynner det å bli litt ubehagelig lang ventetid, både i utvikling og på CI-serveren.
**Forventet oppførsel**
Vi burde være litt raskere. Vi kan gjøre statiske optimaliseringer av bildene, isteden for å bruke sharp-transformeren for bildene, det burde shave vesentlige deler av bygget.
Vi kan splitte byggene, så vi kan bygge pakkene våre i en action, for å tilgjengeliggjøre assetene for andre actions, dermed trenger ikke portalbygge å bygge noe annet enn portalen.
|
1.0
|
Bygddtiden på portalen kryper mot 2 minutter - **Feilbeskrivelse**
Siden portalen er grunnsteinen i alt vi lager, så begynner det å bli litt ubehagelig lang ventetid, både i utvikling og på CI-serveren.
**Forventet oppførsel**
Vi burde være litt raskere. Vi kan gjøre statiske optimaliseringer av bildene, isteden for å bruke sharp-transformeren for bildene, det burde shave vesentlige deler av bygget.
Vi kan splitte byggene, så vi kan bygge pakkene våre i en action, for å tilgjengeliggjøre assetene for andre actions, dermed trenger ikke portalbygge å bygge noe annet enn portalen.
|
infrastructure
|
bygddtiden på portalen kryper mot minutter feilbeskrivelse siden portalen er grunnsteinen i alt vi lager så begynner det å bli litt ubehagelig lang ventetid både i utvikling og på ci serveren forventet oppførsel vi burde være litt raskere vi kan gjøre statiske optimaliseringer av bildene isteden for å bruke sharp transformeren for bildene det burde shave vesentlige deler av bygget vi kan splitte byggene så vi kan bygge pakkene våre i en action for å tilgjengeliggjøre assetene for andre actions dermed trenger ikke portalbygge å bygge noe annet enn portalen
| 1
|
22,929
| 15,684,753,454
|
IssuesEvent
|
2021-03-25 10:22:23
|
ComplianceAsCode/content
|
https://api.github.com/repos/ComplianceAsCode/content
|
closed
|
Generic python shebang?
|
Infrastructure Python Ubuntu
|
#### Description of problem:
A lot of shebangs are currently using `#!/usr/bin/env python2`:
```console
shared/transforms/pcidss/generate_pcidss_json.py:1:#!/usr/bin/env python2
shared/transforms/pcidss/transform_benchmark_to_pcidss.py:1:#!/usr/bin/env python2
build-scripts/relabel_ids.py:1:#!/usr/bin/env python2
build-scripts/unselect_empty_xccdf_groups.py:1:#!/usr/bin/env python2
build-scripts/build_templated_content.py:1:#!/usr/bin/env python2
build-scripts/generate_bash_remediation_functions.py:1:#!/usr/bin/env python2
build-scripts/verify_references.py:1:#!/usr/bin/env python2
build-scripts/oscap_svg_support.py:1:#!/usr/bin/env python2
build-scripts/combine_ovals.py:1:#!/usr/bin/env python2
build-scripts/enable_derivatives.py:1:#!/usr/bin/env python2
build-scripts/build_all_guides.py:1:#!/usr/bin/env python2
build-scripts/combine_remediations.py:1:#!/usr/bin/env python2
build-scripts/sds_move_ocil_to_checks.py:1:#!/usr/bin/env python2
build-scripts/add_stig_references.py:1:#!/usr/bin/env python2
build-scripts/build_profile_remediations.py:1:#!/usr/bin/env python2
build-scripts/cpe_generate.py:1:#!/usr/bin/env python2
build-scripts/generate_fixes_xml.py:1:#!/usr/bin/env python2
utils/fix-rules.py:1:#!/usr/bin/env python2
utils/count_oval_objects.py:1:#!/usr/bin/env python2
utils/find_duplicates.py:1:#!/usr/bin/env python2
utils/generate_contributors.py:1:#!/usr/bin/env python2
utils/xccdf2csv-stig.py:1:#!/usr/bin/env python2
utils/ansible_playbook_to_role.py:1:#!/usr/bin/env python2
utils/testoval.py:1:#!/usr/bin/env python2
tests/ensure_paths_are_short.py:1:#!/usr/bin/env python2
tests/test_suite.py:1:#!/usr/bin/env python2
tests/ssg_test_suite/xml_operations.py:1:#!/usr/bin/env python2
tests/ssg_test_suite/oscap.py:1:#!/usr/bin/env python2
tests/ssg_test_suite/virt.py:1:#!/usr/bin/env python2
tests/ssg_test_suite/profile.py:1:#!/usr/bin/env python2
tests/ssg_test_suite/combined.py:1:#!/usr/bin/env python2
tests/install_vm.py:1:#!/usr/bin/env python2
```
A lot of work was done earlier to make SSG work well under both Python2 and Python3.
Could we perhaps move these to an agnostic `#!/usr/bin/env python` if we can't directly switch to `python3`?
I'd imagine Fedora would have the same problem..
#### SCAP Security Guide Version:
`master` as of `6fbb68c8344fbf16994e4dc3bd3e207495181f8c`
#### Operating System Version:
Ubuntu 20.10
|
1.0
|
Generic python shebang? - #### Description of problem:
A lot of shebangs are currently using `#!/usr/bin/env python2`:
```console
shared/transforms/pcidss/generate_pcidss_json.py:1:#!/usr/bin/env python2
shared/transforms/pcidss/transform_benchmark_to_pcidss.py:1:#!/usr/bin/env python2
build-scripts/relabel_ids.py:1:#!/usr/bin/env python2
build-scripts/unselect_empty_xccdf_groups.py:1:#!/usr/bin/env python2
build-scripts/build_templated_content.py:1:#!/usr/bin/env python2
build-scripts/generate_bash_remediation_functions.py:1:#!/usr/bin/env python2
build-scripts/verify_references.py:1:#!/usr/bin/env python2
build-scripts/oscap_svg_support.py:1:#!/usr/bin/env python2
build-scripts/combine_ovals.py:1:#!/usr/bin/env python2
build-scripts/enable_derivatives.py:1:#!/usr/bin/env python2
build-scripts/build_all_guides.py:1:#!/usr/bin/env python2
build-scripts/combine_remediations.py:1:#!/usr/bin/env python2
build-scripts/sds_move_ocil_to_checks.py:1:#!/usr/bin/env python2
build-scripts/add_stig_references.py:1:#!/usr/bin/env python2
build-scripts/build_profile_remediations.py:1:#!/usr/bin/env python2
build-scripts/cpe_generate.py:1:#!/usr/bin/env python2
build-scripts/generate_fixes_xml.py:1:#!/usr/bin/env python2
utils/fix-rules.py:1:#!/usr/bin/env python2
utils/count_oval_objects.py:1:#!/usr/bin/env python2
utils/find_duplicates.py:1:#!/usr/bin/env python2
utils/generate_contributors.py:1:#!/usr/bin/env python2
utils/xccdf2csv-stig.py:1:#!/usr/bin/env python2
utils/ansible_playbook_to_role.py:1:#!/usr/bin/env python2
utils/testoval.py:1:#!/usr/bin/env python2
tests/ensure_paths_are_short.py:1:#!/usr/bin/env python2
tests/test_suite.py:1:#!/usr/bin/env python2
tests/ssg_test_suite/xml_operations.py:1:#!/usr/bin/env python2
tests/ssg_test_suite/oscap.py:1:#!/usr/bin/env python2
tests/ssg_test_suite/virt.py:1:#!/usr/bin/env python2
tests/ssg_test_suite/profile.py:1:#!/usr/bin/env python2
tests/ssg_test_suite/combined.py:1:#!/usr/bin/env python2
tests/install_vm.py:1:#!/usr/bin/env python2
```
A lot of work was done earlier to make SSG work well under both Python2 and Python3.
Could we perhaps move these to an agnostic `#!/usr/bin/env python` if we can't directly switch to `python3`?
I'd imagine Fedora would have the same problem..
#### SCAP Security Guide Version:
`master` as of `6fbb68c8344fbf16994e4dc3bd3e207495181f8c`
#### Operating System Version:
Ubuntu 20.10
|
infrastructure
|
generic python shebang description of problem a lot of shebangs are currently using usr bin env console shared transforms pcidss generate pcidss json py usr bin env shared transforms pcidss transform benchmark to pcidss py usr bin env build scripts relabel ids py usr bin env build scripts unselect empty xccdf groups py usr bin env build scripts build templated content py usr bin env build scripts generate bash remediation functions py usr bin env build scripts verify references py usr bin env build scripts oscap svg support py usr bin env build scripts combine ovals py usr bin env build scripts enable derivatives py usr bin env build scripts build all guides py usr bin env build scripts combine remediations py usr bin env build scripts sds move ocil to checks py usr bin env build scripts add stig references py usr bin env build scripts build profile remediations py usr bin env build scripts cpe generate py usr bin env build scripts generate fixes xml py usr bin env utils fix rules py usr bin env utils count oval objects py usr bin env utils find duplicates py usr bin env utils generate contributors py usr bin env utils stig py usr bin env utils ansible playbook to role py usr bin env utils testoval py usr bin env tests ensure paths are short py usr bin env tests test suite py usr bin env tests ssg test suite xml operations py usr bin env tests ssg test suite oscap py usr bin env tests ssg test suite virt py usr bin env tests ssg test suite profile py usr bin env tests ssg test suite combined py usr bin env tests install vm py usr bin env a lot of work was done earlier to make ssg work well under both and could we perhaps move these to an agnostic usr bin env python if we can t directly switch to i d imagine fedora would have the same problem scap security guide version master as of operating system version ubuntu
| 1
|
748,596
| 26,128,819,834
|
IssuesEvent
|
2022-12-28 23:39:19
|
microsoft/PowerToys
|
https://api.github.com/repos/microsoft/PowerToys
|
closed
|
Video Conference Overlay opens on startup when "Enable Video Conference" feature is set to "Off".
|
Issue-Bug Priority-2 Product-Video Conference Mute
|
I have power toys set to startup with windows an the "Video Conference" feature disabled yet the overlay opens on startup still.
Steps to reproduce:
1. Set power toys to start with windows
2. Disable "Video Conference" feature
3. Restart computer
Expected:
1. Power Toys starts with windows with no video conference overlay
Actual:
1. Video conference overlay appears in top right corner when starting windows
2. To disable the overlay user must toggle "Enable Video Conference" feature on then off.
|
1.0
|
Video Conference Overlay opens on startup when "Enable Video Conference" feature is set to "Off". - I have power toys set to startup with windows an the "Video Conference" feature disabled yet the overlay opens on startup still.
Steps to reproduce:
1. Set power toys to start with windows
2. Disable "Video Conference" feature
3. Restart computer
Expected:
1. Power Toys starts with windows with no video conference overlay
Actual:
1. Video conference overlay appears in top right corner when starting windows
2. To disable the overlay user must toggle "Enable Video Conference" feature on then off.
|
non_infrastructure
|
video conference overlay opens on startup when enable video conference feature is set to off i have power toys set to startup with windows an the video conference feature disabled yet the overlay opens on startup still steps to reproduce set power toys to start with windows disable video conference feature restart computer expected power toys starts with windows with no video conference overlay actual video conference overlay appears in top right corner when starting windows to disable the overlay user must toggle enable video conference feature on then off
| 0
|
195,596
| 6,913,299,623
|
IssuesEvent
|
2017-11-28 14:53:55
|
chingu-coders/Voyage2-Bears-27
|
https://api.github.com/repos/chingu-coders/Voyage2-Bears-27
|
closed
|
Establish Style Specifications
|
priority:must have scope:story type:feature
|
As a WebDev I need style specifications for common app components, like headers/footers/buttons, that are shared across all pages.
|
1.0
|
Establish Style Specifications - As a WebDev I need style specifications for common app components, like headers/footers/buttons, that are shared across all pages.
|
non_infrastructure
|
establish style specifications as a webdev i need style specifications for common app components like headers footers buttons that are shared across all pages
| 0
|
3,728
| 4,513,860,609
|
IssuesEvent
|
2016-09-04 14:47:56
|
matthiasbeyer/imag
|
https://api.github.com/repos/matthiasbeyer/imag
|
closed
|
libimagref: Custom hash functions
|
complexity/medium kind/enhancement kind/feature kind/infrastructure meta/blocker meta/importance/high meta/WIP part/lib/imagref
|
We need (trust me, we need this) custom hash functions in `libimagref`.
This does not mean that we want to replace SHA1 as hash function (but we could offer SHA512 and others via RefFlags, ... just popped into my mind).
I want to be able to define _what_ gets hashed ... at the moment, the `libimagref` implementation simply hashes the complete content.
It could be a good idea, though, to hash only _parts_ of a file. For speed optimizations, we could only hash the first N bytes (as described in #637) - but for refering to music files, movie files or even mail files, it could be a good idea to hash some of the content parts of a file - so the _semantics_ of the file matter here!
---
I will implement this - maybe even today.
|
1.0
|
libimagref: Custom hash functions - We need (trust me, we need this) custom hash functions in `libimagref`.
This does not mean that we want to replace SHA1 as hash function (but we could offer SHA512 and others via RefFlags, ... just popped into my mind).
I want to be able to define _what_ gets hashed ... at the moment, the `libimagref` implementation simply hashes the complete content.
It could be a good idea, though, to hash only _parts_ of a file. For speed optimizations, we could only hash the first N bytes (as described in #637) - but for refering to music files, movie files or even mail files, it could be a good idea to hash some of the content parts of a file - so the _semantics_ of the file matter here!
---
I will implement this - maybe even today.
|
infrastructure
|
libimagref custom hash functions we need trust me we need this custom hash functions in libimagref this does not mean that we want to replace as hash function but we could offer and others via refflags just popped into my mind i want to be able to define what gets hashed at the moment the libimagref implementation simply hashes the complete content it could be a good idea though to hash only parts of a file for speed optimizations we could only hash the first n bytes as described in but for refering to music files movie files or even mail files it could be a good idea to hash some of the content parts of a file so the semantics of the file matter here i will implement this maybe even today
| 1
|
5,080
| 5,429,706,173
|
IssuesEvent
|
2017-03-03 19:10:02
|
crystal-lang/crystal
|
https://api.github.com/repos/crystal-lang/crystal
|
opened
|
LLVM < 3.8 generates incorrect binary in release mode
|
kind:bug topic:infrastructure
|
We have #4057, #3013 and #3695 as examples. We should stop supporting LLVM 3.5 (I'm not sure this happens in 3.6, but it's a relatively old version).
|
1.0
|
LLVM < 3.8 generates incorrect binary in release mode - We have #4057, #3013 and #3695 as examples. We should stop supporting LLVM 3.5 (I'm not sure this happens in 3.6, but it's a relatively old version).
|
infrastructure
|
llvm generates incorrect binary in release mode we have and as examples we should stop supporting llvm i m not sure this happens in but it s a relatively old version
| 1
|
72,551
| 19,318,572,717
|
IssuesEvent
|
2021-12-14 01:01:50
|
tensorflow/tensorflow
|
https://api.github.com/repos/tensorflow/tensorflow
|
opened
|
[r2.6][PR] Cannot find real source of dependency `No such file or directory` error when building from scratch
|
type:build/install
|
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- TensorFlow installed from (source or binary): source
- TensorFlow version: r2.6
- Python version: 3.9.5
- Installed using virtualenv? pip? conda?: `miniconda`
- Bazel version (if compiling from source): 3.7.2
- GCC/Compiler version (if compiling from source): 7.5.0
**Describe the problem**
I am currently working on a relatively large PR to [introduce ZSTD support in TF](https://github.com/tensorflow/tensorflow/pull/53385) (don't want to focus on why just now) for `TFRecordWriter`, which currently supports `ZLIB` and `GZIP`.
To introduce my changes, I am of course writing some tests, and as I am working my way up to the dependencies chain, I have stumbled upon this error:
```
(tensorflow) ubuntu@tensorflow-compression-build-1:~/Workspace/tensorflow$ bazel test //tensorflow/core/lib/io/zstd:zstd_test --test_filter=* --verbose_failures
INFO: Options provided by the client:
Inherited 'common' options: --isatty=1 --terminal_columns=143
INFO: Reading rc options for 'test' from /home/ubuntu/Workspace/tensorflow/.bazelrc:
Inherited 'common' options: --experimental_repo_remote_exec
INFO: Reading rc options for 'test' from /home/ubuntu/Workspace/tensorflow/.bazelrc:
Inherited 'build' options: --define framework_shared_object=true --java_toolchain=@tf_toolchains//toolchains/java:tf_java_toolchain --host_java_toolchain=@tf_toolchains//toolchains/java:tf_java_toolchain --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2 --define=no_aws_support=true --define=no_hdfs_support=true
INFO: Reading rc options for 'test' from /home/ubuntu/Workspace/tensorflow/.tf_configure.bazelrc:
Inherited 'build' options: --action_env PYTHON_BIN_PATH=/home/ubuntu/miniconda3/envs/tensorflow/bin/python --action_env PYTHON_LIB_PATH=/home/ubuntu/miniconda3/envs/tensorflow/lib/python3.9/site-packages --python_path=/home/ubuntu/miniconda3/envs/tensorflow/bin/python
INFO: Reading rc options for 'test' from /home/ubuntu/Workspace/tensorflow/.tf_configure.bazelrc:
'test' options: --flaky_test_attempts=3 --test_size_filters=small,medium
INFO: Found applicable config definition build:short_logs in file /home/ubuntu/Workspace/tensorflow/.bazelrc: --output_filter=DONT_MATCH_ANYTHING
INFO: Found applicable config definition build:v2 in file /home/ubuntu/Workspace/tensorflow/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
INFO: Found applicable config definition test:v2 in file /home/ubuntu/Workspace/tensorflow/.tf_configure.bazelrc: --test_tag_filters=-benchmark-test,-no_oss,-gpu,-oss_serial,-v1only --build_tag_filters=-benchmark-test,-no_oss,-gpu,-v1only
INFO: Found applicable config definition build:linux in file /home/ubuntu/Workspace/tensorflow/.bazelrc: --copt=-w --host_copt=-w --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --define=PROTOBUF_INCLUDE_PATH=$(PREFIX)/include --cxxopt=-std=c++14 --host_cxxopt=-std=c++14 --config=dynamic_kernels --distinct_host_configuration=false
INFO: Found applicable config definition build:dynamic_kernels in file /home/ubuntu/Workspace/tensorflow/.bazelrc: --define=dynamic_loaded_kernels=true --copt=-DAUTOLOAD_DYNAMIC_KERNELS
DEBUG: /home/ubuntu/.cache/bazel/_bazel_ubuntu/ef409778dc6f6079679ade72bc957ee0/external/tf_runtime/third_party/cuda/dependencies.bzl:51:10: The following command will download NVIDIA proprietary software. By using the software you agree to comply with the terms of the license agreement that accompanies the software. If you do not agree to the terms of the license agreement, do not use the software.
INFO: Analyzed target //tensorflow/core/lib/io/zstd:zstd_test (1 packages loaded, 70 targets configured).
INFO: Found 1 test target...
ERROR: /home/ubuntu/Workspace/tensorflow/tensorflow/core/BUILD:1610:16: C++ compilation of rule '//tensorflow/core:framework_internal_impl' failed (Exit 1): gcc failed: error executing command
(cd /home/ubuntu/.cache/bazel/_bazel_ubuntu/ef409778dc6f6079679ade72bc957ee0/execroot/org_tensorflow && \
exec env - \
PATH=/home/ubuntu/.cache/bazelisk/downloads/bazelbuild/bazel-3.7.2-linux-x86_64/bin:/home/ubuntu/.vscode-server/bin/7db1a2b88f7557e0a43fec75b6ba7e50b3e9f77e/bin:/home/ubuntu/miniconda3/envs/tensorflow/bin:/home/ubuntu/miniconda3/condabin:/home/ubuntu/.vscode-server/bin/7db1a2b88f7557e0a43fec75b6ba7e50b3e9f77e/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin \
PWD=/proc/self/cwd \
PYTHON_BIN_PATH=/home/ubuntu/miniconda3/envs/tensorflow/bin/python \
PYTHON_LIB_PATH=/home/ubuntu/miniconda3/envs/tensorflow/lib/python3.9/site-packages \
TF2_BEHAVIOR=1 \
/usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections -fdata-sections '-std=c++11' -MD -MF bazel-out/k8-opt/bin/tensorflow/core/_objs/framework_internal_impl/events_writer.pic.d '-frandom-seed=bazel-out/k8-opt/bin/tensorflow/core/_objs/framework_internal_impl/events_writer.pic.o' -fPIC -DHAVE_SYS_UIO_H -DTF_USE_SNAPPY -DEIGEN_MPL2_ONLY '-DEIGEN_MAX_ALIGN_BYTES=64' -iquote. -iquotebazel-out/k8-opt/bin -iquoteexternal/com_google_protobuf -iquotebazel-out/k8-opt/bin/external/com_google_protobuf -iquoteexternal/eigen_archive -iquotebazel-out/k8-opt/bin/external/eigen_archive -iquoteexternal/com_google_absl -iquotebazel-out/k8-opt/bin/external/com_google_absl -iquoteexternal/nsync -iquotebazel-out/k8-opt/bin/external/nsync -iquoteexternal/gif -iquotebazel-out/k8-opt/bin/external/gif -iquoteexternal/libjpeg_turbo -iquotebazel-out/k8-opt/bin/external/libjpeg_turbo -iquoteexternal/com_googlesource_code_re2 -iquotebazel-out/k8-opt/bin/external/com_googlesource_code_re2 -iquoteexternal/farmhash_archive -iquotebazel-out/k8-opt/bin/external/farmhash_archive -iquoteexternal/fft2d -iquotebazel-out/k8-opt/bin/external/fft2d -iquoteexternal/highwayhash -iquotebazel-out/k8-opt/bin/external/highwayhash -iquoteexternal/zlib -iquotebazel-out/k8-opt/bin/external/zlib -iquoteexternal/double_conversion -iquotebazel-out/k8-opt/bin/external/double_conversion -iquoteexternal/snappy -iquotebazel-out/k8-opt/bin/external/snappy -isystem external/com_google_protobuf/src -isystem bazel-out/k8-opt/bin/external/com_google_protobuf/src -isystem third_party/eigen3/mkl_include -isystem bazel-out/k8-opt/bin/third_party/eigen3/mkl_include -isystem external/eigen_archive -isystem bazel-out/k8-opt/bin/external/eigen_archive -isystem external/nsync/public -isystem bazel-out/k8-opt/bin/external/nsync/public -isystem external/gif -isystem bazel-out/k8-opt/bin/external/gif -isystem external/farmhash_archive/src -isystem bazel-out/k8-opt/bin/external/farmhash_archive/src -isystem external/zlib -isystem bazel-out/k8-opt/bin/external/zlib -isystem external/double_conversion -isystem bazel-out/k8-opt/bin/external/double_conversion -w -DAUTOLOAD_DYNAMIC_KERNELS '-std=c++14' -DEIGEN_AVOID_STL_ARRAY -Iexternal/gemmlowp -Wno-sign-compare '-ftemplate-depth=900' -fno-exceptions '-DTENSORFLOW_USE_XLA=1' -DINTEL_MKL -msse3 -pthread '-DTENSORFLOW_USE_XLA=1' '-DINTEL_MKL=1' -fno-canonical-system-headers -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -c tensorflow/core/util/events_writer.cc -o bazel-out/k8-opt/bin/tensorflow/core/_objs/framework_internal_impl/events_writer.pic.o)
Execution platform: @local_execution_config_platform//:platform
In file included from ./tensorflow/core/lib/io/record_writer.h:29:0,
from ./tensorflow/core/util/events_writer.h:23,
from tensorflow/core/util/events_writer.cc:16:
./tensorflow/core/lib/io/zstd/zstd_outputbuffer.h:19:10: fatal error: zstd.h: No such file or directory
#include <zstd.h>
^~~~~~~~
compilation terminated.
Target //tensorflow/core/lib/io/zstd:zstd_test failed to build
INFO: Elapsed time: 0.926s, Critical Path: 0.28s
INFO: 5 processes: 5 internal.
FAILED: Build did NOT complete successfully
//tensorflow/core/lib/io/zstd:zstd_test FAILED TO BUILD
FAILED: Build did NOT complete successfully
```
And it did happen before, so I just went into the `BUILD` file that the error references, and look if there is somewhere I might have missed a dependency somehow. I have yet to find the real source of the dependency, and I have gone through every possible place where I should have put my dependencies.
**Provide the exact sequence of commands / steps that you executed before running into the problem**
Clone my fork and go to my branch, with commit sha `9d7b6a5e223c59d8d9687ce128dd1ebc5a6ab908` or `build-issue-adrian-compression-r2.6`:
```
git clone https://github.com/IAL32/tensorflow
git checkout 9d7b6a5e223c59d8d9687ce128dd1ebc5a6ab908
```
Build from source however you like, and execute the test:
```
bazel test //tensorflow/core/lib/io/zstd:zstd_test
```
You should now see the error above.
Finally, if you revert the last commit, which changes `tensorflow/core/lib/io/record_writer.cc` and `tensorflow/core/lib/io/record_writer.h` to accommodate my `ZSTD` class, launching the test again works fine.
An overview of the stuff I have changed can be found here: https://github.com/tensorflow/tensorflow/compare/r2.6...IAL32:build-issue-adrian-compression-r2.6?expand=1
I believe this can be solved by including some header as a dependency, but I have been struggling to understand **where** exactly.
Thanks for any help!
|
1.0
|
[r2.6][PR] Cannot find real source of dependency `No such file or directory` error when building from scratch - **System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
- TensorFlow installed from (source or binary): source
- TensorFlow version: r2.6
- Python version: 3.9.5
- Installed using virtualenv? pip? conda?: `miniconda`
- Bazel version (if compiling from source): 3.7.2
- GCC/Compiler version (if compiling from source): 7.5.0
**Describe the problem**
I am currently working on a relatively large PR to [introduce ZSTD support in TF](https://github.com/tensorflow/tensorflow/pull/53385) (don't want to focus on why just now) for `TFRecordWriter`, which currently supports `ZLIB` and `GZIP`.
To introduce my changes, I am of course writing some tests, and as I am working my way up to the dependencies chain, I have stumbled upon this error:
```
(tensorflow) ubuntu@tensorflow-compression-build-1:~/Workspace/tensorflow$ bazel test //tensorflow/core/lib/io/zstd:zstd_test --test_filter=* --verbose_failures
INFO: Options provided by the client:
Inherited 'common' options: --isatty=1 --terminal_columns=143
INFO: Reading rc options for 'test' from /home/ubuntu/Workspace/tensorflow/.bazelrc:
Inherited 'common' options: --experimental_repo_remote_exec
INFO: Reading rc options for 'test' from /home/ubuntu/Workspace/tensorflow/.bazelrc:
Inherited 'build' options: --define framework_shared_object=true --java_toolchain=@tf_toolchains//toolchains/java:tf_java_toolchain --host_java_toolchain=@tf_toolchains//toolchains/java:tf_java_toolchain --define=use_fast_cpp_protos=true --define=allow_oversize_protos=true --spawn_strategy=standalone -c opt --announce_rc --define=grpc_no_ares=true --noincompatible_remove_legacy_whole_archive --enable_platform_specific_config --define=with_xla_support=true --config=short_logs --config=v2 --define=no_aws_support=true --define=no_hdfs_support=true
INFO: Reading rc options for 'test' from /home/ubuntu/Workspace/tensorflow/.tf_configure.bazelrc:
Inherited 'build' options: --action_env PYTHON_BIN_PATH=/home/ubuntu/miniconda3/envs/tensorflow/bin/python --action_env PYTHON_LIB_PATH=/home/ubuntu/miniconda3/envs/tensorflow/lib/python3.9/site-packages --python_path=/home/ubuntu/miniconda3/envs/tensorflow/bin/python
INFO: Reading rc options for 'test' from /home/ubuntu/Workspace/tensorflow/.tf_configure.bazelrc:
'test' options: --flaky_test_attempts=3 --test_size_filters=small,medium
INFO: Found applicable config definition build:short_logs in file /home/ubuntu/Workspace/tensorflow/.bazelrc: --output_filter=DONT_MATCH_ANYTHING
INFO: Found applicable config definition build:v2 in file /home/ubuntu/Workspace/tensorflow/.bazelrc: --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
INFO: Found applicable config definition test:v2 in file /home/ubuntu/Workspace/tensorflow/.tf_configure.bazelrc: --test_tag_filters=-benchmark-test,-no_oss,-gpu,-oss_serial,-v1only --build_tag_filters=-benchmark-test,-no_oss,-gpu,-v1only
INFO: Found applicable config definition build:linux in file /home/ubuntu/Workspace/tensorflow/.bazelrc: --copt=-w --host_copt=-w --define=PREFIX=/usr --define=LIBDIR=$(PREFIX)/lib --define=INCLUDEDIR=$(PREFIX)/include --define=PROTOBUF_INCLUDE_PATH=$(PREFIX)/include --cxxopt=-std=c++14 --host_cxxopt=-std=c++14 --config=dynamic_kernels --distinct_host_configuration=false
INFO: Found applicable config definition build:dynamic_kernels in file /home/ubuntu/Workspace/tensorflow/.bazelrc: --define=dynamic_loaded_kernels=true --copt=-DAUTOLOAD_DYNAMIC_KERNELS
DEBUG: /home/ubuntu/.cache/bazel/_bazel_ubuntu/ef409778dc6f6079679ade72bc957ee0/external/tf_runtime/third_party/cuda/dependencies.bzl:51:10: The following command will download NVIDIA proprietary software. By using the software you agree to comply with the terms of the license agreement that accompanies the software. If you do not agree to the terms of the license agreement, do not use the software.
INFO: Analyzed target //tensorflow/core/lib/io/zstd:zstd_test (1 packages loaded, 70 targets configured).
INFO: Found 1 test target...
ERROR: /home/ubuntu/Workspace/tensorflow/tensorflow/core/BUILD:1610:16: C++ compilation of rule '//tensorflow/core:framework_internal_impl' failed (Exit 1): gcc failed: error executing command
(cd /home/ubuntu/.cache/bazel/_bazel_ubuntu/ef409778dc6f6079679ade72bc957ee0/execroot/org_tensorflow && \
exec env - \
PATH=/home/ubuntu/.cache/bazelisk/downloads/bazelbuild/bazel-3.7.2-linux-x86_64/bin:/home/ubuntu/.vscode-server/bin/7db1a2b88f7557e0a43fec75b6ba7e50b3e9f77e/bin:/home/ubuntu/miniconda3/envs/tensorflow/bin:/home/ubuntu/miniconda3/condabin:/home/ubuntu/.vscode-server/bin/7db1a2b88f7557e0a43fec75b6ba7e50b3e9f77e/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin \
PWD=/proc/self/cwd \
PYTHON_BIN_PATH=/home/ubuntu/miniconda3/envs/tensorflow/bin/python \
PYTHON_LIB_PATH=/home/ubuntu/miniconda3/envs/tensorflow/lib/python3.9/site-packages \
TF2_BEHAVIOR=1 \
/usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections -fdata-sections '-std=c++11' -MD -MF bazel-out/k8-opt/bin/tensorflow/core/_objs/framework_internal_impl/events_writer.pic.d '-frandom-seed=bazel-out/k8-opt/bin/tensorflow/core/_objs/framework_internal_impl/events_writer.pic.o' -fPIC -DHAVE_SYS_UIO_H -DTF_USE_SNAPPY -DEIGEN_MPL2_ONLY '-DEIGEN_MAX_ALIGN_BYTES=64' -iquote. -iquotebazel-out/k8-opt/bin -iquoteexternal/com_google_protobuf -iquotebazel-out/k8-opt/bin/external/com_google_protobuf -iquoteexternal/eigen_archive -iquotebazel-out/k8-opt/bin/external/eigen_archive -iquoteexternal/com_google_absl -iquotebazel-out/k8-opt/bin/external/com_google_absl -iquoteexternal/nsync -iquotebazel-out/k8-opt/bin/external/nsync -iquoteexternal/gif -iquotebazel-out/k8-opt/bin/external/gif -iquoteexternal/libjpeg_turbo -iquotebazel-out/k8-opt/bin/external/libjpeg_turbo -iquoteexternal/com_googlesource_code_re2 -iquotebazel-out/k8-opt/bin/external/com_googlesource_code_re2 -iquoteexternal/farmhash_archive -iquotebazel-out/k8-opt/bin/external/farmhash_archive -iquoteexternal/fft2d -iquotebazel-out/k8-opt/bin/external/fft2d -iquoteexternal/highwayhash -iquotebazel-out/k8-opt/bin/external/highwayhash -iquoteexternal/zlib -iquotebazel-out/k8-opt/bin/external/zlib -iquoteexternal/double_conversion -iquotebazel-out/k8-opt/bin/external/double_conversion -iquoteexternal/snappy -iquotebazel-out/k8-opt/bin/external/snappy -isystem external/com_google_protobuf/src -isystem bazel-out/k8-opt/bin/external/com_google_protobuf/src -isystem third_party/eigen3/mkl_include -isystem bazel-out/k8-opt/bin/third_party/eigen3/mkl_include -isystem external/eigen_archive -isystem bazel-out/k8-opt/bin/external/eigen_archive -isystem external/nsync/public -isystem bazel-out/k8-opt/bin/external/nsync/public -isystem external/gif -isystem bazel-out/k8-opt/bin/external/gif -isystem external/farmhash_archive/src -isystem bazel-out/k8-opt/bin/external/farmhash_archive/src -isystem external/zlib -isystem bazel-out/k8-opt/bin/external/zlib -isystem external/double_conversion -isystem bazel-out/k8-opt/bin/external/double_conversion -w -DAUTOLOAD_DYNAMIC_KERNELS '-std=c++14' -DEIGEN_AVOID_STL_ARRAY -Iexternal/gemmlowp -Wno-sign-compare '-ftemplate-depth=900' -fno-exceptions '-DTENSORFLOW_USE_XLA=1' -DINTEL_MKL -msse3 -pthread '-DTENSORFLOW_USE_XLA=1' '-DINTEL_MKL=1' -fno-canonical-system-headers -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -c tensorflow/core/util/events_writer.cc -o bazel-out/k8-opt/bin/tensorflow/core/_objs/framework_internal_impl/events_writer.pic.o)
Execution platform: @local_execution_config_platform//:platform
In file included from ./tensorflow/core/lib/io/record_writer.h:29:0,
from ./tensorflow/core/util/events_writer.h:23,
from tensorflow/core/util/events_writer.cc:16:
./tensorflow/core/lib/io/zstd/zstd_outputbuffer.h:19:10: fatal error: zstd.h: No such file or directory
#include <zstd.h>
^~~~~~~~
compilation terminated.
Target //tensorflow/core/lib/io/zstd:zstd_test failed to build
INFO: Elapsed time: 0.926s, Critical Path: 0.28s
INFO: 5 processes: 5 internal.
FAILED: Build did NOT complete successfully
//tensorflow/core/lib/io/zstd:zstd_test FAILED TO BUILD
FAILED: Build did NOT complete successfully
```
And it did happen before, so I just went into the `BUILD` file that the error references, and look if there is somewhere I might have missed a dependency somehow. I have yet to find the real source of the dependency, and I have gone through every possible place where I should have put my dependencies.
**Provide the exact sequence of commands / steps that you executed before running into the problem**
Clone my fork and go to my branch, with commit sha `9d7b6a5e223c59d8d9687ce128dd1ebc5a6ab908` or `build-issue-adrian-compression-r2.6`:
```
git clone https://github.com/IAL32/tensorflow
git checkout 9d7b6a5e223c59d8d9687ce128dd1ebc5a6ab908
```
Build from source however you like, and execute the test:
```
bazel test //tensorflow/core/lib/io/zstd:zstd_test
```
You should now see the error above.
Finally, if you revert the last commit, which changes `tensorflow/core/lib/io/record_writer.cc` and `tensorflow/core/lib/io/record_writer.h` to accommodate my `ZSTD` class, launching the test again works fine.
An overview of the stuff I have changed can be found here: https://github.com/tensorflow/tensorflow/compare/r2.6...IAL32:build-issue-adrian-compression-r2.6?expand=1
I believe this can be solved by including some header as a dependency, but I have been struggling to understand **where** exactly.
Thanks for any help!
|
non_infrastructure
|
cannot find real source of dependency no such file or directory error when building from scratch system information os platform and distribution e g linux ubuntu ubuntu tensorflow installed from source or binary source tensorflow version python version installed using virtualenv pip conda miniconda bazel version if compiling from source gcc compiler version if compiling from source describe the problem i am currently working on a relatively large pr to don t want to focus on why just now for tfrecordwriter which currently supports zlib and gzip to introduce my changes i am of course writing some tests and as i am working my way up to the dependencies chain i have stumbled upon this error tensorflow ubuntu tensorflow compression build workspace tensorflow bazel test tensorflow core lib io zstd zstd test test filter verbose failures info options provided by the client inherited common options isatty terminal columns info reading rc options for test from home ubuntu workspace tensorflow bazelrc inherited common options experimental repo remote exec info reading rc options for test from home ubuntu workspace tensorflow bazelrc inherited build options define framework shared object true java toolchain tf toolchains toolchains java tf java toolchain host java toolchain tf toolchains toolchains java tf java toolchain define use fast cpp protos true define allow oversize protos true spawn strategy standalone c opt announce rc define grpc no ares true noincompatible remove legacy whole archive enable platform specific config define with xla support true config short logs config define no aws support true define no hdfs support true info reading rc options for test from home ubuntu workspace tensorflow tf configure bazelrc inherited build options action env python bin path home ubuntu envs tensorflow bin python action env python lib path home ubuntu envs tensorflow lib site packages python path home ubuntu envs tensorflow bin python info reading rc options for test from home ubuntu workspace tensorflow tf configure bazelrc test options flaky test attempts test size filters small medium info found applicable config definition build short logs in file home ubuntu workspace tensorflow bazelrc output filter dont match anything info found applicable config definition build in file home ubuntu workspace tensorflow bazelrc define tf api version action env behavior info found applicable config definition test in file home ubuntu workspace tensorflow tf configure bazelrc test tag filters benchmark test no oss gpu oss serial build tag filters benchmark test no oss gpu info found applicable config definition build linux in file home ubuntu workspace tensorflow bazelrc copt w host copt w define prefix usr define libdir prefix lib define includedir prefix include define protobuf include path prefix include cxxopt std c host cxxopt std c config dynamic kernels distinct host configuration false info found applicable config definition build dynamic kernels in file home ubuntu workspace tensorflow bazelrc define dynamic loaded kernels true copt dautoload dynamic kernels debug home ubuntu cache bazel bazel ubuntu external tf runtime third party cuda dependencies bzl the following command will download nvidia proprietary software by using the software you agree to comply with the terms of the license agreement that accompanies the software if you do not agree to the terms of the license agreement do not use the software info analyzed target tensorflow core lib io zstd zstd test packages loaded targets configured info found test target error home ubuntu workspace tensorflow tensorflow core build c compilation of rule tensorflow core framework internal impl failed exit gcc failed error executing command cd home ubuntu cache bazel bazel ubuntu execroot org tensorflow exec env path home ubuntu cache bazelisk downloads bazelbuild bazel linux bin home ubuntu vscode server bin bin home ubuntu envs tensorflow bin home ubuntu condabin home ubuntu vscode server bin bin usr local sbin usr local bin usr sbin usr bin sbin bin usr games usr local games snap bin pwd proc self cwd python bin path home ubuntu envs tensorflow bin python python lib path home ubuntu envs tensorflow lib site packages behavior usr bin gcc u fortify source fstack protector wall wunused but set parameter wno free nonheap object fno omit frame pointer d fortify source dndebug ffunction sections fdata sections std c md mf bazel out opt bin tensorflow core objs framework internal impl events writer pic d frandom seed bazel out opt bin tensorflow core objs framework internal impl events writer pic o fpic dhave sys uio h dtf use snappy deigen only deigen max align bytes iquote iquotebazel out opt bin iquoteexternal com google protobuf iquotebazel out opt bin external com google protobuf iquoteexternal eigen archive iquotebazel out opt bin external eigen archive iquoteexternal com google absl iquotebazel out opt bin external com google absl iquoteexternal nsync iquotebazel out opt bin external nsync iquoteexternal gif iquotebazel out opt bin external gif iquoteexternal libjpeg turbo iquotebazel out opt bin external libjpeg turbo iquoteexternal com googlesource code iquotebazel out opt bin external com googlesource code iquoteexternal farmhash archive iquotebazel out opt bin external farmhash archive iquoteexternal iquotebazel out opt bin external iquoteexternal highwayhash iquotebazel out opt bin external highwayhash iquoteexternal zlib iquotebazel out opt bin external zlib iquoteexternal double conversion iquotebazel out opt bin external double conversion iquoteexternal snappy iquotebazel out opt bin external snappy isystem external com google protobuf src isystem bazel out opt bin external com google protobuf src isystem third party mkl include isystem bazel out opt bin third party mkl include isystem external eigen archive isystem bazel out opt bin external eigen archive isystem external nsync public isystem bazel out opt bin external nsync public isystem external gif isystem bazel out opt bin external gif isystem external farmhash archive src isystem bazel out opt bin external farmhash archive src isystem external zlib isystem bazel out opt bin external zlib isystem external double conversion isystem bazel out opt bin external double conversion w dautoload dynamic kernels std c deigen avoid stl array iexternal gemmlowp wno sign compare ftemplate depth fno exceptions dtensorflow use xla dintel mkl pthread dtensorflow use xla dintel mkl fno canonical system headers wno builtin macro redefined d date redacted d timestamp redacted d time redacted c tensorflow core util events writer cc o bazel out opt bin tensorflow core objs framework internal impl events writer pic o execution platform local execution config platform platform in file included from tensorflow core lib io record writer h from tensorflow core util events writer h from tensorflow core util events writer cc tensorflow core lib io zstd zstd outputbuffer h fatal error zstd h no such file or directory include compilation terminated target tensorflow core lib io zstd zstd test failed to build info elapsed time critical path info processes internal failed build did not complete successfully tensorflow core lib io zstd zstd test failed to build failed build did not complete successfully and it did happen before so i just went into the build file that the error references and look if there is somewhere i might have missed a dependency somehow i have yet to find the real source of the dependency and i have gone through every possible place where i should have put my dependencies provide the exact sequence of commands steps that you executed before running into the problem clone my fork and go to my branch with commit sha or build issue adrian compression git clone git checkout build from source however you like and execute the test bazel test tensorflow core lib io zstd zstd test you should now see the error above finally if you revert the last commit which changes tensorflow core lib io record writer cc and tensorflow core lib io record writer h to accommodate my zstd class launching the test again works fine an overview of the stuff i have changed can be found here i believe this can be solved by including some header as a dependency but i have been struggling to understand where exactly thanks for any help
| 0
|
576,571
| 17,090,120,187
|
IssuesEvent
|
2021-07-08 16:18:50
|
Sage-Bionetworks/rocc-app
|
https://api.github.com/repos/Sage-Bionetworks/rocc-app
|
closed
|
Create a challenge page
|
Priority: Medium
|
Create a component in the folder `pages` to display detailed information about a challenge.
- Path to a challenge should be `rocc.org/challenges/{challendId}`
- Update the challenge preview to redirect the user to the corresponding challenge page when clicking on the card.
|
1.0
|
Create a challenge page - Create a component in the folder `pages` to display detailed information about a challenge.
- Path to a challenge should be `rocc.org/challenges/{challendId}`
- Update the challenge preview to redirect the user to the corresponding challenge page when clicking on the card.
|
non_infrastructure
|
create a challenge page create a component in the folder pages to display detailed information about a challenge path to a challenge should be rocc org challenges challendid update the challenge preview to redirect the user to the corresponding challenge page when clicking on the card
| 0
|
156,282
| 5,966,603,250
|
IssuesEvent
|
2017-05-30 14:21:03
|
open-io/oio-sds
|
https://api.github.com/repos/open-io/oio-sds
|
opened
|
Improve meta1 repartition (slots, location mask)
|
enhancement language:python priority:2
|
The meta1 prefix mapping created by `openio directory bootstrap` ensure that all copies of a database land on different locations, but custom service pool options (slots, location mask) are not taken into account.
We must create a new [*strategy*](https://github.com/open-io/oio-sds/blob/4.0.0.b0/oio/directory/meta0.py#L225) (in addition to `find_services_random` and `find_services_less_bases`) that calls oio-proxy's load balancer and takes all options into account.
|
1.0
|
Improve meta1 repartition (slots, location mask) - The meta1 prefix mapping created by `openio directory bootstrap` ensure that all copies of a database land on different locations, but custom service pool options (slots, location mask) are not taken into account.
We must create a new [*strategy*](https://github.com/open-io/oio-sds/blob/4.0.0.b0/oio/directory/meta0.py#L225) (in addition to `find_services_random` and `find_services_less_bases`) that calls oio-proxy's load balancer and takes all options into account.
|
non_infrastructure
|
improve repartition slots location mask the prefix mapping created by openio directory bootstrap ensure that all copies of a database land on different locations but custom service pool options slots location mask are not taken into account we must create a new in addition to find services random and find services less bases that calls oio proxy s load balancer and takes all options into account
| 0
|
49,313
| 6,021,941,406
|
IssuesEvent
|
2017-06-07 19:53:21
|
Wangscape/Wangscape
|
https://api.github.com/repos/Wangscape/Wangscape
|
closed
|
Support OSX in Travis
|
ci testing
|
~Probably using Docker?~
- [x] Compile without errors.
- [x] Pass all tests.
|
1.0
|
Support OSX in Travis - ~Probably using Docker?~
- [x] Compile without errors.
- [x] Pass all tests.
|
non_infrastructure
|
support osx in travis probably using docker compile without errors pass all tests
| 0
|
827,258
| 31,762,562,690
|
IssuesEvent
|
2023-09-12 06:38:03
|
oceanbase/odc
|
https://api.github.com/repos/oceanbase/odc
|
opened
|
[Bug]: There is a syntax error when creating a report, but the sql is executed successfully
|
type-bug priority-medium
|
### ODC version
ODC421
### OB version
Oceanbase4.1.0
### What happened?
There is a syntax error when creating a report, but the sql is executed successfully

### What did you expect to happen?
execution succeed
### How can we reproduce it (as minimally and precisely as possible)?
Execute the following sql
create table if not exists `t1_g` (
`id` int(11),
`name` varchar(18),
`g` geometry not null
)
default charset=utf8mb4
default collate=utf8mb4_general_ci;
create index `t1_g_idx1` on `t1_g` (`g` ASC);
### Anything else we need to know?
_No response_
### Cloud
_No response_
|
1.0
|
[Bug]: There is a syntax error when creating a report, but the sql is executed successfully - ### ODC version
ODC421
### OB version
Oceanbase4.1.0
### What happened?
There is a syntax error when creating a report, but the sql is executed successfully

### What did you expect to happen?
execution succeed
### How can we reproduce it (as minimally and precisely as possible)?
Execute the following sql
create table if not exists `t1_g` (
`id` int(11),
`name` varchar(18),
`g` geometry not null
)
default charset=utf8mb4
default collate=utf8mb4_general_ci;
create index `t1_g_idx1` on `t1_g` (`g` ASC);
### Anything else we need to know?
_No response_
### Cloud
_No response_
|
non_infrastructure
|
there is a syntax error when creating a report but the sql is executed successfully odc version ob version what happened there is a syntax error when creating a report but the sql is executed successfully what did you expect to happen execution succeed how can we reproduce it as minimally and precisely as possible execute the following sql create table if not exists g id int name varchar g geometry not null default charset default collate general ci create index g on g g asc anything else we need to know no response cloud no response
| 0
|
29,778
| 24,261,761,672
|
IssuesEvent
|
2022-09-27 23:55:19
|
APSIMInitiative/ApsimX
|
https://api.github.com/repos/APSIMInitiative/ApsimX
|
closed
|
Remembering expanded state on node collapse/expand in ExplorerPresenter
|
interface/infrastructure question stale
|
Does the UI always remember the user expanded state of tree nodes after you have closed a node? Just as the last state is remembered and restored when you open the simulation, are the user expanded states descendant nodes remembered when you collapse a node? It would be super useful if the previous state was saved so that when you open a node it expands to the previous user defined state of all descendants rather than just showing each child collapsed.
This would be really valuable when working with large nested tree structures such as a CLEM grazing simulation as you often want to close off a branch to clear up the screen, but later when expanding the branch would like it how it previously was expanded.
Also when is the expanded state saved? It seems that unless you close the application the node expanded states are not saved, and don't save with the Save menu option. I have been unable to understand it from code, but seems the expandedRows.ForEach in refresh() needs to be called in Expand() as well.
|
1.0
|
Remembering expanded state on node collapse/expand in ExplorerPresenter - Does the UI always remember the user expanded state of tree nodes after you have closed a node? Just as the last state is remembered and restored when you open the simulation, are the user expanded states descendant nodes remembered when you collapse a node? It would be super useful if the previous state was saved so that when you open a node it expands to the previous user defined state of all descendants rather than just showing each child collapsed.
This would be really valuable when working with large nested tree structures such as a CLEM grazing simulation as you often want to close off a branch to clear up the screen, but later when expanding the branch would like it how it previously was expanded.
Also when is the expanded state saved? It seems that unless you close the application the node expanded states are not saved, and don't save with the Save menu option. I have been unable to understand it from code, but seems the expandedRows.ForEach in refresh() needs to be called in Expand() as well.
|
infrastructure
|
remembering expanded state on node collapse expand in explorerpresenter does the ui always remember the user expanded state of tree nodes after you have closed a node just as the last state is remembered and restored when you open the simulation are the user expanded states descendant nodes remembered when you collapse a node it would be super useful if the previous state was saved so that when you open a node it expands to the previous user defined state of all descendants rather than just showing each child collapsed this would be really valuable when working with large nested tree structures such as a clem grazing simulation as you often want to close off a branch to clear up the screen but later when expanding the branch would like it how it previously was expanded also when is the expanded state saved it seems that unless you close the application the node expanded states are not saved and don t save with the save menu option i have been unable to understand it from code but seems the expandedrows foreach in refresh needs to be called in expand as well
| 1
|
33,592
| 27,611,828,835
|
IssuesEvent
|
2023-03-09 16:28:51
|
dotnet/performance
|
https://api.github.com/repos/dotnet/performance
|
closed
|
[main][runtime] iOS app builds are failing
|
lab-infrastructure pipeline blocker impact test coverage
|
iOS app builds for mono testing are failing to build resulting in the iOS testing jobs not being run. An example failure is available here: https://dev.azure.com/dnceng/internal/_build/results?buildId=2130635&view=results under 'ios-arm64 release iOSMono'-> 'Build HelloiOS AOT Sample...'. The main failure seems to be 'ld: symbol(s) not found for architecture arm64', with full output available following the above steps (too large to put in github comment).
cc: @SamMonoRT, are you aware of any recent changes that may be causing this or someone that would have an idea?
|
1.0
|
[main][runtime] iOS app builds are failing - iOS app builds for mono testing are failing to build resulting in the iOS testing jobs not being run. An example failure is available here: https://dev.azure.com/dnceng/internal/_build/results?buildId=2130635&view=results under 'ios-arm64 release iOSMono'-> 'Build HelloiOS AOT Sample...'. The main failure seems to be 'ld: symbol(s) not found for architecture arm64', with full output available following the above steps (too large to put in github comment).
cc: @SamMonoRT, are you aware of any recent changes that may be causing this or someone that would have an idea?
|
infrastructure
|
ios app builds are failing ios app builds for mono testing are failing to build resulting in the ios testing jobs not being run an example failure is available here under ios release iosmono build helloios aot sample the main failure seems to be ld symbol s not found for architecture with full output available following the above steps too large to put in github comment cc sammonort are you aware of any recent changes that may be causing this or someone that would have an idea
| 1
|
34,701
| 30,296,627,781
|
IssuesEvent
|
2023-07-09 23:09:41
|
shiftkey/desktop
|
https://api.github.com/repos/shiftkey/desktop
|
closed
|
pre-requisites for experimental ARM build
|
help wanted infrastructure
|
There have been several requests for shipping something for ARM versions:
- #86
- #245
I'm going to use this issue to capture blockers for this:
- embedded Git - https://github.com/desktop/dugite-native/pull/315 - need to figure out how to build Git from source locally and slipstream the packaged bits into a build
- native dependencies - NodeJS does have support for building native modules for ARM64, but I haven't explored this
- local packaging - there's manual steps to do this [here](https://github.com/shiftkey/desktop/blob/linux/docs/contributing/building-arm64.md) but we'd need to update this to support scripted builds
- continuous integration builds - I want to avoid shipping releases to be tied to a specific machine - how can we avoid this?
Please upvote this if you're interested, or comment if you have any insights to contribute about this work.
|
1.0
|
pre-requisites for experimental ARM build - There have been several requests for shipping something for ARM versions:
- #86
- #245
I'm going to use this issue to capture blockers for this:
- embedded Git - https://github.com/desktop/dugite-native/pull/315 - need to figure out how to build Git from source locally and slipstream the packaged bits into a build
- native dependencies - NodeJS does have support for building native modules for ARM64, but I haven't explored this
- local packaging - there's manual steps to do this [here](https://github.com/shiftkey/desktop/blob/linux/docs/contributing/building-arm64.md) but we'd need to update this to support scripted builds
- continuous integration builds - I want to avoid shipping releases to be tied to a specific machine - how can we avoid this?
Please upvote this if you're interested, or comment if you have any insights to contribute about this work.
|
infrastructure
|
pre requisites for experimental arm build there have been several requests for shipping something for arm versions i m going to use this issue to capture blockers for this embedded git need to figure out how to build git from source locally and slipstream the packaged bits into a build native dependencies nodejs does have support for building native modules for but i haven t explored this local packaging there s manual steps to do this but we d need to update this to support scripted builds continuous integration builds i want to avoid shipping releases to be tied to a specific machine how can we avoid this please upvote this if you re interested or comment if you have any insights to contribute about this work
| 1
|
33,517
| 27,541,163,261
|
IssuesEvent
|
2023-03-07 08:38:35
|
ministryofjustice/data-platform
|
https://api.github.com/repos/ministryofjustice/data-platform
|
closed
|
🔧 Create Simulated Data Producer infrastructure
|
Data Platform Core Infrastructure
|
Create RDS instance to be used for simulated-data-producer
Relates to #172
|
1.0
|
🔧 Create Simulated Data Producer infrastructure - Create RDS instance to be used for simulated-data-producer
Relates to #172
|
infrastructure
|
🔧 create simulated data producer infrastructure create rds instance to be used for simulated data producer relates to
| 1
|
14,122
| 10,617,073,766
|
IssuesEvent
|
2019-10-12 16:26:18
|
forseti-security/forseti-security
|
https://api.github.com/repos/forseti-security/forseti-security
|
closed
|
Google Cloud Build CI\CD
|
issue-review: future-milestone module: infrastructure priority: p3 triaged: yes
|
Ref. https://cloud.google.com/cloud-build/
Modify cloudbuild.yaml to provide fully automated CI\CD triggered by git code change
-Deploy Forseti and cloud sql proxy images to a Container Optimized OS (cos)
(Once its working on cos, possibly move onto GKE)
-Set up bucket
-Setup Cloud SQL
etc
|
1.0
|
Google Cloud Build CI\CD - Ref. https://cloud.google.com/cloud-build/
Modify cloudbuild.yaml to provide fully automated CI\CD triggered by git code change
-Deploy Forseti and cloud sql proxy images to a Container Optimized OS (cos)
(Once its working on cos, possibly move onto GKE)
-Set up bucket
-Setup Cloud SQL
etc
|
infrastructure
|
google cloud build ci cd ref modify cloudbuild yaml to provide fully automated ci cd triggered by git code change deploy forseti and cloud sql proxy images to a container optimized os cos once its working on cos possibly move onto gke set up bucket setup cloud sql etc
| 1
|
49,421
| 7,503,918,637
|
IssuesEvent
|
2018-04-10 00:35:10
|
archesproject/arches-docs
|
https://api.github.com/repos/archesproject/arches-docs
|
closed
|
Mention requirement for UTF8 encoding for csv's in docs
|
Subject: Documentation
|
_From @adamlodge on January 23, 2018 18:28_
Noticed that there is no mention of the requirement that csv slated for upload to Arches conform to UTF8 encoding. Would be good to identify that as a technical requirement in the docs and illustrate an example workflow of how to force that encoding on a file that may have some other encoding.
_Copied from original issue: archesproject/arches#2971_
|
1.0
|
Mention requirement for UTF8 encoding for csv's in docs - _From @adamlodge on January 23, 2018 18:28_
Noticed that there is no mention of the requirement that csv slated for upload to Arches conform to UTF8 encoding. Would be good to identify that as a technical requirement in the docs and illustrate an example workflow of how to force that encoding on a file that may have some other encoding.
_Copied from original issue: archesproject/arches#2971_
|
non_infrastructure
|
mention requirement for encoding for csv s in docs from adamlodge on january noticed that there is no mention of the requirement that csv slated for upload to arches conform to encoding would be good to identify that as a technical requirement in the docs and illustrate an example workflow of how to force that encoding on a file that may have some other encoding copied from original issue archesproject arches
| 0
|
21,230
| 16,664,990,451
|
IssuesEvent
|
2021-06-07 00:59:40
|
godotengine/godot
|
https://api.github.com/repos/godotengine/godot
|
closed
|
[3D view] selected object is not obvious
|
enhancement topic:editor usability
|
**Operating system or device, Godot version, GPU Model and driver (if graphics related):**
Godot 3
**Issue description:**
<!-- What happened, and what was expected. -->
While working with 3d models it is quite hard to distinguish selected object from the rest, because currently only bounding box is showed; I suggest showing outline for the selected 3d object.
As an addition, I also propose adding functionality to set mimap level for the background, so that in viewport the user can see blurred version- that helps to focus on the 3d objects by reducing visual noise in the scene.


|
True
|
[3D view] selected object is not obvious - **Operating system or device, Godot version, GPU Model and driver (if graphics related):**
Godot 3
**Issue description:**
<!-- What happened, and what was expected. -->
While working with 3d models it is quite hard to distinguish selected object from the rest, because currently only bounding box is showed; I suggest showing outline for the selected 3d object.
As an addition, I also propose adding functionality to set mimap level for the background, so that in viewport the user can see blurred version- that helps to focus on the 3d objects by reducing visual noise in the scene.


|
non_infrastructure
|
selected object is not obvious operating system or device godot version gpu model and driver if graphics related godot issue description while working with models it is quite hard to distinguish selected object from the rest because currently only bounding box is showed i suggest showing outline for the selected object as an addition i also propose adding functionality to set mimap level for the background so that in viewport the user can see blurred version that helps to focus on the objects by reducing visual noise in the scene
| 0
|
42,982
| 11,134,225,504
|
IssuesEvent
|
2019-12-20 11:12:38
|
jrasell/sherpa
|
https://api.github.com/repos/jrasell/sherpa
|
closed
|
Docker Hub wrong version - 0.4.0
|
area/build kind/bug
|
**Describe the bug**
The latest version pushed on Docker Hub does not seem to be the right one.
The interface displays version `0.3.0` instead of `0.4.0` and the digest (`a73972f6f7bd`) is the same on the [Docker Hub](https://hub.docker.com/r/jrasell/sherpa/tags) for both versions.
[Build](https://hub.docker.com/layers/jrasell/sherpa/0.4.0/images/sha256-a73972f6f7bd77f495fc8286fa6913a90053486542f5c2743964319ea400dfee) also show version `0.3.0`.
Seem likes you forgot to bump up version in the [Dockerfile](https://github.com/jrasell/sherpa/blob/master/Dockerfile#L6).
**To reproduce**
```bash
docker run -it --rm -p 8000:8000 jrasell/sherpa:0.4.0 server --bind-addr 0.0.0.0 --ui
```
Hit http://127.0.0.1:8000/ui, version `0.3.0` is displayed.
**Expected behavior**
Sherpa at version `0.4.0`.
|
1.0
|
Docker Hub wrong version - 0.4.0 - **Describe the bug**
The latest version pushed on Docker Hub does not seem to be the right one.
The interface displays version `0.3.0` instead of `0.4.0` and the digest (`a73972f6f7bd`) is the same on the [Docker Hub](https://hub.docker.com/r/jrasell/sherpa/tags) for both versions.
[Build](https://hub.docker.com/layers/jrasell/sherpa/0.4.0/images/sha256-a73972f6f7bd77f495fc8286fa6913a90053486542f5c2743964319ea400dfee) also show version `0.3.0`.
Seem likes you forgot to bump up version in the [Dockerfile](https://github.com/jrasell/sherpa/blob/master/Dockerfile#L6).
**To reproduce**
```bash
docker run -it --rm -p 8000:8000 jrasell/sherpa:0.4.0 server --bind-addr 0.0.0.0 --ui
```
Hit http://127.0.0.1:8000/ui, version `0.3.0` is displayed.
**Expected behavior**
Sherpa at version `0.4.0`.
|
non_infrastructure
|
docker hub wrong version describe the bug the latest version pushed on docker hub does not seem to be the right one the interface displays version instead of and the digest is the same on the for both versions also show version seem likes you forgot to bump up version in the to reproduce bash docker run it rm p jrasell sherpa server bind addr ui hit version is displayed expected behavior sherpa at version
| 0
|
27,215
| 21,470,149,350
|
IssuesEvent
|
2022-04-26 08:47:14
|
GraphiteEditor/Graphite
|
https://api.github.com/repos/GraphiteEditor/Graphite
|
closed
|
Migrate/upgrade to Vue CLI 5
|
Infrastructure In-Progress Dependencies Web P-High
|
It's finally out! This should remove a lot of security alerts for ancient transitive dependencies and let us finally(!!!) use the optional chaining operator in JS.
https://github.com/vuejs/vue-cli/releases/tag/v5.0.1
Lot of thorough testing should go into ensuring nothing breaks in the application, build process, or dev environment.
|
1.0
|
Migrate/upgrade to Vue CLI 5 - It's finally out! This should remove a lot of security alerts for ancient transitive dependencies and let us finally(!!!) use the optional chaining operator in JS.
https://github.com/vuejs/vue-cli/releases/tag/v5.0.1
Lot of thorough testing should go into ensuring nothing breaks in the application, build process, or dev environment.
|
infrastructure
|
migrate upgrade to vue cli it s finally out this should remove a lot of security alerts for ancient transitive dependencies and let us finally use the optional chaining operator in js lot of thorough testing should go into ensuring nothing breaks in the application build process or dev environment
| 1
|
20,117
| 3,793,350,251
|
IssuesEvent
|
2016-03-22 13:40:11
|
qc1iu/tiger-comp
|
https://api.github.com/repos/qc1iu/tiger-comp
|
reopened
|
Support GC status accounting
|
feature gc test
|
Add feature to record the GC behaviors like:
- new object times
- new array times
- the spaces allocated
- the spaces collected
Use these information, we can do unit test for GC.
|
1.0
|
Support GC status accounting - Add feature to record the GC behaviors like:
- new object times
- new array times
- the spaces allocated
- the spaces collected
Use these information, we can do unit test for GC.
|
non_infrastructure
|
support gc status accounting add feature to record the gc behaviors like new object times new array times the spaces allocated the spaces collected use these information we can do unit test for gc
| 0
|
181,821
| 21,664,450,490
|
IssuesEvent
|
2022-05-07 01:23:29
|
n-devs/reactIOTEAU
|
https://api.github.com/repos/n-devs/reactIOTEAU
|
closed
|
CVE-2015-9251 (Medium) detected in multiple libraries - autoclosed
|
security vulnerability
|
## CVE-2015-9251 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-2.1.4.min.js</b>, <b>jquery-1.9.1.js</b>, <b>jquery-1.7.1.min.js</b></p></summary>
<p>
<details><summary><b>jquery-2.1.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/reactIOTEAU/IOT-v0.2/node_modules/js-base64/test/index.html</p>
<p>Path to vulnerable library: /reactIOTEAU/IOT-v0.2/node_modules/js-base64/test/index.html,/reactIOTEAU/IOT-v0.1/node_modules/js-base64/.attic/test-moment/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.1.4.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.9.1.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/reactIOTEAU/IOT-v0.1/node_modules/reactstrap-tether/examples/tooltip/index.html</p>
<p>Path to vulnerable library: /reactIOTEAU/IOT-v0.1/node_modules/reactstrap-tether/examples/tooltip/../resources/js/jquery.js,/reactIOTEAU/IOT-v0.2/node_modules/reactstrap-tether/examples/facebook/../resources/js/jquery.js,/reactIOTEAU/IOT-v0.2/node_modules/reactstrap-tether/examples/chosen/../resources/js/jquery.js,/reactIOTEAU/IOT-v0.1/node_modules/reactstrap-tether/examples/facebook/../resources/js/jquery.js,/reactIOTEAU/IOT-v0.1/node_modules/reactstrap-tether/examples/chosen/../resources/js/jquery.js,/reactIOTEAU/IOT-v0.2/node_modules/reactstrap-tether/examples/tooltip/../resources/js/jquery.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.9.1.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/reactIOTEAU/IOT-v0.1/node_modules/sockjs/examples/echo/index.html</p>
<p>Path to vulnerable library: /reactIOTEAU/IOT-v0.1/node_modules/sockjs/examples/echo/index.html,/reactIOTEAU/IOT-v0.2/node_modules/sockjs/examples/express/index.html,/reactIOTEAU/IOT-v0.1/node_modules/sockjs/examples/hapi/html/index.html,/reactIOTEAU/IOT-v0.2/node_modules/sockjs/examples/hapi/html/index.html,/reactIOTEAU/IOT-v0.2/node_modules/vm-browserify/example/run/index.html,/reactIOTEAU/IOT-v0.2/node_modules/sockjs/examples/echo/index.html,/reactIOTEAU/IOT-v0.2/node_modules/sockjs/examples/multiplex/index.html,/reactIOTEAU/IOT-v0.2/node_modules/sockjs/examples/express-3.x/index.html,/reactIOTEAU/IOT-v0.1/node_modules/sockjs/examples/express-3.x/index.html,/reactIOTEAU/IOT-v0.1/node_modules/sockjs/examples/multiplex/index.html,/reactIOTEAU/IOT-v0.1/node_modules/vm-browserify/example/run/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/n-psk/reactIOTEAU/commit/765abb9ca864862e1fdb3523b5880dd76a73a295">765abb9ca864862e1fdb3523b5880dd76a73a295</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-9251>CVE-2015-9251</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - v3.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2015-9251 (Medium) detected in multiple libraries - autoclosed - ## CVE-2015-9251 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Libraries - <b>jquery-2.1.4.min.js</b>, <b>jquery-1.9.1.js</b>, <b>jquery-1.7.1.min.js</b></p></summary>
<p>
<details><summary><b>jquery-2.1.4.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/2.1.4/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/reactIOTEAU/IOT-v0.2/node_modules/js-base64/test/index.html</p>
<p>Path to vulnerable library: /reactIOTEAU/IOT-v0.2/node_modules/js-base64/test/index.html,/reactIOTEAU/IOT-v0.1/node_modules/js-base64/.attic/test-moment/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-2.1.4.min.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.9.1.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.9.1/jquery.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/reactIOTEAU/IOT-v0.1/node_modules/reactstrap-tether/examples/tooltip/index.html</p>
<p>Path to vulnerable library: /reactIOTEAU/IOT-v0.1/node_modules/reactstrap-tether/examples/tooltip/../resources/js/jquery.js,/reactIOTEAU/IOT-v0.2/node_modules/reactstrap-tether/examples/facebook/../resources/js/jquery.js,/reactIOTEAU/IOT-v0.2/node_modules/reactstrap-tether/examples/chosen/../resources/js/jquery.js,/reactIOTEAU/IOT-v0.1/node_modules/reactstrap-tether/examples/facebook/../resources/js/jquery.js,/reactIOTEAU/IOT-v0.1/node_modules/reactstrap-tether/examples/chosen/../resources/js/jquery.js,/reactIOTEAU/IOT-v0.2/node_modules/reactstrap-tether/examples/tooltip/../resources/js/jquery.js</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.9.1.js** (Vulnerable Library)
</details>
<details><summary><b>jquery-1.7.1.min.js</b></p></summary>
<p>JavaScript library for DOM operations</p>
<p>Library home page: <a href="https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js">https://cdnjs.cloudflare.com/ajax/libs/jquery/1.7.1/jquery.min.js</a></p>
<p>Path to dependency file: /tmp/ws-scm/reactIOTEAU/IOT-v0.1/node_modules/sockjs/examples/echo/index.html</p>
<p>Path to vulnerable library: /reactIOTEAU/IOT-v0.1/node_modules/sockjs/examples/echo/index.html,/reactIOTEAU/IOT-v0.2/node_modules/sockjs/examples/express/index.html,/reactIOTEAU/IOT-v0.1/node_modules/sockjs/examples/hapi/html/index.html,/reactIOTEAU/IOT-v0.2/node_modules/sockjs/examples/hapi/html/index.html,/reactIOTEAU/IOT-v0.2/node_modules/vm-browserify/example/run/index.html,/reactIOTEAU/IOT-v0.2/node_modules/sockjs/examples/echo/index.html,/reactIOTEAU/IOT-v0.2/node_modules/sockjs/examples/multiplex/index.html,/reactIOTEAU/IOT-v0.2/node_modules/sockjs/examples/express-3.x/index.html,/reactIOTEAU/IOT-v0.1/node_modules/sockjs/examples/express-3.x/index.html,/reactIOTEAU/IOT-v0.1/node_modules/sockjs/examples/multiplex/index.html,/reactIOTEAU/IOT-v0.1/node_modules/vm-browserify/example/run/index.html</p>
<p>
Dependency Hierarchy:
- :x: **jquery-1.7.1.min.js** (Vulnerable Library)
</details>
<p>Found in HEAD commit: <a href="https://github.com/n-psk/reactIOTEAU/commit/765abb9ca864862e1fdb3523b5880dd76a73a295">765abb9ca864862e1fdb3523b5880dd76a73a295</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png' width=19 height=20> Vulnerability Details</summary>
<p>
jQuery before 3.0.0 is vulnerable to Cross-site Scripting (XSS) attacks when a cross-domain Ajax request is performed without the dataType option, causing text/javascript responses to be executed.
<p>Publish Date: 2018-01-18
<p>URL: <a href=https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-9251>CVE-2015-9251</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>6.1</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Network
- Attack Complexity: Low
- Privileges Required: None
- User Interaction: Required
- Scope: Changed
- Impact Metrics:
- Confidentiality Impact: Low
- Integrity Impact: Low
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2015-9251">https://nvd.nist.gov/vuln/detail/CVE-2015-9251</a></p>
<p>Release Date: 2018-01-18</p>
<p>Fix Resolution: jQuery - v3.0.0</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_infrastructure
|
cve medium detected in multiple libraries autoclosed cve medium severity vulnerability vulnerable libraries jquery min js jquery js jquery min js jquery min js javascript library for dom operations library home page a href path to dependency file tmp ws scm reactioteau iot node modules js test index html path to vulnerable library reactioteau iot node modules js test index html reactioteau iot node modules js attic test moment index html dependency hierarchy x jquery min js vulnerable library jquery js javascript library for dom operations library home page a href path to dependency file tmp ws scm reactioteau iot node modules reactstrap tether examples tooltip index html path to vulnerable library reactioteau iot node modules reactstrap tether examples tooltip resources js jquery js reactioteau iot node modules reactstrap tether examples facebook resources js jquery js reactioteau iot node modules reactstrap tether examples chosen resources js jquery js reactioteau iot node modules reactstrap tether examples facebook resources js jquery js reactioteau iot node modules reactstrap tether examples chosen resources js jquery js reactioteau iot node modules reactstrap tether examples tooltip resources js jquery js dependency hierarchy x jquery js vulnerable library jquery min js javascript library for dom operations library home page a href path to dependency file tmp ws scm reactioteau iot node modules sockjs examples echo index html path to vulnerable library reactioteau iot node modules sockjs examples echo index html reactioteau iot node modules sockjs examples express index html reactioteau iot node modules sockjs examples hapi html index html reactioteau iot node modules sockjs examples hapi html index html reactioteau iot node modules vm browserify example run index html reactioteau iot node modules sockjs examples echo index html reactioteau iot node modules sockjs examples multiplex index html reactioteau iot node modules sockjs examples express x index html reactioteau iot node modules sockjs examples express x index html reactioteau iot node modules sockjs examples multiplex index html reactioteau iot node modules vm browserify example run index html dependency hierarchy x jquery min js vulnerable library found in head commit a href vulnerability details jquery before is vulnerable to cross site scripting xss attacks when a cross domain ajax request is performed without the datatype option causing text javascript responses to be executed publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution jquery step up your open source security game with whitesource
| 0
|
130,540
| 10,617,607,032
|
IssuesEvent
|
2019-10-12 20:20:26
|
Vachok/ftpplus
|
https://api.github.com/repos/Vachok/ftpplus
|
closed
|
testAdUsersComps [D285]
|
Lowest TestQuality bug mint resolution_Fixed
|
Execute ActDirectoryCTRLTest::testAdUsersComps**testAdUsersComps**
*ActDirectoryCTRLTest*
*java.lang.NullPointerException*
|
1.0
|
testAdUsersComps [D285] - Execute ActDirectoryCTRLTest::testAdUsersComps**testAdUsersComps**
*ActDirectoryCTRLTest*
*java.lang.NullPointerException*
|
non_infrastructure
|
testaduserscomps execute actdirectoryctrltest testaduserscomps testaduserscomps actdirectoryctrltest java lang nullpointerexception
| 0
|
284,322
| 21,414,017,957
|
IssuesEvent
|
2022-04-22 09:07:27
|
alphagov/govuk-design-system
|
https://api.github.com/repos/alphagov/govuk-design-system
|
opened
|
Expand on not using placeholder text
|
documentation awaiting triage
|
## Related documentation
https://design-system.service.gov.uk/components/text-input/
## Suggestion
>All text inputs must have visible labels; placeholder text is not an acceptable replacement for a label as it vanishes when users start typing
This does not recommend against placeholders in general (for example to provide a hint or example) - possibly it should:
- placeholder text is low contrast
- placeholder text is not supported by all screen readers
https://www.deque.com/blog/accessible-forms-the-problem-with-placeholders/
## Evidence (where applicable)
Request on support
|
1.0
|
Expand on not using placeholder text - ## Related documentation
https://design-system.service.gov.uk/components/text-input/
## Suggestion
>All text inputs must have visible labels; placeholder text is not an acceptable replacement for a label as it vanishes when users start typing
This does not recommend against placeholders in general (for example to provide a hint or example) - possibly it should:
- placeholder text is low contrast
- placeholder text is not supported by all screen readers
https://www.deque.com/blog/accessible-forms-the-problem-with-placeholders/
## Evidence (where applicable)
Request on support
|
non_infrastructure
|
expand on not using placeholder text related documentation suggestion all text inputs must have visible labels placeholder text is not an acceptable replacement for a label as it vanishes when users start typing this does not recommend against placeholders in general for example to provide a hint or example possibly it should placeholder text is low contrast placeholder text is not supported by all screen readers evidence where applicable request on support
| 0
|
774,443
| 27,197,036,475
|
IssuesEvent
|
2023-02-20 06:30:20
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
web.whatsapp.com - see bug description
|
browser-firefox priority-critical engine-gecko
|
<!-- @browser: Firefox 110.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:109.0) Gecko/20100101 Firefox/110.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/118426 -->
**URL**: https://web.whatsapp.com/
**Browser / Version**: Firefox 110.0
**Operating System**: Windows 7
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: QR-code doesn't load
**Steps to Reproduce**:
Web version of WhatsApp stop to work. It is impossible to log-in because the site doesn't display the QR-code. There are error-messages related to websocket connections in the browser console.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
web.whatsapp.com - see bug description - <!-- @browser: Firefox 110.0 -->
<!-- @ua_header: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:109.0) Gecko/20100101 Firefox/110.0 -->
<!-- @reported_with: unknown -->
<!-- @public_url: https://github.com/webcompat/web-bugs/issues/118426 -->
**URL**: https://web.whatsapp.com/
**Browser / Version**: Firefox 110.0
**Operating System**: Windows 7
**Tested Another Browser**: Yes Chrome
**Problem type**: Something else
**Description**: QR-code doesn't load
**Steps to Reproduce**:
Web version of WhatsApp stop to work. It is impossible to log-in because the site doesn't display the QR-code. There are error-messages related to websocket connections in the browser console.
<details>
<summary>Browser Configuration</summary>
<ul>
<li>None</li>
</ul>
</details>
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_infrastructure
|
web whatsapp com see bug description url browser version firefox operating system windows tested another browser yes chrome problem type something else description qr code doesn t load steps to reproduce web version of whatsapp stop to work it is impossible to log in because the site doesn t display the qr code there are error messages related to websocket connections in the browser console browser configuration none from with ❤️
| 0
|
436,007
| 30,532,189,537
|
IssuesEvent
|
2023-07-19 14:55:21
|
redhat-developer/odo
|
https://api.github.com/repos/redhat-developer/odo
|
closed
|
invalid link in docs
|
kind/bug area/documentation
|
/kind bug
/area documentation
url: https://odo.dev/blog/odo-v3.11.0/#handling-imagename-in-image-component-as-a-selector
invalid link: See **[How odo handles image names](https://odo.dev/blog/docs/development/devfile#how-odo-handles-image-names)** for more details.
|
1.0
|
invalid link in docs - /kind bug
/area documentation
url: https://odo.dev/blog/odo-v3.11.0/#handling-imagename-in-image-component-as-a-selector
invalid link: See **[How odo handles image names](https://odo.dev/blog/docs/development/devfile#how-odo-handles-image-names)** for more details.
|
non_infrastructure
|
invalid link in docs kind bug area documentation url invalid link see for more details
| 0
|
621,421
| 19,586,666,910
|
IssuesEvent
|
2022-01-05 07:58:47
|
webcompat/web-bugs
|
https://api.github.com/repos/webcompat/web-bugs
|
closed
|
tools.usps.com - site is not usable
|
browser-firefox priority-important type-webrender-enabled os-linux engine-gecko
|
<!-- @browser: Firefox 97.0 -->
<!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64; rv:97.0) Gecko/20100101 Firefox/97.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @extra_labels: type-webrender-enabled -->
**URL**: https://tools.usps.com/rcas.htm
**Browser / Version**: Firefox 97.0
**Operating System**: Linux
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
The page loads with a progress indication saying "Initializing services" which never disappears. The page works in Chromium for me.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/1/1fbe5546-ac7e-4e26-bc66-ea8641ee3681.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: true</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20220104034109</li><li>channel: nightly</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/1/d9936077-b718-4a53-91f8-439f3dfa5030)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
1.0
|
tools.usps.com - site is not usable - <!-- @browser: Firefox 97.0 -->
<!-- @ua_header: Mozilla/5.0 (X11; Linux x86_64; rv:97.0) Gecko/20100101 Firefox/97.0 -->
<!-- @reported_with: desktop-reporter -->
<!-- @extra_labels: type-webrender-enabled -->
**URL**: https://tools.usps.com/rcas.htm
**Browser / Version**: Firefox 97.0
**Operating System**: Linux
**Tested Another Browser**: Yes Chrome
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
The page loads with a progress indication saying "Initializing services" which never disappears. The page works in Chromium for me.
<details>
<summary>View the screenshot</summary>
<img alt="Screenshot" src="https://webcompat.com/uploads/2022/1/1fbe5546-ac7e-4e26-bc66-ea8641ee3681.jpeg">
</details>
<details>
<summary>Browser Configuration</summary>
<ul>
<li>gfx.webrender.all: true</li><li>gfx.webrender.blob-images: true</li><li>gfx.webrender.enabled: false</li><li>image.mem.shared: true</li><li>buildID: 20220104034109</li><li>channel: nightly</li><li>hasTouchScreen: false</li><li>mixed active content blocked: false</li><li>mixed passive content blocked: false</li><li>tracking content blocked: false</li>
</ul>
</details>
[View console log messages](https://webcompat.com/console_logs/2022/1/d9936077-b718-4a53-91f8-439f3dfa5030)
_From [webcompat.com](https://webcompat.com/) with ❤️_
|
non_infrastructure
|
tools usps com site is not usable url browser version firefox operating system linux tested another browser yes chrome problem type site is not usable description page not loading correctly steps to reproduce the page loads with a progress indication saying initializing services which never disappears the page works in chromium for me view the screenshot img alt screenshot src browser configuration gfx webrender all true gfx webrender blob images true gfx webrender enabled false image mem shared true buildid channel nightly hastouchscreen false mixed active content blocked false mixed passive content blocked false tracking content blocked false from with ❤️
| 0
|
1,534
| 3,777,023,642
|
IssuesEvent
|
2016-03-17 18:33:02
|
rancher/rancher
|
https://api.github.com/repos/rancher/rancher
|
closed
|
Using asterisk in command
|
area/service area/ui kind/bug status/resolved status/to-test
|
I'm trying to add a new service to my stack. My service uses a image with an entrypoint and I want to add command line arguments via the `command` field of `advanced options`.
My command field is a series of command line arguments:
```
--upstream=http://upstream/ --http-address=0.0.0.0:4180 --redirect-url=https://example.com/oauth2/callback --email-domain=* --provider=azure --cookie-secure=true
```
When the container is created I see that the command line option `--email-domain=*` is not passed into the container and is not listed in the `View in API` output of rancher.
|
1.0
|
Using asterisk in command - I'm trying to add a new service to my stack. My service uses a image with an entrypoint and I want to add command line arguments via the `command` field of `advanced options`.
My command field is a series of command line arguments:
```
--upstream=http://upstream/ --http-address=0.0.0.0:4180 --redirect-url=https://example.com/oauth2/callback --email-domain=* --provider=azure --cookie-secure=true
```
When the container is created I see that the command line option `--email-domain=*` is not passed into the container and is not listed in the `View in API` output of rancher.
|
non_infrastructure
|
using asterisk in command i m trying to add a new service to my stack my service uses a image with an entrypoint and i want to add command line arguments via the command field of advanced options my command field is a series of command line arguments upstream http address redirect url email domain provider azure cookie secure true when the container is created i see that the command line option email domain is not passed into the container and is not listed in the view in api output of rancher
| 0
|
1,892
| 3,419,076,856
|
IssuesEvent
|
2015-12-08 07:33:24
|
dart-lang/fletch
|
https://api.github.com/repos/dart-lang/fletch
|
closed
|
test.py does a lot of work before discovering the --help option
|
Area-Infrastructure FixIt-15Q4
|
It should not do runhooks before printing the usage message.
Tested with the command: python tools/test.py --help
|
1.0
|
test.py does a lot of work before discovering the --help option - It should not do runhooks before printing the usage message.
Tested with the command: python tools/test.py --help
|
infrastructure
|
test py does a lot of work before discovering the help option it should not do runhooks before printing the usage message tested with the command python tools test py help
| 1
|
31,852
| 26,194,298,334
|
IssuesEvent
|
2023-01-03 12:02:45
|
gothick/omm
|
https://api.github.com/repos/gothick/omm
|
opened
|
Remove Vagrant setup
|
quickwin infrastructure
|
You started Vagrant the other day accidentally even though you're not using it for this project any more. Remove Vagrantfile and anything else associated with it.
|
1.0
|
Remove Vagrant setup - You started Vagrant the other day accidentally even though you're not using it for this project any more. Remove Vagrantfile and anything else associated with it.
|
infrastructure
|
remove vagrant setup you started vagrant the other day accidentally even though you re not using it for this project any more remove vagrantfile and anything else associated with it
| 1
|
30,731
| 25,020,240,293
|
IssuesEvent
|
2022-11-03 23:21:58
|
open-duelyst/duelyst
|
https://api.github.com/repos/open-duelyst/duelyst
|
opened
|
[P1] Automate database backups
|
enhancement infrastructure
|
## Summary
We currently have the ability to snapshot the RDS database. Let's do this on a regular cadence.
|
1.0
|
[P1] Automate database backups - ## Summary
We currently have the ability to snapshot the RDS database. Let's do this on a regular cadence.
|
infrastructure
|
automate database backups summary we currently have the ability to snapshot the rds database let s do this on a regular cadence
| 1
|
289,200
| 31,931,266,101
|
IssuesEvent
|
2023-09-19 07:35:54
|
Trinadh465/linux-4.1.15_CVE-2023-4128
|
https://api.github.com/repos/Trinadh465/linux-4.1.15_CVE-2023-4128
|
opened
|
CVE-2018-5953 (Medium) detected in linuxlinux-4.6
|
Mend: dependency security vulnerability
|
## CVE-2018-5953 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2023-4128/commit/0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8">0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/lib/swiotlb.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/lib/swiotlb.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The swiotlb_print_info function in lib/swiotlb.c in the Linux kernel through 4.14.14 allows local users to obtain sensitive address information by reading dmesg data from a "software IO TLB" printk call.
<p>Publish Date: 2018-08-07
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-5953>CVE-2018-5953</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-5953">https://nvd.nist.gov/vuln/detail/CVE-2018-5953</a></p>
<p>Release Date: 2018-08-07</p>
<p>Fix Resolution: linux-yocto - 5.4.20+gitAUTOINC+c11911d4d1_f4d7dbafb1,4.8.24+gitAUTOINC+c84532b647_f6329fd287</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
True
|
CVE-2018-5953 (Medium) detected in linuxlinux-4.6 - ## CVE-2018-5953 - Medium Severity Vulnerability
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Library - <b>linuxlinux-4.6</b></p></summary>
<p>
<p>The Linux Kernel</p>
<p>Library home page: <a href=https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux>https://mirrors.edge.kernel.org/pub/linux/kernel/v4.x/?wsslib=linux</a></p>
<p>Found in HEAD commit: <a href="https://github.com/Trinadh465/linux-4.1.15_CVE-2023-4128/commit/0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8">0c6c8d8c809f697cd5fc581c6c08e9ad646c55a8</a></p>
<p>Found in base branch: <b>main</b></p></p>
</details>
</p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/vulnerability_details.png' width=19 height=20> Vulnerable Source Files (2)</summary>
<p></p>
<p>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/lib/swiotlb.c</b>
<img src='https://s3.amazonaws.com/wss-public/bitbucketImages/xRedImage.png' width=19 height=20> <b>/lib/swiotlb.c</b>
</p>
</details>
<p></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/medium_vul.png?' width=19 height=20> Vulnerability Details</summary>
<p>
The swiotlb_print_info function in lib/swiotlb.c in the Linux kernel through 4.14.14 allows local users to obtain sensitive address information by reading dmesg data from a "software IO TLB" printk call.
<p>Publish Date: 2018-08-07
<p>URL: <a href=https://www.mend.io/vulnerability-database/CVE-2018-5953>CVE-2018-5953</a></p>
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/cvss3.png' width=19 height=20> CVSS 3 Score Details (<b>5.5</b>)</summary>
<p>
Base Score Metrics:
- Exploitability Metrics:
- Attack Vector: Local
- Attack Complexity: Low
- Privileges Required: Low
- User Interaction: None
- Scope: Unchanged
- Impact Metrics:
- Confidentiality Impact: High
- Integrity Impact: None
- Availability Impact: None
</p>
For more information on CVSS3 Scores, click <a href="https://www.first.org/cvss/calculator/3.0">here</a>.
</p>
</details>
<p></p>
<details><summary><img src='https://whitesource-resources.whitesourcesoftware.com/suggested_fix.png' width=19 height=20> Suggested Fix</summary>
<p>
<p>Type: Upgrade version</p>
<p>Origin: <a href="https://nvd.nist.gov/vuln/detail/CVE-2018-5953">https://nvd.nist.gov/vuln/detail/CVE-2018-5953</a></p>
<p>Release Date: 2018-08-07</p>
<p>Fix Resolution: linux-yocto - 5.4.20+gitAUTOINC+c11911d4d1_f4d7dbafb1,4.8.24+gitAUTOINC+c84532b647_f6329fd287</p>
</p>
</details>
<p></p>
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
|
non_infrastructure
|
cve medium detected in linuxlinux cve medium severity vulnerability vulnerable library linuxlinux the linux kernel library home page a href found in head commit a href found in base branch main vulnerable source files lib swiotlb c lib swiotlb c vulnerability details the swiotlb print info function in lib swiotlb c in the linux kernel through allows local users to obtain sensitive address information by reading dmesg data from a software io tlb printk call publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution linux yocto gitautoinc gitautoinc step up your open source security game with mend
| 0
|
30,154
| 24,588,467,444
|
IssuesEvent
|
2022-10-13 22:19:02
|
woocommerce/woocommerce
|
https://api.github.com/repos/woocommerce/woocommerce
|
closed
|
Release Automation: Create a CLI command to automate bumping versions
|
type: task status: prioritization tool: monorepo infrastructure
|
<!-- This form is for other issue types specific to the WooCommerce plugin. This is not a support portal. -->
**Prerequisites (mark completed items with an [x]):**
- [x] I have checked that my issue type is not listed here https://github.com/woocommerce/woocommerce/issues/new/choose
- [x] My issue is not a security issue, support request, bug report, enhancement or feature request (Please use the link above if it is).
**Issue Description:**
Bumping versions for WooCommerce requires several places to be updated. The release lead needs to remember all instances or do a search all to find them.
For example, see this PR https://github.com/woocommerce/woocommerce/pull/34422
- [x] Make a CLI command to update these all at once. https://github.com/woocommerce/woocommerce/pull/34555
- [x] Add Logger to make CLI better and pnpm script and add README https://github.com/woocommerce/woocommerce/pull/34636
- [x] Change Code Analyzer's `getPluginData` to return all data in its raw form so the current version can be determined. https://github.com/woocommerce/woocommerce/pull/34951
- [x] Update any workflows that may be able to take advantage of this tool. For instance, updating `trunk` in the code freeze process and automatically create PR
- [x] Update Release Lead guide section about Code freeze.
- [x] Update Release Lead guide section about Fix releases.
|
1.0
|
Release Automation: Create a CLI command to automate bumping versions - <!-- This form is for other issue types specific to the WooCommerce plugin. This is not a support portal. -->
**Prerequisites (mark completed items with an [x]):**
- [x] I have checked that my issue type is not listed here https://github.com/woocommerce/woocommerce/issues/new/choose
- [x] My issue is not a security issue, support request, bug report, enhancement or feature request (Please use the link above if it is).
**Issue Description:**
Bumping versions for WooCommerce requires several places to be updated. The release lead needs to remember all instances or do a search all to find them.
For example, see this PR https://github.com/woocommerce/woocommerce/pull/34422
- [x] Make a CLI command to update these all at once. https://github.com/woocommerce/woocommerce/pull/34555
- [x] Add Logger to make CLI better and pnpm script and add README https://github.com/woocommerce/woocommerce/pull/34636
- [x] Change Code Analyzer's `getPluginData` to return all data in its raw form so the current version can be determined. https://github.com/woocommerce/woocommerce/pull/34951
- [x] Update any workflows that may be able to take advantage of this tool. For instance, updating `trunk` in the code freeze process and automatically create PR
- [x] Update Release Lead guide section about Code freeze.
- [x] Update Release Lead guide section about Fix releases.
|
infrastructure
|
release automation create a cli command to automate bumping versions prerequisites mark completed items with an i have checked that my issue type is not listed here my issue is not a security issue support request bug report enhancement or feature request please use the link above if it is issue description bumping versions for woocommerce requires several places to be updated the release lead needs to remember all instances or do a search all to find them for example see this pr make a cli command to update these all at once add logger to make cli better and pnpm script and add readme change code analyzer s getplugindata to return all data in its raw form so the current version can be determined update any workflows that may be able to take advantage of this tool for instance updating trunk in the code freeze process and automatically create pr update release lead guide section about code freeze update release lead guide section about fix releases
| 1
|
12,284
| 9,670,647,494
|
IssuesEvent
|
2019-05-21 20:27:09
|
perl6/problem-solving
|
https://api.github.com/repos/perl6/problem-solving
|
opened
|
perl6-infra: rules and guidelines
|
infrastructure meta
|
We like transparently decide about the upcoming infrastructure changes together.
I therefore propose change on a `service` level or for a `group of services`. A service could be for example "hosting perl6.org static website" and an example for a group of service could be "dns hosting".
There will always be a _proposed_ solution. If there is no better proposal in the comments, **we will start implementing the proposed solution, a week after opening the issue**.
Here is how we like to handle the Perl6 Infrastructure. Feel free to comment.
# Rules and guidelines
1. Automate everything
2. Everything is a service
3. Categorize the service and add additional attributes (monitored, backuped, static, dynamic, redundant, CDN)
1. hack
2. build
3. run
4. Use top level domains perl6.org, rakudo.org, moarvm.org
5. Use subdomains to separate services
6. Make sure every service has at least two admins and every core member has access
7. All technical usernames and passwords are stored securely in either a password tool or at least in an encrypted document
8. Where possible add the admins to a 3-party-services and give authorization. For services with a single user, create a technical user (e.g. perl6-infra).
9. Use what‘s already there, operate own service where needed (DNS services instead of running bind ourselves; github instead of gitolite on a server, etc.)
10. Choose free or sponsored services wherever possible
11. Keep infrastructure documentation updated
|
1.0
|
perl6-infra: rules and guidelines - We like transparently decide about the upcoming infrastructure changes together.
I therefore propose change on a `service` level or for a `group of services`. A service could be for example "hosting perl6.org static website" and an example for a group of service could be "dns hosting".
There will always be a _proposed_ solution. If there is no better proposal in the comments, **we will start implementing the proposed solution, a week after opening the issue**.
Here is how we like to handle the Perl6 Infrastructure. Feel free to comment.
# Rules and guidelines
1. Automate everything
2. Everything is a service
3. Categorize the service and add additional attributes (monitored, backuped, static, dynamic, redundant, CDN)
1. hack
2. build
3. run
4. Use top level domains perl6.org, rakudo.org, moarvm.org
5. Use subdomains to separate services
6. Make sure every service has at least two admins and every core member has access
7. All technical usernames and passwords are stored securely in either a password tool or at least in an encrypted document
8. Where possible add the admins to a 3-party-services and give authorization. For services with a single user, create a technical user (e.g. perl6-infra).
9. Use what‘s already there, operate own service where needed (DNS services instead of running bind ourselves; github instead of gitolite on a server, etc.)
10. Choose free or sponsored services wherever possible
11. Keep infrastructure documentation updated
|
infrastructure
|
infra rules and guidelines we like transparently decide about the upcoming infrastructure changes together i therefore propose change on a service level or for a group of services a service could be for example hosting org static website and an example for a group of service could be dns hosting there will always be a proposed solution if there is no better proposal in the comments we will start implementing the proposed solution a week after opening the issue here is how we like to handle the infrastructure feel free to comment rules and guidelines automate everything everything is a service categorize the service and add additional attributes monitored backuped static dynamic redundant cdn hack build run use top level domains org rakudo org moarvm org use subdomains to separate services make sure every service has at least two admins and every core member has access all technical usernames and passwords are stored securely in either a password tool or at least in an encrypted document where possible add the admins to a party services and give authorization for services with a single user create a technical user e g infra use what‘s already there operate own service where needed dns services instead of running bind ourselves github instead of gitolite on a server etc choose free or sponsored services wherever possible keep infrastructure documentation updated
| 1
|
709,260
| 24,371,959,441
|
IssuesEvent
|
2022-10-03 20:08:41
|
Inter-Actief/amelie
|
https://api.github.com/repos/Inter-Actief/amelie
|
closed
|
Birthday message photo update
|
enhancement front-end easy-to-fix Priority
|
The 44th board would like to update the picture that is displayed with the e-mail that is sent on people’s birthdays. We would like to change it to the picture at “/documenten/bestuur/44/Board_picture_birthday.jpg” for now.
|
1.0
|
Birthday message photo update - The 44th board would like to update the picture that is displayed with the e-mail that is sent on people’s birthdays. We would like to change it to the picture at “/documenten/bestuur/44/Board_picture_birthday.jpg” for now.
|
non_infrastructure
|
birthday message photo update the board would like to update the picture that is displayed with the e mail that is sent on people’s birthdays we would like to change it to the picture at “ documenten bestuur board picture birthday jpg” for now
| 0
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.