Unnamed: 0,id,type,created_at,repo,repo_url,action,title,labels,body,index,text_combine,label,text,binary_label
384,7180510469.0,IssuesEvent,2018-01-31 23:38:07,dotnet/corefx,https://api.github.com/repos/dotnet/corefx,reopened,intermittent Http.Functional.Tests failure,area-System.Net.Http tenet-reliability,"Failed during debian CI leg in PR https://github.com/dotnet/corefx/pull/26679
https://mc.dot.net/#/user/MichalStrehovsky/pr~2Fjenkins~2Fdotnet~2Fcorefx~2Fmaster~2F/test~2Ffunctional~2Fcli~2F/5665828df2bef3224b83bd4a544e2ab35bd63be3/workItem/System.Net.Http.Functional.Tests/analysis/xunit/System.Net.Http.Functional.Tests.ManagedHandler_HttpClientHandler_DangerousAcceptAllCertificatesValidator_Test~2FSetDelegate_ConnectionSucceeds(acceptedProtocol:%20Tls,%20requestOnlyThisProtocol:%20False)
```
Unhandled Exception of Type System.IO.IOException
Message :
System.IO.IOException : The decryption operation failed, see inner exception.
---- Interop+OpenSsl+SslException : Decrypt failed with OpenSSL error - SSL_ERROR_SSL.
-------- System.Security.Cryptography.CryptographicException : Error occurred during a cryptographic operation.
Stack Trace :
at System.Net.Security.SslStreamInternal.ReadAsyncInternal[TReadAdapter](TReadAdapter adapter, Memory`1 buffer) in /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_x64+TestOuter_false_prtest/src/System.Net.Security/src/System/Net/Security/SslStreamInternal.cs:line 306
at System.IO.StreamReader.ReadBufferAsync() in /root/coreclr/src/mscorlib/shared/System/IO/StreamReader.cs:line 1328
at System.IO.StreamReader.ReadLineAsyncInternal() in /root/coreclr/src/mscorlib/shared/System/IO/StreamReader.cs:line 888
at System.Net.Test.Common.LoopbackServer.ReadWriteAcceptedAsync(Socket s, StreamReader reader, StreamWriter writer, String response) in /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_x64+TestOuter_false_prtest/src/Common/tests/System/Net/Http/LoopbackServer.cs:line 97
at System.Net.Test.Common.LoopbackServer.AcceptSocketAsync(Socket server, Func`5 funcAsync, Options options) in /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_x64+TestOuter_false_prtest/src/Common/tests/System/Net/Http/LoopbackServer.cs:line 177
at System.Net.Http.Functional.Tests.HttpClientHandler_DangerousAcceptAllCertificatesValidator_Test.<>c__DisplayClass3_1.<b__0>d.MoveNext() in /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_x64+TestOuter_false_prtest/src/System.Net.Http/tests/FunctionalTests/HttpClientHandlerTest.AcceptAllCerts.cs:line 53
--- End of stack trace from previous location where exception was thrown ---
at System.Net.Test.Common.LoopbackServer.<>c__DisplayClass3_0.b__0(Task t) in /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_x64+TestOuter_false_prtest/src/Common/tests/System/Net/Http/LoopbackServer.cs:line 68
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) in /root/coreclr/src/mscorlib/shared/System/Threading/ExecutionContext.cs:line 151
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(Task& currentTaskSlot) in /root/coreclr/src/mscorlib/src/System/Threading/Tasks/Task.cs:line 2440
--- End of stack trace from previous location where exception was thrown ---
at System.Net.Http.Functional.Tests.HttpClientHandler_DangerousAcceptAllCertificatesValidator_Test.SetDelegate_ConnectionSucceeds(SslProtocols acceptedProtocol, Boolean requestOnlyThisProtocol) in /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_x64+TestOuter_false_prtest/src/System.Net.Http/tests/FunctionalTests/HttpClientHandlerTest.AcceptAllCerts.cs:line 51
--- End of stack trace from previous location where exception was thrown ---
----- Inner Stack Trace -----
at Interop.OpenSsl.Decrypt(SafeSslHandle context, Byte[] outBuffer, Int32 offset, Int32 count, SslErrorCode& errorCode) in /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_x64+TestOuter_false_prtest/src/Common/src/Interop/Unix/System.Security.Cryptography.Native/Interop.OpenSsl.cs:line 279
at System.Net.Security.SslStreamPal.EncryptDecryptHelper(SafeDeleteContext securityContext, ReadOnlyMemory`1 input, Int32 offset, Int32 size, Boolean encrypt, Byte[]& output, Int32& resultSize) in /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_x64+TestOuter_false_prtest/src/System.Net.Security/src/System/Net/Security/SslStreamPal.Unix.cs:line 207
----- Inner Stack Trace -----
```",True,"intermittent Http.Functional.Tests failure - Failed during debian CI leg in PR https://github.com/dotnet/corefx/pull/26679
https://mc.dot.net/#/user/MichalStrehovsky/pr~2Fjenkins~2Fdotnet~2Fcorefx~2Fmaster~2F/test~2Ffunctional~2Fcli~2F/5665828df2bef3224b83bd4a544e2ab35bd63be3/workItem/System.Net.Http.Functional.Tests/analysis/xunit/System.Net.Http.Functional.Tests.ManagedHandler_HttpClientHandler_DangerousAcceptAllCertificatesValidator_Test~2FSetDelegate_ConnectionSucceeds(acceptedProtocol:%20Tls,%20requestOnlyThisProtocol:%20False)
```
Unhandled Exception of Type System.IO.IOException
Message :
System.IO.IOException : The decryption operation failed, see inner exception.
---- Interop+OpenSsl+SslException : Decrypt failed with OpenSSL error - SSL_ERROR_SSL.
-------- System.Security.Cryptography.CryptographicException : Error occurred during a cryptographic operation.
Stack Trace :
at System.Net.Security.SslStreamInternal.ReadAsyncInternal[TReadAdapter](TReadAdapter adapter, Memory`1 buffer) in /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_x64+TestOuter_false_prtest/src/System.Net.Security/src/System/Net/Security/SslStreamInternal.cs:line 306
at System.IO.StreamReader.ReadBufferAsync() in /root/coreclr/src/mscorlib/shared/System/IO/StreamReader.cs:line 1328
at System.IO.StreamReader.ReadLineAsyncInternal() in /root/coreclr/src/mscorlib/shared/System/IO/StreamReader.cs:line 888
at System.Net.Test.Common.LoopbackServer.ReadWriteAcceptedAsync(Socket s, StreamReader reader, StreamWriter writer, String response) in /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_x64+TestOuter_false_prtest/src/Common/tests/System/Net/Http/LoopbackServer.cs:line 97
at System.Net.Test.Common.LoopbackServer.AcceptSocketAsync(Socket server, Func`5 funcAsync, Options options) in /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_x64+TestOuter_false_prtest/src/Common/tests/System/Net/Http/LoopbackServer.cs:line 177
at System.Net.Http.Functional.Tests.HttpClientHandler_DangerousAcceptAllCertificatesValidator_Test.<>c__DisplayClass3_1.<b__0>d.MoveNext() in /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_x64+TestOuter_false_prtest/src/System.Net.Http/tests/FunctionalTests/HttpClientHandlerTest.AcceptAllCerts.cs:line 53
--- End of stack trace from previous location where exception was thrown ---
at System.Net.Test.Common.LoopbackServer.<>c__DisplayClass3_0.b__0(Task t) in /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_x64+TestOuter_false_prtest/src/Common/tests/System/Net/Http/LoopbackServer.cs:line 68
at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) in /root/coreclr/src/mscorlib/shared/System/Threading/ExecutionContext.cs:line 151
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(Task& currentTaskSlot) in /root/coreclr/src/mscorlib/src/System/Threading/Tasks/Task.cs:line 2440
--- End of stack trace from previous location where exception was thrown ---
at System.Net.Http.Functional.Tests.HttpClientHandler_DangerousAcceptAllCertificatesValidator_Test.SetDelegate_ConnectionSucceeds(SslProtocols acceptedProtocol, Boolean requestOnlyThisProtocol) in /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_x64+TestOuter_false_prtest/src/System.Net.Http/tests/FunctionalTests/HttpClientHandlerTest.AcceptAllCerts.cs:line 51
--- End of stack trace from previous location where exception was thrown ---
----- Inner Stack Trace -----
at Interop.OpenSsl.Decrypt(SafeSslHandle context, Byte[] outBuffer, Int32 offset, Int32 count, SslErrorCode& errorCode) in /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_x64+TestOuter_false_prtest/src/Common/src/Interop/Unix/System.Security.Cryptography.Native/Interop.OpenSsl.cs:line 279
at System.Net.Security.SslStreamPal.EncryptDecryptHelper(SafeDeleteContext securityContext, ReadOnlyMemory`1 input, Int32 offset, Int32 size, Boolean encrypt, Byte[]& output, Int32& resultSize) in /mnt/j/workspace/dotnet_corefx/master/linux-TGroup_netcoreapp+CGroup_Release+AGroup_x64+TestOuter_false_prtest/src/System.Net.Security/src/System/Net/Security/SslStreamPal.Unix.cs:line 207
----- Inner Stack Trace -----
```",1,intermittent http functional tests failure failed during debian ci leg in pr unhandled exception of type system io ioexception message system io ioexception the decryption operation failed see inner exception interop openssl sslexception decrypt failed with openssl error ssl error ssl system security cryptography cryptographicexception error occurred during a cryptographic operation stack trace at system net security sslstreaminternal readasyncinternal treadadapter adapter memory buffer in mnt j workspace dotnet corefx master linux tgroup netcoreapp cgroup release agroup testouter false prtest src system net security src system net security sslstreaminternal cs line at system io streamreader readbufferasync in root coreclr src mscorlib shared system io streamreader cs line at system io streamreader readlineasyncinternal in root coreclr src mscorlib shared system io streamreader cs line at system net test common loopbackserver readwriteacceptedasync socket s streamreader reader streamwriter writer string response in mnt j workspace dotnet corefx master linux tgroup netcoreapp cgroup release agroup testouter false prtest src common tests system net http loopbackserver cs line at system net test common loopbackserver acceptsocketasync socket server func funcasync options options in mnt j workspace dotnet corefx master linux tgroup netcoreapp cgroup release agroup testouter false prtest src common tests system net http loopbackserver cs line at system net http functional tests httpclienthandler dangerousacceptallcertificatesvalidator test c b d movenext in mnt j workspace dotnet corefx master linux tgroup netcoreapp cgroup release agroup testouter false prtest src system net http tests functionaltests httpclienthandlertest acceptallcerts cs line end of stack trace from previous location where exception was thrown at system net test common loopbackserver c b task t in mnt j workspace dotnet corefx master linux tgroup netcoreapp cgroup release agroup testouter 
false prtest src common tests system net http loopbackserver cs line at system threading executioncontext run executioncontext executioncontext contextcallback callback object state in root coreclr src mscorlib shared system threading executioncontext cs line at system threading tasks task executewiththreadlocal task currenttaskslot in root coreclr src mscorlib src system threading tasks task cs line end of stack trace from previous location where exception was thrown at system net http functional tests httpclienthandler dangerousacceptallcertificatesvalidator test setdelegate connectionsucceeds sslprotocols acceptedprotocol boolean requestonlythisprotocol in mnt j workspace dotnet corefx master linux tgroup netcoreapp cgroup release agroup testouter false prtest src system net http tests functionaltests httpclienthandlertest acceptallcerts cs line end of stack trace from previous location where exception was thrown inner stack trace at interop openssl decrypt safesslhandle context byte outbuffer offset count sslerrorcode errorcode in mnt j workspace dotnet corefx master linux tgroup netcoreapp cgroup release agroup testouter false prtest src common src interop unix system security cryptography native interop openssl cs line at system net security sslstreampal encryptdecrypthelper safedeletecontext securitycontext readonlymemory input offset size boolean encrypt byte output resultsize in mnt j workspace dotnet corefx master linux tgroup netcoreapp cgroup release agroup testouter false prtest src system net security src system net security sslstreampal unix cs line inner stack trace ,1
170130,13175132872.0,IssuesEvent,2020-08-12 00:36:58,open-telemetry/opentelemetry-java-instrumentation,https://api.github.com/repos/open-telemetry/opentelemetry-java-instrumentation,closed,Sporadic test failure (5x): AkkaExecutorInstrumentationTest,priority:p2 release:required-for-ga sporadic test failure,"This seems to have started failing a lot right with the merge of #911
```
Condition not satisfied:
TEST_WRITER.traces.size() == 1
| | | |
| | 2 false
| [[SpanWrapper{delegate=io.opentelemetry.sdk.trace.RecordEventsReadableSpan@3993f5de, resolvedLinks=[], resolvedEvents=[], attributes={}, totalAttributeCount=0, totalRecordedEvents=0, status=Status{canonicalCode=OK, description=null}, name=parent, endEpochNanos=1596857931107921918, hasEnded=true}], [SpanWrapper{delegate=io.opentelemetry.sdk.trace.RecordEventsReadableSpan@24f8e4a1, resolvedLinks=[], resolvedEvents=[], attributes={}, totalAttributeCount=0, totalRecordedEvents=0, status=Status{canonicalCode=OK, description=null}, name=asyncChild, endEpochNanos=1596857931096238331, hasEnded=true}]]
at AkkaExecutorInstrumentationTest.#poolImpl '#name' propagates(AkkaExecutorInstrumentationTest.groovy:80)
```
https://app.circleci.com/pipelines/github/open-telemetry/opentelemetry-java-instrumentation/2603/workflows/3b2be6ed-16d1-431a-8d19-b984185f089b/jobs/15974/tests
https://app.circleci.com/pipelines/github/open-telemetry/opentelemetry-java-instrumentation/2603/workflows/6950228d-199c-4a81-9350-c5b79b48877a/jobs/15975/tests
https://app.circleci.com/pipelines/github/open-telemetry/opentelemetry-java-instrumentation/2603/workflows/68ac4e59-5944-4c7e-956a-182281821001/jobs/15976/tests
https://app.circleci.com/pipelines/github/open-telemetry/opentelemetry-java-instrumentation/2603/workflows/e9190ac9-6419-4cac-ab85-4a6fe5795704/jobs/15977/tests
https://app.circleci.com/pipelines/github/open-telemetry/opentelemetry-java-instrumentation/2596/workflows/b710dd55-b9eb-4952-9244-aae9dc92663e/jobs/15956/tests",1.0,"Sporadic test failure (5x): AkkaExecutorInstrumentationTest - This seems to have started failing a lot right with the merge of #911
```
Condition not satisfied:
TEST_WRITER.traces.size() == 1
| | | |
| | 2 false
| [[SpanWrapper{delegate=io.opentelemetry.sdk.trace.RecordEventsReadableSpan@3993f5de, resolvedLinks=[], resolvedEvents=[], attributes={}, totalAttributeCount=0, totalRecordedEvents=0, status=Status{canonicalCode=OK, description=null}, name=parent, endEpochNanos=1596857931107921918, hasEnded=true}], [SpanWrapper{delegate=io.opentelemetry.sdk.trace.RecordEventsReadableSpan@24f8e4a1, resolvedLinks=[], resolvedEvents=[], attributes={}, totalAttributeCount=0, totalRecordedEvents=0, status=Status{canonicalCode=OK, description=null}, name=asyncChild, endEpochNanos=1596857931096238331, hasEnded=true}]]
at AkkaExecutorInstrumentationTest.#poolImpl '#name' propagates(AkkaExecutorInstrumentationTest.groovy:80)
```
https://app.circleci.com/pipelines/github/open-telemetry/opentelemetry-java-instrumentation/2603/workflows/3b2be6ed-16d1-431a-8d19-b984185f089b/jobs/15974/tests
https://app.circleci.com/pipelines/github/open-telemetry/opentelemetry-java-instrumentation/2603/workflows/6950228d-199c-4a81-9350-c5b79b48877a/jobs/15975/tests
https://app.circleci.com/pipelines/github/open-telemetry/opentelemetry-java-instrumentation/2603/workflows/68ac4e59-5944-4c7e-956a-182281821001/jobs/15976/tests
https://app.circleci.com/pipelines/github/open-telemetry/opentelemetry-java-instrumentation/2603/workflows/e9190ac9-6419-4cac-ab85-4a6fe5795704/jobs/15977/tests
https://app.circleci.com/pipelines/github/open-telemetry/opentelemetry-java-instrumentation/2596/workflows/b710dd55-b9eb-4952-9244-aae9dc92663e/jobs/15956/tests",0,sporadic test failure akkaexecutorinstrumentationtest this seems to have started failing a lot right with the merge of condition not satisfied test writer traces size false resolvedevents attributes totalattributecount totalrecordedevents status status canonicalcode ok description null name parent endepochnanos hasended true resolvedevents attributes totalattributecount totalrecordedevents status status canonicalcode ok description null name asyncchild endepochnanos hasended true at akkaexecutorinstrumentationtest poolimpl name propagates akkaexecutorinstrumentationtest groovy ,0
366170,10817970545.0,IssuesEvent,2019-11-08 10:54:47,pragdave/earmark,https://api.github.com/repos/pragdave/earmark,closed,Blockquotes nested in lists only work with an indentation of 2 spaces,Priority: HIGH bug,"### Input:
```markdown
- list
text indented with 4 spaces
> nested blockquote indented with 4 spaces
```
---
### Expected Output:
HTML output from [Daring Fireball](https://daringfireball.net/projects/markdown/dingus):
```html
list
text indented with 4 spaces
nested blockquote indented with 4 spaces
```
GFM:
- list
text indented with 4 spaces
> nested blockquote indented with 4 spaces
---
### Actual Output:
```html
list
text indented with 4 spaces
> nested blockquote indented with 4 spaces
```
- list
text indented with 4 spaces
> nested blockquote indented with 4 spaces
---
Playing around with other markdown parsers, it seems any number of spaces should produce the desired result, but on nested blockquotes, Earmark only works with 2.",1.0,"Blockquotes nested in lists only work with an indentation of 2 spaces - ### Input:
```markdown
- list
text indented with 4 spaces
> nested blockquote indented with 4 spaces
```
---
### Expected Output:
HTML output from [Daring Fireball](https://daringfireball.net/projects/markdown/dingus):
```html
list
text indented with 4 spaces
nested blockquote indented with 4 spaces
```
GFM:
- list
text indented with 4 spaces
> nested blockquote indented with 4 spaces
---
### Actual Output:
```html
list
text indented with 4 spaces
> nested blockquote indented with 4 spaces
```
- list
text indented with 4 spaces
> nested blockquote indented with 4 spaces
---
Playing around with other markdown parsers, it seems any number of spaces should produce the desired result, but on nested blockquotes, Earmark only works with 2.",0,blockquotes nested in lists only work with an indentation of spaces input markdown list text indented with spaces nested blockquote indented with spaces expected output html output from html list text indented with spaces nested blockquote indented with spaces gfm list text indented with spaces nested blockquote indented with spaces actual output html list text indented with spaces gt nested blockquote indented with spaces list text indented with spaces gt nested blockquote indented with spaces playing around with other markdown parsers it seems any number of spaces should produce the desired result but on nested blockquotes earmark only works with ,0
1800,19916554722.0,IssuesEvent,2022-01-25 23:37:36,Azure/azure-sdk-for-java,https://api.github.com/repos/Azure/azure-sdk-for-java,closed,Reducing AMQP ReactorSender memory consumption,Event Hubs Client pillar-reliability amqp,"The following sample code uses EH Producer API to send N events every second.
sample_code
```java
void recurringSend(final int eventsPerRecur, final int totalDurationInSec) {
final EventHubProducerAsyncClient producer = new EventHubClientBuilder()
.connectionString(System.getenv(""EH_CON_STR""), System.getenv(""EH_NAME""))
.buildAsyncProducerClient();
final ScheduledExecutorService recurringScheduler = Executors.newScheduledThreadPool(1);
final AtomicInteger msgId = new AtomicInteger(0);
recurringScheduler.scheduleAtFixedRate(() -> {
List<CompletableFuture<Void>> sendFutures = new ArrayList<>();
for (int i = 0; i < eventsPerRecur; i++) {
final byte[] data = (""msg#"" + msgId.getAndIncrement()).getBytes(StandardCharsets.UTF_8);
final Mono<Void> sendMono = producer.createBatch().flatMap(batch -> {
batch.tryAdd(new EventData(data));
return producer.send(batch);
});
final CompletableFuture<Void> sendFuture = sendMono.toFuture();
sendFutures.add(sendFuture);
}
try {
final CompletableFuture<Void> mergedFuture
= CompletableFuture.allOf(sendFutures.toArray(CompletableFuture[]::new));
mergedFuture.get();
} catch (InterruptedException | ExecutionException e) {
System.err.println(""Error occurred while waiting for the result. "" + e);
}
}, 1, 1, TimeUnit.SECONDS);
try {
recurringScheduler.awaitTermination(totalDurationInSec, TimeUnit.SECONDS);
recurringScheduler.shutdown();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
} finally {
producer.close();
}
}
```
Below is the memory snapshot for this code execution with eventsPerRecur:1000 and totalDurationInSec:960. The memory peaked at ~2 GB and then consistently stayed around ~1.5 GB.
The setup: EventHubs and the VM are running in the same region, using the latest (at the time of testing) azure-core-amqp version: 2.3.5.
After digging into the profiler data and trying out different approaches, it turns out that using `Mono.defer()` instead of `Mono.just()` in two places in the `ReactorSender::getLinkSize()` method brought down the memory usage, with the peak now capped at 700 MB.
i.e., peak memory usage was reduced by a factor of ~2.5 (2 GB -> 700 MB).
_Note that the purpose of this doc is not to promise that 700 MB (or any number) is the expected memory limit but only to describe the changes that reduced the overall memory consumption_.
There might be other potential areas where we can save memory usage, but need to discover those with proper profiling to avoid code change that causes regression.",True,"Reducing AMQP ReactorSender memory consumption - The following sample code uses EH Producer API to send N events every second.
sample_code
```java
void recurringSend(final int eventsPerRecur, final int totalDurationInSec) {
final EventHubProducerAsyncClient producer = new EventHubClientBuilder()
.connectionString(System.getenv(""EH_CON_STR""), System.getenv(""EH_NAME""))
.buildAsyncProducerClient();
final ScheduledExecutorService recurringScheduler = Executors.newScheduledThreadPool(1);
final AtomicInteger msgId = new AtomicInteger(0);
recurringScheduler.scheduleAtFixedRate(() -> {
List<CompletableFuture<Void>> sendFutures = new ArrayList<>();
for (int i = 0; i < eventsPerRecur; i++) {
final byte[] data = (""msg#"" + msgId.getAndIncrement()).getBytes(StandardCharsets.UTF_8);
final Mono<Void> sendMono = producer.createBatch().flatMap(batch -> {
batch.tryAdd(new EventData(data));
return producer.send(batch);
});
final CompletableFuture<Void> sendFuture = sendMono.toFuture();
sendFutures.add(sendFuture);
}
try {
final CompletableFuture<Void> mergedFuture
= CompletableFuture.allOf(sendFutures.toArray(CompletableFuture[]::new));
mergedFuture.get();
} catch (InterruptedException | ExecutionException e) {
System.err.println(""Error occurred while waiting for the result. "" + e);
}
}, 1, 1, TimeUnit.SECONDS);
try {
recurringScheduler.awaitTermination(totalDurationInSec, TimeUnit.SECONDS);
recurringScheduler.shutdown();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
} finally {
producer.close();
}
}
```
Below is the memory snapshot for this code execution with eventsPerRecur:1000 and totalDurationInSec:960. The memory peaked at ~2 GB and then consistently stayed around ~1.5 GB.
The setup: EventHubs and the VM are running in the same region, using the latest (at the time of testing) azure-core-amqp version: 2.3.5.
After digging into the profiler data and trying out different approaches, it turns out that using `Mono.defer()` instead of `Mono.just()` in two places in the `ReactorSender::getLinkSize()` method brought down the memory usage, with the peak now capped at 700 MB.
i.e., peak memory usage was reduced by a factor of ~2.5 (2 GB -> 700 MB).
_Note that the purpose of this doc is not to promise that 700 MB (or any number) is the expected memory limit but only to describe the changes that reduced the overall memory consumption_.
There might be other potential areas where we can save memory usage, but need to discover those with proper profiling to avoid code change that causes regression.",1,reducing amqp reactorsender memory consumption the following sample code uses eh producer api to send n events every second sample code java void recurringsend final int eventsperrecur final int totaldurationinsec final eventhubproducerasyncclient producer new eventhubclientbuilder connectionstring system getenv eh con str system getenv eh name buildasyncproducerclient final scheduledexecutorservice recurringscheduler executors newscheduledthreadpool final atomicinteger msgid new atomicinteger recurringscheduler scheduleatfixedrate list sendfutures new arraylist for int i i eventsperrecur i final byte data msg msgid getandincrement getbytes standardcharsets utf final mono sendmono producer createbatch flatmap batch batch tryadd new eventdata data return producer send batch final completablefuture sendfuture sendmono tofuture sendfutures add sendfuture try final completablefuture mergedfuture completablefuture allof sendfutures toarray completablefuture new mergedfuture get catch interruptedexception executionexception e system err println error occurred while waiting for the result e timeunit seconds try recurringscheduler awaittermination totaldurationinsec timeunit seconds recurringscheduler shutdown catch interruptedexception e thread currentthread interrupt finally producer close below is the memory snapshot for this code execution with eventsperrecur and totaldurationinsec the memory peeked to gb and then consistently stayed around the setup eventhubs and vm is running on the same region using the latest at the time of testing azure core amqp version img width alt sender src after digging into the profiler data and trying out different approaches it turns out that using mono defer instead of mono just in two places in mono reactorsender getlinksize method brought down the memory usage with peek 
now capped at mb i e peek memory usage reduced by a factor of img width alt sender src note that the purpose of this doc is not to promise that or any number is the expected memory limit but only to describe the changes that reduced the overall memory consumption there might be other potential areas where we can save memory usage but need to discover those with proper profiling to avoid code change that causes regression ,1
55274,6461619830.0,IssuesEvent,2017-08-16 08:37:06,dotnet/corefx,https://api.github.com/repos/dotnet/corefx,closed,"Test: System.Runtime.Serialization.Formatters.Tests.BinaryFormatterTests/Roundtrip_Exceptions failed with ""System.Runtime.Serialization.SerializationException""",area-System.Runtime test-run-uwp-ilc,"Opened on behalf of @Jiayili1
The test `System.Runtime.Serialization.Formatters.Tests.BinaryFormatterTests/Roundtrip_Exceptions(expected: System.AggregateException: Aggregate exception message (Exception message) ---> System.Excepti...` has failed.
System.Runtime.Serialization.SerializationException : Type '$BlockedFromReflection_0_6907734b' in Assembly 'System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' is not marked as serializable.
Stack Trace:
at System.Runtime.Serialization.FormatterServices.InternalGetSerializableMembers(Type type) in E:\A\_work\441\s\corefx\src\System.Runtime.Serialization.Formatters\src\System\Runtime\Serialization\FormatterServices.cs:line 82
at System.Runtime.Serialization.FormatterServices.<>c.b__5_0($MemberHolder mh) in E:\A\_work\441\s\corefx\src\System.Runtime.Serialization.Formatters\src\System\Runtime\Serialization\FormatterServices.cs:line 177
at System.Func$2.Invoke(Int32 arg) in Invoke:line 16707566
at System.Collections.Concurrent.ConcurrentDictionary$2.GetOrAdd(__Canon key, Func$2<__Canon,__Canon> valueFactory) in E:\A\_work\441\s\corefx\src\System.Collections.Concurrent\src\System\Collections\Concurrent\ConcurrentDictionary.cs:line 989
at System.Runtime.Serialization.FormatterServices.GetSerializableMembers(Type type, StreamingContext context) in E:\A\_work\441\s\corefx\src\System.Runtime.Serialization.Formatters\src\System\Runtime\Serialization\FormatterServices.cs:line 175
at System.Runtime.Serialization.Formatters.Binary.WriteObjectInfo.InitMemberInfo() in E:\A\_work\441\s\corefx\src\System.Runtime.Serialization.Formatters\src\System\Runtime\Serialization\Formatters\Binary\BinaryObjectInfo.cs:line 242
at System.Runtime.Serialization.Formatters.Binary.WriteObjectInfo.InitSerialize(Object obj, $ISurrogateSelector surrogateSelector, StreamingContext context, $SerObjectInfoInit serObjectInfoInit, IFormatterConverter converter, $ObjectWriter objectWriter, $SerializationBinder binder) in E:\A\_work\441\s\corefx\src\System.Runtime.Serialization.Formatters\src\System\Runtime\Serialization\Formatters\Binary\BinaryObjectInfo.cs:line 115
at System.Runtime.Serialization.Formatters.Binary.WriteObjectInfo.Serialize(Object obj, $ISurrogateSelector surrogateSelector, StreamingContext context, $SerObjectInfoInit serObjectInfoInit, IFormatterConverter converter, $ObjectWriter objectWriter, $SerializationBinder binder) in E:\A\_work\441\s\corefx\src\System.Runtime.Serialization.Formatters\src\System\Runtime\Serialization\Formatters\Binary\BinaryObjectInfo.cs:line 70
at System.Runtime.Serialization.Formatters.Binary.ObjectWriter.Write($WriteObjectInfo objectInfo, $NameInfo memberNameInfo, $NameInfo typeNameInfo) in E:\A\_work\441\s\corefx\src\System.Runtime.Serialization.Formatters\src\System\Runtime\Serialization\Formatters\Binary\BinaryObjectWriter.cs:line 174
at System.Runtime.Serialization.Formatters.Binary.ObjectWriter.Serialize(Object graph, $BinaryFormatterWriter serWriter, Boolean fCheck) in E:\A\_work\441\s\corefx\src\System.Runtime.Serialization.Formatters\src\System\Runtime\Serialization\Formatters\Binary\BinaryObjectWriter.cs:line 98
at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Serialize(Stream serializationStream, Object graph, Boolean check) in E:\A\_work\441\s\corefx\src\System.Runtime.Serialization.Formatters\src\System\Runtime\Serialization\Formatters\Binary\BinaryFormatter.cs:line 87
at System.Runtime.Serialization.Formatters.Tests.BinaryFormatterHelpers.Clone(__Canon obj) in E:\A\_work\441\s\corefx\src\Common\tests\System\Runtime\Serialization\Formatters\BinaryFormatterHelpers.cs:line 21
at System.Runtime.Serialization.Formatters.Tests.BinaryFormatterHelpers.AssertRoundtrips(__Canon expected, __Canon[] additionalGetters) in E:\A\_work\441\s\corefx\src\Common\tests\System\Runtime\Serialization\Formatters\BinaryFormatterHelpers.cs:line 39
at System.Runtime.Serialization.Formatters.Tests.BinaryFormatterTests.Roundtrip_Exceptions(Exception expected) in E:\A\_work\441\s\corefx\src\System.Runtime.Serialization.Formatters\tests\BinaryFormatterTests.cs:line 168
at _$ILCT$.$ILT$ReflectionDynamicInvoke$.InvokeRetVI(Object thisPtr, IntPtr methodToCall, ArgSetupState argSetupState, Boolean targetIsThisCall)
at System.InvokeUtils.CalliIntrinsics.Call(IntPtr dynamicInvokeHelperMethod, IntPtr dynamicInvokeHelperGenericDictionary, Object thisPtr, IntPtr methodToCall, ArgSetupState argSetupState, Boolean isTargetThisCall)
at System.InvokeUtils.CallDynamicInvokeMethod(Object thisPtr, IntPtr methodToCall, Object thisPtrDynamicInvokeMethod, IntPtr dynamicInvokeHelperMethod, IntPtr dynamicInvokeHelperGenericDictionary, Object targetMethodOrDelegate, Object[] parameters, BinderBundle binderBundle, Boolean invokeMethodHelperIsThisCall, Boolean methodToCallIsThisCall) in CallDynamicInvokeMethod:line 16707566
Build : Master - 20170807.01 (UWP ILC Tests)
Failing configurations:
- Windows.10.Amd64.ClientRS3-x86
- Debug
- Release
- Windows.10.Amd64.ClientRS3-x64
- Debug
- Release
Detail: https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Filc~2F/build/20170807.01/workItem/System.Runtime.Serialization.Formatters.Tests/analysis/xunit/System.Runtime.Serialization.Formatters.Tests.BinaryFormatterTests~2FRoundtrip_Exceptions(expected:%20System.AggregateException:%20Aggregate%20exception%20message%20(Exception%20message)%20---%3E%20System.Excepti...",1.0,"Test: System.Runtime.Serialization.Formatters.Tests.BinaryFormatterTests/Roundtrip_Exceptions failed with ""System.Runtime.Serialization.SerializationException"" - Opened on behalf of @Jiayili1
The test `System.Runtime.Serialization.Formatters.Tests.BinaryFormatterTests/Roundtrip_Exceptions(expected: System.AggregateException: Aggregate exception message (Exception message) ---> System.Excepti...` has failed.
System.Runtime.Serialization.SerializationException : Type '$BlockedFromReflection_0_6907734b' in Assembly 'System.Private.CoreLib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' is not marked as serializable.
Stack Trace:
at System.Runtime.Serialization.FormatterServices.InternalGetSerializableMembers(Type type) in E:\A\_work\441\s\corefx\src\System.Runtime.Serialization.Formatters\src\System\Runtime\Serialization\FormatterServices.cs:line 82
at System.Runtime.Serialization.FormatterServices.<>c.b__5_0($MemberHolder mh) in E:\A\_work\441\s\corefx\src\System.Runtime.Serialization.Formatters\src\System\Runtime\Serialization\FormatterServices.cs:line 177
at System.Func$2.Invoke(Int32 arg) in Invoke:line 16707566
at System.Collections.Concurrent.ConcurrentDictionary$2.GetOrAdd(__Canon key, Func$2<__Canon,__Canon> valueFactory) in E:\A\_work\441\s\corefx\src\System.Collections.Concurrent\src\System\Collections\Concurrent\ConcurrentDictionary.cs:line 989
at System.Runtime.Serialization.FormatterServices.GetSerializableMembers(Type type, StreamingContext context) in E:\A\_work\441\s\corefx\src\System.Runtime.Serialization.Formatters\src\System\Runtime\Serialization\FormatterServices.cs:line 175
at System.Runtime.Serialization.Formatters.Binary.WriteObjectInfo.InitMemberInfo() in E:\A\_work\441\s\corefx\src\System.Runtime.Serialization.Formatters\src\System\Runtime\Serialization\Formatters\Binary\BinaryObjectInfo.cs:line 242
at System.Runtime.Serialization.Formatters.Binary.WriteObjectInfo.InitSerialize(Object obj, $ISurrogateSelector surrogateSelector, StreamingContext context, $SerObjectInfoInit serObjectInfoInit, IFormatterConverter converter, $ObjectWriter objectWriter, $SerializationBinder binder) in E:\A\_work\441\s\corefx\src\System.Runtime.Serialization.Formatters\src\System\Runtime\Serialization\Formatters\Binary\BinaryObjectInfo.cs:line 115
at System.Runtime.Serialization.Formatters.Binary.WriteObjectInfo.Serialize(Object obj, $ISurrogateSelector surrogateSelector, StreamingContext context, $SerObjectInfoInit serObjectInfoInit, IFormatterConverter converter, $ObjectWriter objectWriter, $SerializationBinder binder) in E:\A\_work\441\s\corefx\src\System.Runtime.Serialization.Formatters\src\System\Runtime\Serialization\Formatters\Binary\BinaryObjectInfo.cs:line 70
at System.Runtime.Serialization.Formatters.Binary.ObjectWriter.Write($WriteObjectInfo objectInfo, $NameInfo memberNameInfo, $NameInfo typeNameInfo) in E:\A\_work\441\s\corefx\src\System.Runtime.Serialization.Formatters\src\System\Runtime\Serialization\Formatters\Binary\BinaryObjectWriter.cs:line 174
at System.Runtime.Serialization.Formatters.Binary.ObjectWriter.Serialize(Object graph, $BinaryFormatterWriter serWriter, Boolean fCheck) in E:\A\_work\441\s\corefx\src\System.Runtime.Serialization.Formatters\src\System\Runtime\Serialization\Formatters\Binary\BinaryObjectWriter.cs:line 98
at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Serialize(Stream serializationStream, Object graph, Boolean check) in E:\A\_work\441\s\corefx\src\System.Runtime.Serialization.Formatters\src\System\Runtime\Serialization\Formatters\Binary\BinaryFormatter.cs:line 87
at System.Runtime.Serialization.Formatters.Tests.BinaryFormatterHelpers.Clone(__Canon obj) in E:\A\_work\441\s\corefx\src\Common\tests\System\Runtime\Serialization\Formatters\BinaryFormatterHelpers.cs:line 21
at System.Runtime.Serialization.Formatters.Tests.BinaryFormatterHelpers.AssertRoundtrips(__Canon expected, __Canon[] additionalGetters) in E:\A\_work\441\s\corefx\src\Common\tests\System\Runtime\Serialization\Formatters\BinaryFormatterHelpers.cs:line 39
at System.Runtime.Serialization.Formatters.Tests.BinaryFormatterTests.Roundtrip_Exceptions(Exception expected) in E:\A\_work\441\s\corefx\src\System.Runtime.Serialization.Formatters\tests\BinaryFormatterTests.cs:line 168
at _$ILCT$.$ILT$ReflectionDynamicInvoke$.InvokeRetVI(Object thisPtr, IntPtr methodToCall, ArgSetupState argSetupState, Boolean targetIsThisCall)
at System.InvokeUtils.CalliIntrinsics.Call(IntPtr dynamicInvokeHelperMethod, IntPtr dynamicInvokeHelperGenericDictionary, Object thisPtr, IntPtr methodToCall, ArgSetupState argSetupState, Boolean isTargetThisCall)
at System.InvokeUtils.CallDynamicInvokeMethod(Object thisPtr, IntPtr methodToCall, Object thisPtrDynamicInvokeMethod, IntPtr dynamicInvokeHelperMethod, IntPtr dynamicInvokeHelperGenericDictionary, Object targetMethodOrDelegate, Object[] parameters, BinderBundle binderBundle, Boolean invokeMethodHelperIsThisCall, Boolean methodToCallIsThisCall) in CallDynamicInvokeMethod:line 16707566
Build : Master - 20170807.01 (UWP ILC Tests)
Failing configurations:
- Windows.10.Amd64.ClientRS3-x86
- Debug
- Release
- Windows.10.Amd64.ClientRS3-x64
- Debug
- Release
Detail: https://mc.dot.net/#/product/netcore/master/source/official~2Fcorefx~2Fmaster~2F/type/test~2Ffunctional~2Filc~2F/build/20170807.01/workItem/System.Runtime.Serialization.Formatters.Tests/analysis/xunit/System.Runtime.Serialization.Formatters.Tests.BinaryFormatterTests~2FRoundtrip_Exceptions(expected:%20System.AggregateException:%20Aggregate%20exception%20message%20(Exception%20message)%20---%3E%20System.Excepti...",0,test system runtime serialization formatters tests binaryformattertests roundtrip exceptions failed with system runtime serialization serializationexception opened on behalf of the test system runtime serialization formatters tests binaryformattertests roundtrip exceptions expected system aggregateexception aggregate exception message exception message system excepti has failed system runtime serialization serializationexception type blockedfromreflection in assembly system private corelib version culture neutral publickeytoken is not marked as serializable stack trace at system runtime serialization formatterservices internalgetserializablemembers type type in e a work s corefx src system runtime serialization formatters src system runtime serialization formatterservices cs line at system runtime serialization formatterservices c b memberholder mh in e a work s corefx src system runtime serialization formatters src system runtime serialization formatterservices cs line at system func invoke arg in invoke line at system collections concurrent concurrentdictionary getoradd canon key func valuefactory in e a work s corefx src system collections concurrent src system collections concurrent concurrentdictionary cs line at system runtime serialization formatterservices getserializablemembers type type streamingcontext context in e a work s corefx src system runtime serialization formatters src system runtime serialization formatterservices cs line at system runtime serialization formatters binary writeobjectinfo initmemberinfo in e a work s corefx 
src system runtime serialization formatters src system runtime serialization formatters binary binaryobjectinfo cs line at system runtime serialization formatters binary writeobjectinfo initserialize object obj isurrogateselector surrogateselector streamingcontext context serobjectinfoinit serobjectinfoinit iformatterconverter converter objectwriter objectwriter serializationbinder binder in e a work s corefx src system runtime serialization formatters src system runtime serialization formatters binary binaryobjectinfo cs line at system runtime serialization formatters binary writeobjectinfo serialize object obj isurrogateselector surrogateselector streamingcontext context serobjectinfoinit serobjectinfoinit iformatterconverter converter objectwriter objectwriter serializationbinder binder in e a work s corefx src system runtime serialization formatters src system runtime serialization formatters binary binaryobjectinfo cs line at system runtime serialization formatters binary objectwriter write writeobjectinfo objectinfo nameinfo membernameinfo nameinfo typenameinfo in e a work s corefx src system runtime serialization formatters src system runtime serialization formatters binary binaryobjectwriter cs line at system runtime serialization formatters binary objectwriter serialize object graph binaryformatterwriter serwriter boolean fcheck in e a work s corefx src system runtime serialization formatters src system runtime serialization formatters binary binaryobjectwriter cs line at system runtime serialization formatters binary binaryformatter serialize stream serializationstream object graph boolean check in e a work s corefx src system runtime serialization formatters src system runtime serialization formatters binary binaryformatter cs line at system runtime serialization formatters tests binaryformatterhelpers clone canon obj in e a work s corefx src common tests system runtime serialization formatters binaryformatterhelpers cs line at system runtime 
serialization formatters tests binaryformatterhelpers assertroundtrips canon expected canon additionalgetters in e a work s corefx src common tests system runtime serialization formatters binaryformatterhelpers cs line at system runtime serialization formatters tests binaryformattertests roundtrip exceptions exception expected in e a work s corefx src system runtime serialization formatters tests binaryformattertests cs line at ilct ilt reflectiondynamicinvoke invokeretvi object thisptr intptr methodtocall argsetupstate argsetupstate boolean targetisthiscall at system invokeutils calliintrinsics call intptr dynamicinvokehelpermethod intptr dynamicinvokehelpergenericdictionary object thisptr intptr methodtocall argsetupstate argsetupstate boolean istargetthiscall at system invokeutils calldynamicinvokemethod object thisptr intptr methodtocall object thisptrdynamicinvokemethod intptr dynamicinvokehelpermethod intptr dynamicinvokehelpergenericdictionary object targetmethodordelegate object parameters binderbundle binderbundle boolean invokemethodhelperisthiscall boolean methodtocallisthiscall in calldynamicinvokemethod line build master uwp ilc tests failing configurations windows debug release windows debug release detail ,0
179652,14707768275.0,IssuesEvent,2021-01-04 22:12:47,cortex-lab/Rigbox,https://api.github.com/repos/cortex-lab/Rigbox,closed,Guide on how to update the code,documentation enhancement,"**Is your feature request related to a problem? Please describe.**
People appear to find it difficult to update the code and/or switch between versions.
**Describe the solution you'd like**
We should have a guide called 'Updating the code' that follows on from the installation guide, which details how to set the automatic code update in the paths file, how to switch between releases in Git using the tag, how to download the source code (or at least where the releases can be found on Github), how to switch between dev and master via git, and how to undo the last pull.",1.0,"Guide on how to update the code - **Is your feature request related to a problem? Please describe.**
People appear to find it difficult to update the code and/or switch between versions.
**Describe the solution you'd like**
We should have a guide called 'Updating the code' that follows on from the installation guide, which details how to set the automatic code update in the paths file, how to switch between releases in Git using the tag, how to download the source code (or at least where the releases can be found on Github), how to switch between dev and master via git, and how to undo the last pull.",0,guide on how to update the code is your feature request related to a problem please describe people appear to find it difficult to update the code and or switch between versions describe the solution you d like we should have a guide called updating the code that follows on from the installation guide which details how to set the automatic code update in the paths file how to switch between releases in git using the tag how to download the source code or at least where the releases can be found on github how to switch between dev and master via git and how to undo the last pull ,0
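The git operations the issue above asks the guide to cover can be sketched end-to-end. The sketch below is illustrative only (repository contents, the `v1.0` tag, the identity, and the commit messages are all made up for the demo): it builds a throwaway repository, switches to a release tag, returns to the working branch, and undoes the most recent change — after a real `git pull`, `ORIG_HEAD` plays the role `HEAD~1` plays here.

```python
import os
import subprocess
import tempfile

def git(*args, cwd):
    # Run a git command in the given repo and return its trimmed stdout
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout.strip()

repo = tempfile.mkdtemp()
git("init", "-q", cwd=repo)
git("config", "user.email", "demo@example.com", cwd=repo)  # throwaway identity
git("config", "user.name", "demo", cwd=repo)

# A "release": one commit carrying a tag
with open(os.path.join(repo, "f.txt"), "w") as f:
    f.write("release\n")
git("add", "f.txt", cwd=repo)
git("commit", "-qm", "release commit", cwd=repo)
git("tag", "v1.0", cwd=repo)

# Later development work on top of the release
with open(os.path.join(repo, "f.txt"), "w") as f:
    f.write("dev work\n")
git("commit", "-qam", "dev work", cwd=repo)

git("checkout", "-q", "v1.0", cwd=repo)  # switch to a tagged release
git("checkout", "-q", "-", cwd=repo)     # switch back to the previous branch
git("reset", "--hard", "-q", "HEAD~1", cwd=repo)  # undo the last change
print(open(os.path.join(repo, "f.txt")).read().strip())
```

After the reset, `f.txt` is back to its release-tag contents, which is the "undo the last pull" behaviour the guide would need to explain.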
780,10476363397.0,IssuesEvent,2019-09-23 18:25:33,microsoft/BotFramework-DirectLineJS,https://api.github.com/repos/microsoft/BotFramework-DirectLineJS,opened,Happy path: upload a single file without text message,0 Reliability 0 Streaming Extensions,"1. Start a conversation
1. Upload a single file without text message
Make sure the bot can receive it.",True,"Happy path: upload a single file without text message - 1. Start a conversation
1. Upload a single file without text message
Make sure the bot can receive it.",1,happy path upload a single file without text message start a conversation upload a single file without text message make sure the bot can receive it ,1
161181,25299046173.0,IssuesEvent,2022-11-17 09:19:55,gitpod-io/gitpod,https://api.github.com/repos/gitpod-io/gitpod,opened,[Admin dashboard] - Show multiple lists under a user or team,feature: admin dashboard needs visual design team: webapp team: product-design,"The admin dashboard should be able to represent the relationship between a user and other entities like usage and teams. Currently we only show workspaces.
The same requirement exists for teams e.g. to show team members + team projects + team usage.
How about adding tab navigation to toggle between different lists under the user or team detail.",2.0,"[Admin dashboard] - Show multiple lists under a user or team - The admin dashboard should be able to represent the relationship between a user and other entities like usage and teams. Currently we only show workspaces.
The same requirement exists for teams e.g. to show team members + team projects + team usage.
How about adding tab navigation to toggle between different lists under the user or team detail.",0, show multiple lists under a user or team the admin dashboard should be able to represent the relationship between a user and other entities like usage and teams currently we only show workspaces the same requirement exists for teams e g to show team members team projects team usage how about adding tab navigation to toggle between different lists under the user or team detail ,0
188,4020461574.0,IssuesEvent,2016-05-16 18:30:03,wordpress-mobile/WordPress-iOS,https://api.github.com/repos/wordpress-mobile/WordPress-iOS,closed,People Management: Delete Users,People Management [Type] Enhancement,"#### Details:
Admins should be allowed to Delete Users. Let's replicate Calypso's UX:
- On delete, Posts should be reassigned to another user (to be picked).
- Calypso's User deletion can be found [here](https://github.com/Automattic/wp-calypso/blob/12a27beda288923d5186807f1157f04389f54c06/client/my-sites/people/delete-user/index.jsx), and the actual call, [here](https://github.com/Automattic/wp-calypso/blob/12a27beda288923d5186807f1157f04389f54c06/client/lib/users/actions.js#L59)
- A user cannot delete himself (sure!)
",1.0,"People Management: Delete Users - #### Details:
Admins should be allowed to Delete Users. Let's replicate Calypso's UX:
- On delete, Posts should be reassigned to another user (to be picked).
- Calypso's User deletion can be found [here](https://github.com/Automattic/wp-calypso/blob/12a27beda288923d5186807f1157f04389f54c06/client/my-sites/people/delete-user/index.jsx), and the actual call, [here](https://github.com/Automattic/wp-calypso/blob/12a27beda288923d5186807f1157f04389f54c06/client/lib/users/actions.js#L59)
- A user cannot delete himself (sure!)
",0,people management delete users details admins should be allowed to delete users let s replicate calypso s ux on delete posts should be reassigned to another user to be picked calypso s user deletion can be found and the actual call a user cannot delete himself sure ,0
600000,18287513645.0,IssuesEvent,2021-10-05 12:01:42,zephyrproject-rtos/zephyr,https://api.github.com/repos/zephyrproject-rtos/zephyr,closed,There is no way to leave an IPv6 multicast group,bug priority: low area: Networking area: OpenThread,"**Describe the bug**
It is possible for the coap_server to join a multicast group, however it is not possible to leave a multicast group. There is no implementation of `NET_EVENT_IPV6_CMD_MADDR_DEL` in `ipv6_addr_event_handler` in `openthread.c`.
Building the coap_server with Openthread support fails because of `net_ipv6_mld_join`. Replacing this method with `net_if_ipv6_maddr_add` and `net_if_ipv6_maddr_join` seems to solve the problem.
An issue related to the 2 points above has been reported here: https://github.com/zephyrproject-rtos/zephyr/issues/17692.
**To Reproduce**
Steps to reproduce the behavior:
1. Build the coap_server sample with Openthread
2. Build process fails
3. Change `net_ipv6_mld_join` to `net_if_ipv6_maddr_add` and `net_if_ipv6_maddr_join` to be able to build the sample.
4. There is no way of leaving a multicast group
**Expected behavior**
I expected that there would be a method to leave a certain multicast group.
**Impact**
I am not able to leave a multicast group.
**Environment (please complete the following information):**
- Toolchain (e.g Zephyr SDK, ...)
- Zephyr: v2.6.99-ncs1-rc2",1.0,"There is no way to leave an IPv6 multicast group - **Describe the bug**
It is possible for the coap_server to join a multicast group, however it is not possible to leave a multicast group. There is no implementation of `NET_EVENT_IPV6_CMD_MADDR_DEL` in `ipv6_addr_event_handler` in `openthread.c`.
Building the coap_server with Openthread support fails because of `net_ipv6_mld_join`. Replacing this method with `net_if_ipv6_maddr_add` and `net_if_ipv6_maddr_join` seems to solve the problem.
An issue related to the 2 points above has been reported here: https://github.com/zephyrproject-rtos/zephyr/issues/17692.
**To Reproduce**
Steps to reproduce the behavior:
1. Build the coap_server sample with Openthread
2. Build process fails
3. Change `net_ipv6_mld_join` to `net_if_ipv6_maddr_add` and `net_if_ipv6_maddr_join` to be able to build the sample.
4. There is no way of leaving a multicast group
**Expected behavior**
I expected that there would be a method to leave a certain multicast group.
**Impact**
I am not able to leave a multicast group.
**Environment (please complete the following information):**
- Toolchain (e.g Zephyr SDK, ...)
- Zephyr: v2.6.99-ncs1-rc2",0,there is no way to leave a multicast group describe the bug it is possible for the coap server to join a multicast group however it is not possible to leave a multicast group there is no implementation of net event cmd maddr del in addr event handler in openthread c building the coap server with openthread support fails because of net mld join replacing this method with net if maddr add and net if maddr join seems to solve the problem an issue related to the points above has been reported here to reproduce steps to reproduce the behavior build the coap server sample with openthread build process fails change net mld join to net if maddr add and net if maddr join to be able to build the sample there is no way of leaving a multicast group expected behavior i expected that there were a method to leave a certain multicast group impact i am not able to leave a multicast group environment please complete the following information toolchain e g zephyr sdk zephyr ,0
1073,12827069580.0,IssuesEvent,2020-07-06 17:45:13,osbuild/osbuild-composer,https://api.github.com/repos/osbuild/osbuild-composer,closed,RHEL 8.x has issues downloading from EPEL occasionally,ci-reliability,"```
[2020-07-06T14:51:42.719Z] + sudo dnf repository-packages osbuild-mock list
[2020-07-06T14:51:44.673Z] Updating Subscription Management repositories.
[2020-07-06T14:51:44.674Z] /usr/lib/python3.6/site-packages/dateutil/parser/_parser.py:70: UnicodeWarning: decode() called on unicode string, see https://bugzilla.redhat.com/show_bug.cgi?id=1693751
[2020-07-06T14:51:44.674Z] instream = instream.decode()
[2020-07-06T14:51:44.674Z]
[2020-07-06T14:51:45.629Z] osbuild mock osbuild/osbuild-composer/PR-809-8a 81 kB/s | 9.3 kB 00:00
[2020-07-06T14:51:45.630Z] Extra Packages for Enterprise Linux Modular 8 - 237 kB/s | 18 kB 00:00
[2020-07-06T14:51:45.902Z] Extra Packages for Enterprise Linux Modular 8 - 533 kB/s | 154 kB 00:00
[2020-07-06T14:51:46.163Z] Extra Packages for Enterprise Linux 8 - x86_64 70 kB/s | 19 kB 00:00
[2020-07-06T14:52:32.978Z] Extra Packages for Enterprise Linux 8 - x86_64 114 B/s | 4.7 kB 00:42
[2020-07-06T14:52:32.979Z] Errors during downloading metadata for repository 'epel':
[2020-07-06T14:52:32.979Z] - Curl error (28): Timeout was reached for http://mirror.compevo.com/epel/8/Everything/x86_64/repodata/repomd.xml [Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds]
[2020-07-06T14:52:32.979Z] - Downloading successful, but checksum doesn't match. Calculated: e0142781244cd8bcae4151d20e6d012fdc1a0cdb891a6addb7dce48ab939769b328a4652f58ed8980e16731fcffa0c7ece1f3ceb6e252cba984347af0c5b4adb(sha512) e0142781244cd8bcae4151d20e6d012fdc1a0cdb891a6addb7dce48ab939769b328a4652f58ed8980e16731fcffa0c7ece1f3ceb6e252cba984347af0c5b4adb(sha512) e0142781244cd8bcae4151d20e6d012fdc1a0cdb891a6addb7dce48ab939769b328a4652f58ed8980e16731fcffa0c7ece1f3ceb6e252cba984347af0c5b4adb(sha512) Expected: 10d1ee971104fe861b33c884b00403192f002da75cc916533f663be8a081079e76eef95edbdda29c45c2871cc176251586d1b3e6b24354d3932804ad7a627043(sha512) 33456fb7e50f318f327f2512f2aa7fce451994ae6445ce8111187d39288469303e85cc943a1d7178db81c0ae250dc31d53d20d97eb263bbc244dbe21b9bd0e3e(sha512) cc9f27ff3c3100c094a36e349ef599ba1a0a8d6614e4f399c750f8dbfbf755956698f07636bbb3c00180630873597da64cece73c97edf00a0af5e560b2abb113(sha512)
[2020-07-06T14:52:32.979Z] Error: Failed to download metadata for repo 'epel': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
```",True,"RHEL 8.x has issues downloading from EPEL occasionally - ```
[2020-07-06T14:51:42.719Z] + sudo dnf repository-packages osbuild-mock list
[2020-07-06T14:51:44.673Z] Updating Subscription Management repositories.
[2020-07-06T14:51:44.674Z] /usr/lib/python3.6/site-packages/dateutil/parser/_parser.py:70: UnicodeWarning: decode() called on unicode string, see https://bugzilla.redhat.com/show_bug.cgi?id=1693751
[2020-07-06T14:51:44.674Z] instream = instream.decode()
[2020-07-06T14:51:44.674Z]
[2020-07-06T14:51:45.629Z] osbuild mock osbuild/osbuild-composer/PR-809-8a 81 kB/s | 9.3 kB 00:00
[2020-07-06T14:51:45.630Z] Extra Packages for Enterprise Linux Modular 8 - 237 kB/s | 18 kB 00:00
[2020-07-06T14:51:45.902Z] Extra Packages for Enterprise Linux Modular 8 - 533 kB/s | 154 kB 00:00
[2020-07-06T14:51:46.163Z] Extra Packages for Enterprise Linux 8 - x86_64 70 kB/s | 19 kB 00:00
[2020-07-06T14:52:32.978Z] Extra Packages for Enterprise Linux 8 - x86_64 114 B/s | 4.7 kB 00:42
[2020-07-06T14:52:32.979Z] Errors during downloading metadata for repository 'epel':
[2020-07-06T14:52:32.979Z] - Curl error (28): Timeout was reached for http://mirror.compevo.com/epel/8/Everything/x86_64/repodata/repomd.xml [Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds]
[2020-07-06T14:52:32.979Z] - Downloading successful, but checksum doesn't match. Calculated: e0142781244cd8bcae4151d20e6d012fdc1a0cdb891a6addb7dce48ab939769b328a4652f58ed8980e16731fcffa0c7ece1f3ceb6e252cba984347af0c5b4adb(sha512) e0142781244cd8bcae4151d20e6d012fdc1a0cdb891a6addb7dce48ab939769b328a4652f58ed8980e16731fcffa0c7ece1f3ceb6e252cba984347af0c5b4adb(sha512) e0142781244cd8bcae4151d20e6d012fdc1a0cdb891a6addb7dce48ab939769b328a4652f58ed8980e16731fcffa0c7ece1f3ceb6e252cba984347af0c5b4adb(sha512) Expected: 10d1ee971104fe861b33c884b00403192f002da75cc916533f663be8a081079e76eef95edbdda29c45c2871cc176251586d1b3e6b24354d3932804ad7a627043(sha512) 33456fb7e50f318f327f2512f2aa7fce451994ae6445ce8111187d39288469303e85cc943a1d7178db81c0ae250dc31d53d20d97eb263bbc244dbe21b9bd0e3e(sha512) cc9f27ff3c3100c094a36e349ef599ba1a0a8d6614e4f399c750f8dbfbf755956698f07636bbb3c00180630873597da64cece73c97edf00a0af5e560b2abb113(sha512)
[2020-07-06T14:52:32.979Z] Error: Failed to download metadata for repo 'epel': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
```",1,rhel x has issues downloading from epel occasionally sudo dnf repository packages osbuild mock list updating subscription management repositories usr lib site packages dateutil parser parser py unicodewarning decode called on unicode string see instream instream decode osbuild mock osbuild osbuild composer pr kb s kb extra packages for enterprise linux modular kb s kb extra packages for enterprise linux modular kb s kb extra packages for enterprise linux kb s kb extra packages for enterprise linux b s kb errors during downloading metadata for repository epel curl error timeout was reached for downloading successful but checksum doesn t match calculated expected error failed to download metadata for repo epel cannot download repomd xml cannot download repodata repomd xml all mirrors were tried ,1
1202,13793214864.0,IssuesEvent,2020-10-09 14:39:07,argoproj/argo,https://api.github.com/repos/argoproj/argo,closed,v2.11: Steps (not DAG) always fails (with conflict error) and no helpful diagnostics (AWS only?),P2 bug epic/reliability,"## Summary
Trying Example https://argoproj.github.io/argo/examples/#output-parameters
## Diagnostics
Argo version 2.11.0
The 2nd step (consume-parameter) failed because it was unable to get logs, while step 1 (generate-parameter) succeeded in pulling them.
Below are the 2 API calls made to get the logs:
> http://internal-23e.us-east-1.elb.amazonaws.com:2746/artifacts/argo/output-parameters-wt294/output-parameters-wt294-3630452401/main-logs
(step: consume-parameter) Unsuccessful
**Response: Artifact not found**
> http://internal-23e.us-east-1.elb.amazonaws.com:2746/artifacts/argo/output-parameters-wt294/output-parameters-wt294-2743065841/main-logs
(step: generate-parameter) Successful
```yaml
Paste the workflow here, including status:
kubectl get wf -o yaml ${workflow}
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
creationTimestamp: ""2020-09-22T22:29:47Z""
generateName: output-parameters-dummy-
generation: 13
labels:
workflows.argoproj.io/completed: ""true""
workflows.argoproj.io/phase: Failed
name: output-parameters-dummy-k28ls
namespace: argo
resourceVersion: ""46048090""
selfLink: /apis/argoproj.io/v1alpha1/namespaces/argo/workflows/output-parameters-dummy-k28ls
uid: 590d6b5c-b75b-4d4f-a98d-8f88c7b1130f
spec:
arguments: {}
entrypoint: output-parameters
templates:
- arguments: {}
inputs: {}
metadata: {}
name: output-parameters
outputs: {}
steps:
- - arguments: {}
name: generate-parameter
template: whalesay
- - arguments:
parameters:
- name: message
value: '{{steps.generate-parameter.outputs.parameters.hello-param}}'
name: consume-parameter
template: print-message
- - arguments: {}
name: dummy
template: dummy
- arguments: {}
container:
args:
- echo -n hello world > /tmp/hello_world.txt
command:
- sh
- -c
image: docker/whalesay:latest
name: """"
resources: {}
inputs: {}
metadata: {}
name: whalesay
outputs:
parameters:
- name: hello-param
valueFrom:
path: /tmp/hello_world.txt
- arguments: {}
container:
args:
- '{{inputs.parameters.message}}'
command:
- cowsay
image: docker/whalesay:latest
name: """"
resources: {}
inputs:
parameters:
- name: message
metadata: {}
name: print-message
outputs: {}
- arguments: {}
container:
args:
- test
command:
- echo
image: selumalai/basicubuntu:latest
name: """"
resources: {}
inputs: {}
metadata: {}
name: dummy
outputs: {}
status:
conditions:
- status: ""True""
type: Completed
finishedAt: ""2020-09-22T22:31:00Z""
message: child 'output-parameters-dummy-k28ls[1].consume-parameter' errored
nodes:
output-parameters-dummy-k28ls:
children:
- output-parameters-dummy-k28ls-898385737
displayName: output-parameters-dummy-k28ls
finishedAt: ""2020-09-22T22:31:00Z""
id: output-parameters-dummy-k28ls
message: child 'output-parameters-dummy-k28ls[1].consume-parameter' errored
name: output-parameters-dummy-k28ls
phase: Failed
startedAt: ""2020-09-22T22:29:47Z""
templateName: output-parameters
templateScope: local/output-parameters-dummy-k28ls
type: Steps
output-parameters-dummy-k28ls-537766264:
boundaryID: output-parameters-dummy-k28ls
children:
- output-parameters-dummy-k28ls-4052725204
displayName: generate-parameter
finishedAt: ""2020-09-22T22:29:50Z""
id: output-parameters-dummy-k28ls-537766264
name: output-parameters-dummy-k28ls[0].generate-parameter
outputs:
artifacts:
- archiveLogs: true
name: main-logs
s3:
accessKeySecret:
key: accesskey
name: my-minio-cred
bucket: my-bucket
endpoint: minio:9000
insecure: true
key: output-parameters-dummy-k28ls/output-parameters-dummy-k28ls-537766264/main.log
secretKeySecret:
key: secretkey
name: my-minio-cred
parameters:
- name: hello-param
value: hello world
valueFrom:
path: /tmp/hello_world.txt
phase: Succeeded
startedAt: ""2020-09-22T22:29:47Z""
templateName: whalesay
templateScope: local/output-parameters-dummy-k28ls
type: Pod
output-parameters-dummy-k28ls-898385737:
boundaryID: output-parameters-dummy-k28ls
children:
- output-parameters-dummy-k28ls-537766264
displayName: '[0]'
finishedAt: ""2020-09-22T22:29:51Z""
id: output-parameters-dummy-k28ls-898385737
name: output-parameters-dummy-k28ls[0]
phase: Succeeded
startedAt: ""2020-09-22T22:29:47Z""
templateName: output-parameters
templateScope: local/output-parameters-dummy-k28ls
type: StepGroup
output-parameters-dummy-k28ls-4052725204:
boundaryID: output-parameters-dummy-k28ls
children:
- output-parameters-dummy-k28ls-4217895726
displayName: '[1]'
finishedAt: ""2020-09-22T22:31:00Z""
id: output-parameters-dummy-k28ls-4052725204
message: child 'output-parameters-dummy-k28ls[1].consume-parameter' errored
name: output-parameters-dummy-k28ls[1]
phase: Error
startedAt: ""2020-09-22T22:29:51Z""
templateName: output-parameters
type: StepGroup
output-parameters-dummy-k28ls-4217895726:
boundaryID: output-parameters-dummy-k28ls
displayName: consume-parameter
finishedAt: null
id: output-parameters-dummy-k28ls-4217895726
inputs:
parameters:
- name: message
value: hello world
name: output-parameters-dummy-k28ls[1].consume-parameter
phase: Pending
startedAt: ""2020-09-22T22:31:00Z""
templateName: print-message
templateScope: local/output-parameters-dummy-k28ls
type: Pod
phase: Failed
startedAt: ""2020-09-22T22:29:47Z""
```
```
Paste the logs from the workflow controller:
kubectl logs -n argo $(kubectl get pods -l app=workflow-controller -n argo -o name) | grep ${workflow}
k kubectl logs -n argo $(k kubectl get pods -l app=workflow-controller -n argo -o name) | grep output-parameters-dummy-k28ls
time=""2020-09-22T22:29:47Z"" level=info msg=""Processing workflow"" namespace=argo workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:29:47Z"" level=info msg=""Updated phase -> Running"" namespace=argo workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:29:47Z"" level=info msg=""Steps node output-parameters-dummy-k28ls initialized Running"" namespace=argo workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:29:47Z"" level=info msg=""StepGroup node output-parameters-dummy-k28ls-898385737 initialized Running"" namespace=argo workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:29:47Z"" level=info msg=""Pod node output-parameters-dummy-k28ls-537766264 initialized Pending"" namespace=argo workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:29:47Z"" level=info msg=""Created pod: output-parameters-dummy-k28ls[0].generate-parameter (output-parameters-dummy-k28ls-537766264)"" namespace=argo workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:29:47Z"" level=info msg=""Workflow step group node output-parameters-dummy-k28ls-898385737 not yet completed"" namespace=argo workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:29:47Z"" level=info msg=""Workflow update successful"" namespace=argo phase=Running resourceVersion=46047464 workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:29:48Z"" level=info msg=""Processing workflow"" namespace=argo workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:29:48Z"" level=info msg=""Updating node output-parameters-dummy-k28ls-537766264 message: ContainerCreating""
time=""2020-09-22T22:29:48Z"" level=info msg=""Skipped pod output-parameters-dummy-k28ls[0].generate-parameter (output-parameters-dummy-k28ls-537766264) creation: already exists"" namespace=argo podPhase=Pending workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:29:48Z"" level=info msg=""Workflow step group node output-parameters-dummy-k28ls-898385737 not yet completed"" namespace=argo workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:29:48Z"" level=info msg=""Workflow update successful"" namespace=argo phase=Running resourceVersion=46047471 workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:29:49Z"" level=info msg=""Processing workflow"" namespace=argo workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:29:49Z"" level=info msg=""Skipped pod output-parameters-dummy-k28ls[0].generate-parameter (output-parameters-dummy-k28ls-537766264) creation: already exists"" namespace=argo podPhase=Pending workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:29:49Z"" level=info msg=""Workflow step group node output-parameters-dummy-k28ls-898385737 not yet completed"" namespace=argo workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:29:49Z"" level=info msg=""Workflow update successful"" namespace=argo phase=Running resourceVersion=46047490 workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:29:50Z"" level=info msg=""Processing workflow"" namespace=argo workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:29:50Z"" level=info msg=""Workflow step group node output-parameters-dummy-k28ls-898385737 not yet completed"" namespace=argo workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:29:50Z"" level=info msg=""Workflow update successful"" namespace=argo phase=Running resourceVersion=46047498 workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:29:50Z"" level=info msg=""insignificant pod change"" key=argo/output-parameters-dummy-k28ls-537766264
time=""2020-09-22T22:29:51Z"" level=info msg=""Processing workflow"" namespace=argo workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:29:51Z"" level=info msg=""Workflow step group node output-parameters-dummy-k28ls-898385737 not yet completed"" namespace=argo workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:29:51Z"" level=info msg=""Workflow update successful"" namespace=argo phase=Running resourceVersion=46047512 workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:31:00Z"" level=info msg=""Processing workflow"" namespace=argo workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:31:00Z"" level=info msg=""node output-parameters-dummy-k28ls-898385737 phase Succeeded -> Running"" namespace=argo workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:31:00Z"" level=info msg=""Step group node &NodeStatus{ID:output-parameters-dummy-k28ls-898385737,Name:output-parameters-dummy-k28l[0],DisplayName:[0],Type:StepGroup,TemplateName:output-parameters,TemplateRef:nil,Phase:Running,BoundaryID:output-parameters-dummy-k28ls,Message:,StartedAt:2020-09-22 22:29:47 +0000 UTC,FinishedAt:2020-09-22 22:29:51 +0000 UTC,PodIP:,Daemoned:nil,Inputs:nil,Outputs:nil,Children:[output-parameters-dummy-k28ls-537766264],OutboundNodes:[],StoredTemplateID:,WorkflowTemplateName:,TemplateScope:local/output-parameters-dummy-k28ls,ResourcesDuration:ResourcesDuration{},HostNodeName:,MemoizationStatus:nil,} successful"" namespace=argo workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:31:00Z"" level=info msg=""node output-parameters-dummy-k28ls-898385737 phase Running -> Succeeded"" namespace=argo workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:31:00Z"" level=info msg=""SG Outbound nodes of output-parameters-dummy-k28ls-537766264 are [output-parameters-dummy-k28ls-537766264]"" namespace=argo workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:31:00Z"" level=info msg=""Pod node output-parameters-dummy-k28ls-4217895726 initialized Pending"" namespace=argo workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:31:00Z"" level=info msg=""Created pod: output-parameters-dummy-k28ls[1].consume-parameter (output-parameters-dummy-k28ls-4217895726)"" namespace=argo workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:31:00Z"" level=info msg=""Workflow step group node output-parameters-dummy-k28ls-4052725204 not yet completed"" namespace=argo workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:31:00Z"" level=warning msg=""Error updating workflow: Operation cannot be fulfilled on workflows.argoproj.io \""output-parameters-dummy-k28ls\"": the object has been modified; please apply your changes to the latest version and try again Conflict"" namespace=argo workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:31:00Z"" level=info msg=""Re-applying updates on latest version and retrying update"" namespace=argo workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:31:00Z"" level=info msg=""Update retry attempt 1 successful"" namespace=argo workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:31:00Z"" level=info msg=""Workflow update successful"" namespace=argo phase=Failed resourceVersion=46048090 workflow=output-parameters-dummy-k28ls
time=""2020-09-22T22:31:03Z"" level=info msg=""insignificant pod change"" key=argo/output-parameters-dummy-k28ls-4217895726
```
---
**Message from the maintainers**:
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
",True,"v2.11: Steps (not DAG) always fails (with conflict error) and no helpful diagnostics (AWS only?) - ## Summary
Trying the example at https://argoproj.github.io/argo/examples/#output-parameters
## Diagnostics
Argo version 2.11.0
The 2nd step (consume-parameter) failed because it was unable to get its logs, while step 1 (generate-parameter) pulled its logs successfully.
Below are the 2 API calls made to get the logs:
> http://internal-23e.us-east-1.elb.amazonaws.com:2746/artifacts/argo/output-parameters-wt294/output-parameters-wt294-3630452401/main-logs
(step: consume-parameter) Unsuccessful
**Response: Artifact not found**
> http://internal-23e.us-east-1.elb.amazonaws.com:2746/artifacts/argo/output-parameters-wt294/output-parameters-wt294-2743065841/main-logs
(step: generate-parameter) Successful
>
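The two artifact requests above can be reproduced from the command line. This is only a sketch: the argo-server address and the node-ID suffixes are taken verbatim from this report and will differ in any other cluster or workflow run.

```shell
# Sketch: rebuild the two artifact-log URLs quoted in this report.
# ARGO_SERVER and the node-ID suffixes below come from the report itself;
# substitute your own argo-server endpoint and workflow node IDs.
ARGO_SERVER=internal-23e.us-east-1.elb.amazonaws.com:2746
WF=output-parameters-wt294

CONSUME_URL=http://${ARGO_SERVER}/artifacts/argo/${WF}/${WF}-3630452401/main-logs
GENERATE_URL=http://${ARGO_SERVER}/artifacts/argo/${WF}/${WF}-2743065841/main-logs

# consume-parameter step (reported response: Artifact not found):
# curl -s $CONSUME_URL
# generate-parameter step (reported: logs returned successfully):
# curl -s $GENERATE_URL
echo $CONSUME_URL
echo $GENERATE_URL
```

The curl calls are left commented out since they require network access to the cluster's argo-server; the echoes just show the URLs being requested.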
```yaml
Paste the workflow here, including status:
kubectl get wf -o yaml ${workflow}
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
creationTimestamp: ""2020-09-22T22:29:47Z""
generateName: output-parameters-dummy-
generation: 13
labels:
workflows.argoproj.io/completed: ""true""
workflows.argoproj.io/phase: Failed
name: output-parameters-dummy-k28ls
namespace: argo
resourceVersion: ""46048090""
selfLink: /apis/argoproj.io/v1alpha1/namespaces/argo/workflows/output-parameters-dummy-k28ls
uid: 590d6b5c-b75b-4d4f-a98d-8f88c7b1130f
spec:
arguments: {}
entrypoint: output-parameters
templates:
- arguments: {}
inputs: {}
metadata: {}
name: output-parameters
outputs: {}
steps:
- - arguments: {}
name: generate-parameter
template: whalesay
- - arguments:
parameters:
- name: message
value: '{{steps.generate-parameter.outputs.parameters.hello-param}}'
name: consume-parameter
template: print-message
- - arguments: {}
name: dummy
template: dummy
- arguments: {}
container:
args:
- echo -n hello world > /tmp/hello_world.txt
command:
- sh
- -c
image: docker/whalesay:latest
name: """"
resources: {}
inputs: {}
metadata: {}
name: whalesay
outputs:
parameters:
- name: hello-param
valueFrom:
path: /tmp/hello_world.txt
- arguments: {}
container:
args:
- '{{inputs.parameters.message}}'
command:
- cowsay
image: docker/whalesay:latest
name: """"
resources: {}
inputs:
parameters:
- name: message
metadata: {}
name: print-message
outputs: {}
- arguments: {}
container:
args:
- test
command:
- echo
image: selumalai/basicubuntu:latest
name: """"
resources: {}
inputs: {}
metadata: {}
name: dummy
outputs: {}
status:
conditions:
- status: ""True""
type: Completed
finishedAt: ""2020-09-22T22:31:00Z""
message: child 'output-parameters-dummy-k28ls[1].consume-parameter' errored
nodes:
output-parameters-dummy-k28ls:
children:
- output-parameters-dummy-k28ls-898385737
displayName: output-parameters-dummy-k28ls
finishedAt: ""2020-09-22T22:31:00Z""
id: output-parameters-dummy-k28ls
message: child 'output-parameters-dummy-k28ls[1].consume-parameter' errored
name: output-parameters-dummy-k28ls
phase: Failed
startedAt: ""2020-09-22T22:29:47Z""
templateName: output-parameters
templateScope: local/output-parameters-dummy-k28ls
type: Steps
output-parameters-dummy-k28ls-537766264:
boundaryID: output-parameters-dummy-k28ls
children:
- output-parameters-dummy-k28ls-4052725204
displayName: generate-parameter
finishedAt: ""2020-09-22T22:29:50Z""
id: output-parameters-dummy-k28ls-537766264
name: output-parameters-dummy-k28ls[0].generate-parameter
outputs:
artifacts:
- archiveLogs: true
name: main-logs
s3:
accessKeySecret:
key: accesskey
name: my-minio-cred
bucket: my-bucket
endpoint: minio:9000
insecure: true
key: output-parameters-dummy-k28ls/output-parameters-dummy-k28ls-537766264/main.log
secretKeySecret:
key: secretkey
name: my-minio-cred
parameters:
- name: hello-param
value: hello world
valueFrom:
path: /tmp/hello_world.txt
phase: Succeeded
startedAt: ""2020-09-22T22:29:47Z""
templateName: whalesay
templateScope: local/output-parameters-dummy-k28ls
type: Pod
output-parameters-dummy-k28ls-898385737:
boundaryID: output-parameters-dummy-k28ls
children:
- output-parameters-dummy-k28ls-537766264
displayName: '[0]'
finishedAt: ""2020-09-22T22:29:51Z""
```
",1,,1
303,6306302737.0,IssuesEvent,2017-07-21 20:38:17,dotnet/roslyn,https://api.github.com/repos/dotnet/roslyn,closed,ReferenceHighlightingViewTaggerProvider crashed VS after switching branches and reloading files,Area-IDE Bug Tenet-Reliability,"I switched branches, said no to Reloading the project, but yes to reloading the files, then this tagger crashed with:
```
Message: System.InvalidOperationException: Sequence contains no matching element
at System.Linq.Enumerable.First[TSource](IEnumerable`1 source, Func`2 predicate)
at Microsoft.CodeAnalysis.Editor.ReferenceHighlighting.ReferenceHighlightingViewTaggerProvider.ProduceTagsAsync(TaggerContext`1 context)
at Microsoft.CodeAnalysis.Editor.Tagging.AbstractAsynchronousTaggerProvider`1.TagSource.d__83.MoveNext()
Stack:
at System.Environment.FailFast(System.String, System.Exception)
at Microsoft.CodeAnalysis.FailFast.OnFatalException(System.Exception)
at Microsoft.CodeAnalysis.ErrorReporting.FatalError.Report(System.Exception, System.Action`1)
at Microsoft.CodeAnalysis.ErrorReporting.FatalError.Report(System.Exception)
at Roslyn.Utilities.TaskExtensions.ReportFatalErrorWorker(System.Threading.Tasks.Task, System.Object)
at System.Threading.Tasks.ContinuationTaskFromTask.InnerInvoke()
at System.Threading.Tasks.Task.Execute()
at System.Threading.Tasks.Task.ExecutionContextCallback(System.Object)
at System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
at System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef)
at System.Threading.Tasks.Task.ExecuteEntry(Boolean)
at System.Threading.Tasks.ThreadPoolTaskScheduler.TryExecuteTaskInline(System.Threading.Tasks.Task, Boolean)
at System.Threading.Tasks.TaskScheduler.TryRunInline(System.Threading.Tasks.Task, Boolean)
at System.Threading.Tasks.TaskContinuation.InlineIfPossibleOrElseQueue(System.Threading.Tasks.Task, Boolean)
at System.Threading.Tasks.StandardTaskContinuation.Run(System.Threading.Tasks.Task, Boolean)
at System.Threading.Tasks.Task.FinishContinuations()
at System.Threading.Tasks.Task.FinishStageThree()
at System.Threading.Tasks.Task.FinishStageTwo()
at System.Threading.Tasks.Task.Finish(Boolean)
at System.Threading.Tasks.Task`1[[System.Threading.Tasks.TaskExtensions+VoidResult, System.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].TrySetException(System.Object)
at System.Threading.Tasks.UnwrapPromise`1[[System.Threading.Tasks.TaskExtensions+VoidResult, System.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].TrySetFromTask(System.Threading.Tasks.Task, Boolean)
at System.Threading.Tasks.UnwrapPromise`1[[System.Threading.Tasks.TaskExtensions+VoidResult, System.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].ProcessInnerTask(System.Threading.Tasks.Task)
at System.Threading.Tasks.UnwrapPromise`1[[System.Threading.Tasks.TaskExtensions+VoidResult, System.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].ProcessCompletedOuterTask(System.Threading.Tasks.Task)
at System.Threading.Tasks.UnwrapPromise`1[[System.Threading.Tasks.TaskExtensions+VoidResult, System.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].InvokeCore(System.Threading.Tasks.Task)
at System.Threading.Tasks.UnwrapPromise`1[[System.Threading.Tasks.TaskExtensions+VoidResult, System.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].Invoke(System.Threading.Tasks.Task)
at System.Threading.Tasks.Task.FinishContinuations()
at System.Threading.Tasks.Task.FinishStageThree()
at System.Threading.Tasks.Task.FinishStageTwo()
at System.Threading.Tasks.Task.Finish(Boolean)
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef)
at System.Threading.Tasks.Task.ExecuteEntry(Boolean)
at System.Threading.Tasks.Task.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem()
at System.Threading.ThreadPoolWorkQueue.Dispatch()
at System.Threading._ThreadPoolWaitCallback.PerformWaitCallback()
```",True,"ReferenceHighlightingViewTaggerProvider crashed VS after switching branches and reloading files - I switched branches, said no to Reloading the project, but yes to reloading the files, then this tagger crashed with:
```
Message: System.InvalidOperationException: Sequence contains no matching element
at System.Linq.Enumerable.First[TSource](IEnumerable`1 source, Func`2 predicate)
at Microsoft.CodeAnalysis.Editor.ReferenceHighlighting.ReferenceHighlightingViewTaggerProvider.ProduceTagsAsync(TaggerContext`1 context)
at Microsoft.CodeAnalysis.Editor.Tagging.AbstractAsynchronousTaggerProvider`1.TagSource.d__83.MoveNext()
Stack:
at System.Environment.FailFast(System.String, System.Exception)
at Microsoft.CodeAnalysis.FailFast.OnFatalException(System.Exception)
at Microsoft.CodeAnalysis.ErrorReporting.FatalError.Report(System.Exception, System.Action`1)
at Microsoft.CodeAnalysis.ErrorReporting.FatalError.Report(System.Exception)
at Roslyn.Utilities.TaskExtensions.ReportFatalErrorWorker(System.Threading.Tasks.Task, System.Object)
at System.Threading.Tasks.ContinuationTaskFromTask.InnerInvoke()
at System.Threading.Tasks.Task.Execute()
at System.Threading.Tasks.Task.ExecutionContextCallback(System.Object)
at System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
at System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef)
at System.Threading.Tasks.Task.ExecuteEntry(Boolean)
at System.Threading.Tasks.ThreadPoolTaskScheduler.TryExecuteTaskInline(System.Threading.Tasks.Task, Boolean)
at System.Threading.Tasks.TaskScheduler.TryRunInline(System.Threading.Tasks.Task, Boolean)
at System.Threading.Tasks.TaskContinuation.InlineIfPossibleOrElseQueue(System.Threading.Tasks.Task, Boolean)
at System.Threading.Tasks.StandardTaskContinuation.Run(System.Threading.Tasks.Task, Boolean)
at System.Threading.Tasks.Task.FinishContinuations()
at System.Threading.Tasks.Task.FinishStageThree()
at System.Threading.Tasks.Task.FinishStageTwo()
at System.Threading.Tasks.Task.Finish(Boolean)
at System.Threading.Tasks.Task`1[[System.Threading.Tasks.TaskExtensions+VoidResult, System.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].TrySetException(System.Object)
at System.Threading.Tasks.UnwrapPromise`1[[System.Threading.Tasks.TaskExtensions+VoidResult, System.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].TrySetFromTask(System.Threading.Tasks.Task, Boolean)
at System.Threading.Tasks.UnwrapPromise`1[[System.Threading.Tasks.TaskExtensions+VoidResult, System.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].ProcessInnerTask(System.Threading.Tasks.Task)
at System.Threading.Tasks.UnwrapPromise`1[[System.Threading.Tasks.TaskExtensions+VoidResult, System.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].ProcessCompletedOuterTask(System.Threading.Tasks.Task)
at System.Threading.Tasks.UnwrapPromise`1[[System.Threading.Tasks.TaskExtensions+VoidResult, System.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].InvokeCore(System.Threading.Tasks.Task)
at System.Threading.Tasks.UnwrapPromise`1[[System.Threading.Tasks.TaskExtensions+VoidResult, System.Core, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]].Invoke(System.Threading.Tasks.Task)
at System.Threading.Tasks.Task.FinishContinuations()
at System.Threading.Tasks.Task.FinishStageThree()
at System.Threading.Tasks.Task.FinishStageTwo()
at System.Threading.Tasks.Task.Finish(Boolean)
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef)
at System.Threading.Tasks.Task.ExecuteEntry(Boolean)
at System.Threading.Tasks.Task.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem()
at System.Threading.ThreadPoolWorkQueue.Dispatch()
at System.Threading._ThreadPoolWaitCallback.PerformWaitCallback()
```",1,referencehighlightingviewtaggerprovider crashed vs after switching branches and reloading files i switched branches said no to reloading the project but yes to reloading the files then this tagger crashed with message system invalidoperationexception sequence contains no matching element at system linq enumerable first ienumerable source func predicate at microsoft codeanalysis editor referencehighlighting referencehighlightingviewtaggerprovider producetagsasync taggercontext context at microsoft codeanalysis editor tagging abstractasynchronoustaggerprovider tagsource d movenext stack at system environment failfast system string system exception at microsoft codeanalysis failfast onfatalexception system exception at microsoft codeanalysis errorreporting fatalerror report system exception system action at microsoft codeanalysis errorreporting fatalerror report system exception at roslyn utilities taskextensions reportfatalerrorworker system threading tasks task system object at system threading tasks continuationtaskfromtask innerinvoke at system threading tasks task execute at system threading tasks task executioncontextcallback system object at system threading executioncontext runinternal system threading executioncontext system threading contextcallback system object boolean at system threading executioncontext run system threading executioncontext system threading contextcallback system object boolean at system threading tasks task executewiththreadlocal system threading tasks task byref at system threading tasks task executeentry boolean at system threading tasks threadpooltaskscheduler tryexecutetaskinline system threading tasks task boolean at system threading tasks taskscheduler tryruninline system threading tasks task boolean at system threading tasks taskcontinuation inlineifpossibleorelsequeue system threading tasks task boolean at system threading tasks standardtaskcontinuation run system threading tasks task boolean at system threading tasks 
task finishcontinuations at system threading tasks task finishstagethree at system threading tasks task finishstagetwo at system threading tasks task finish boolean at system threading tasks task trysetexception system object at system threading tasks unwrappromise trysetfromtask system threading tasks task boolean at system threading tasks unwrappromise processinnertask system threading tasks task at system threading tasks unwrappromise processcompletedoutertask system threading tasks task at system threading tasks unwrappromise invokecore system threading tasks task at system threading tasks unwrappromise invoke system threading tasks task at system threading tasks task finishcontinuations at system threading tasks task finishstagethree at system threading tasks task finishstagetwo at system threading tasks task finish boolean at system threading tasks task executewiththreadlocal system threading tasks task byref at system threading tasks task executeentry boolean at system threading tasks task system threading ithreadpoolworkitem executeworkitem at system threading threadpoolworkqueue dispatch at system threading threadpoolwaitcallback performwaitcallback ,1
952,11769937333.0,IssuesEvent,2020-03-15 17:01:19,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Missing Link,Pri1 assigned-to-author doc-bug site-reliability-engineering/svc triaged,"Link for ""Awesome Site Reliability Engineering Tools"" is missing
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 6ef00ce4-c22a-4c2c-2f41-7d823c372864
* Version Independent ID: 6b24e6bf-152b-eca3-21aa-18cf3c720da0
* Content: [SRE link collections and digests](https://docs.microsoft.com/en-us/azure/site-reliability-engineering/resources/links#feedback)
* Content Source: [articles/site-reliability-engineering/resources/links.md](https://github.com/Microsoft/azure-docs/blob/master/articles/site-reliability-engineering/resources/links.md)
* Service: **site-reliability-engineering**
* GitHub Login: @dnblankedelman
* Microsoft Alias: **dnb**",True,"Missing Link - Link for ""Awesome Site Reliability Engineering Tools"" is missing
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 6ef00ce4-c22a-4c2c-2f41-7d823c372864
* Version Independent ID: 6b24e6bf-152b-eca3-21aa-18cf3c720da0
* Content: [SRE link collections and digests](https://docs.microsoft.com/en-us/azure/site-reliability-engineering/resources/links#feedback)
* Content Source: [articles/site-reliability-engineering/resources/links.md](https://github.com/Microsoft/azure-docs/blob/master/articles/site-reliability-engineering/resources/links.md)
* Service: **site-reliability-engineering**
* GitHub Login: @dnblankedelman
* Microsoft Alias: **dnb**",1,missing link link for awesome site reliability engineering tools is missing document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service site reliability engineering github login dnblankedelman microsoft alias dnb ,1
146307,11731679281.0,IssuesEvent,2020-03-11 00:58:10,microsoft/STL,https://api.github.com/repos/microsoft/STL,opened,VSO_0157762_feature_test_macros/test.cpp: Needs comment/error cleanup,test,"I wrote this feature-test macro test, which has proven to be difficult to maintain:
https://github.com/microsoft/STL/blob/285187b7b24be4bebfa271fbd0b1eacd995d18f4/tests/std/tests/VSO_0157762_feature_test_macros/test.cpp#L35-L42
First, `#error BOOM` isn't helpful. Instead, we should say something like `#error Expected __cpp_constexpr to be defined.` With VSCode multi-line regex search-and-replace, we should be able to make such changes systematically.
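As a rough scripted analogue of that VSCode multi-line replace, GNU sed's whole-file (`-z`) mode can capture the macro name from the `#ifndef` line and reuse it in the message. The sample chunk below is invented for illustration, not taken from test.cpp:
```shell
# Hypothetical sketch of the systematic rewrite described above, using GNU
# sed's -z (whole-input) mode as a scripted stand-in for VSCode's multi-line
# regex search-and-replace.
sample='#ifndef __cpp_constexpr
#error BOOM
#endif'

# Capture the macro name from the #ifndef line and reuse it in the #error text.
printf '%s\n' "$sample" \
  | sed -z -E 's/#ifndef ([A-Za-z_]+)\n#error BOOM/#ifndef \1\n#error Expected \1 to be defined./'
```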
Second, nobody notices the comments indicating various sections, like `// Always defined to varying values (C++14 versus C++17-and-newer mode).` Nor do these really add value. Each preprocessor chunk self-documents what it expects, and we've accumulated a number of exceptions (as C1XX, Clang, and EDG implement features on different schedules). We've damaged this organization repeatedly during maintenance and it provides no value, so it should be removed. Instead, we should simply sort the preprocessor chunks lexicographically.
Third, this unusually covers both the compiler and the STL (for the historical reason that I implemented almost all of the feature-test macros in the compiler). We should consider splitting up this test, handing off the compiler part to the compiler team, and making the library part simply trust that `_HAS_CXX17` and `_HAS_CXX20` have been properly defined, instead of replicating that machinery with:
https://github.com/microsoft/STL/blob/285187b7b24be4bebfa271fbd0b1eacd995d18f4/tests/std/tests/VSO_0157762_feature_test_macros/test.cpp#L10-L32",1.0,"VSO_0157762_feature_test_macros/test.cpp: Needs comment/error cleanup - I wrote this feature-test macro test, which has proven to be difficult to maintain:
https://github.com/microsoft/STL/blob/285187b7b24be4bebfa271fbd0b1eacd995d18f4/tests/std/tests/VSO_0157762_feature_test_macros/test.cpp#L35-L42
First, `#error BOOM` isn't helpful. Instead, we should say something like `#error Expected __cpp_constexpr to be defined.` With VSCode multi-line regex search-and-replace, we should be able to make such changes systematically.
Second, nobody notices the comments indicating various sections, like `// Always defined to varying values (C++14 versus C++17-and-newer mode).` Nor do these really add value. Each preprocessor chunk self-documents what it expects, and we've accumulated a number of exceptions (as C1XX, Clang, and EDG implement features on different schedules). We've damaged this organization repeatedly during maintenance and it provides no value, so it should be removed. Instead, we should simply sort the preprocessor chunks lexicographically.
Third, this unusually covers both the compiler and the STL (for the historical reason that I implemented almost all of the feature-test macros in the compiler). We should consider splitting up this test, handing off the compiler part to the compiler team, and making the library part simply trust that `_HAS_CXX17` and `_HAS_CXX20` have been properly defined, instead of replicating that machinery with:
https://github.com/microsoft/STL/blob/285187b7b24be4bebfa271fbd0b1eacd995d18f4/tests/std/tests/VSO_0157762_feature_test_macros/test.cpp#L10-L32",0,vso feature test macros test cpp needs comment error cleanup i wrote this feature test macro test which has proven to be difficult to maintain first error boom isn t helpful instead we should say something like error expected cpp constexpr to be defined with vscode multi line regex search and replace we should be able to make such changes systematically second nobody notices the comments indicating various sections like always defined to varying values c versus c and newer mode nor do these really add value each preprocessor chunk self documents what it expects and we ve accumulated a number of exceptions as clang and edg implement features on different schedules we ve damaged this organization repeatedly during maintenance and it provides no value so it should be removed instead we should simply sort the preprocessor chunks lexicographically third this unusually covers both the compiler and the stl for the historical reason that i implemented almost all of the feature test macros in the compiler we should consider splitting up this test handing off the compiler part to the compiler team and making the library part simply trust that has and has have been properly defined instead of replicating that machinery with ,0
2137,23683878514.0,IssuesEvent,2022-08-29 03:07:48,StormSurgeLive/asgs,https://api.github.com/repos/StormSurgeLive/asgs,closed,give Operators the option to convert hotstart file to netCDF3 before reading,enhancement important non-critical workaround reliability,"From M.Akbar: Jason indicated that the MPI Abort error in ADCIRC+SWAN run is because all processors are trying to read the same `fort.68.nc` file at the same time. I was getting this error message:
```
INFO: readNetCDFHotstart: Opening hot start file ""./fort.68.nc"" for reading.
ERROR: check_err: NetCDF: HDF error
INFO: netcdfTerminate: ADCIRC Terminating.
```
This problem seems to be associated with NetCDF4 format of the file. I converted the `fort.68.nc` file from netCDF4 to netCDF3 as follows:
```
nccopy -k nc3 fort.68.nc fort368.nc
```
It converts NetCDF4 file (`fort.68.nc`) to NetCDF3 file (`fort368.nc`). Then I moved/saved `fort368.nc` as `fort.68.nc`, and changed `IHOT` value in `fort.15` from 568 to 368. After that I did regular preprocessing and job submission. I tried two cases and both are currently running without any errors.
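The workaround above can be collected into a small dry-run helper. This sketch only prints the commands it would run (nccopy may not be installed everywhere), so nothing on disk is touched; the file names and the IHOT values (568 for netCDF4, 368 for netCDF3) come from the steps described here:
```shell
# Dry-run sketch of the netCDF4 -> netCDF3 hotstart workaround described above.
# It echoes the commands instead of executing them, so it is safe to run anywhere.
hotstart_to_nc3() {
    src="$1"            # netCDF4 hotstart file, e.g. fort.68.nc
    tmp="fort368.nc"    # intermediate netCDF3 (classic) copy
    echo "nccopy -k nc3 $src $tmp"         # rewrite in classic netCDF3 format
    echo "mv $tmp $src"                    # put the converted file back in place
    echo "edit fort.15: IHOT 568 -> 368"   # 368 = classic-format hotstart code
}

hotstart_to_nc3 fort.68.nc
```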
Just wanted to share with you in case it helps someone.",True,"give Operators the option to convert hotstart file to netCDF3 before reading - From M.Akbar: Jason indicated that the MPI Abort error in ADCIRC+SWAN run is because all processors are trying to read the same `fort.68.nc` file at the same time. I was getting this error message:
```
INFO: readNetCDFHotstart: Opening hot start file ""./fort.68.nc"" for reading.
ERROR: check_err: NetCDF: HDF error
INFO: netcdfTerminate: ADCIRC Terminating.
```
This problem seems to be associated with NetCDF4 format of the file. I converted the `fort.68.nc` file from netCDF4 to netCDF3 as follows:
```
nccopy -k nc3 fort.68.nc fort368.nc
```
It converts NetCDF4 file (`fort.68.nc`) to NetCDF3 file (`fort368.nc`). Then I moved/saved `fort368.nc` as `fort.68.nc`, and changed `IHOT` value in `fort.15` from 568 to 368. After that I did regular preprocessing and job submission. I tried two cases and both are currently running without any errors.
Just wanted to share with you in case it helps someone.",1,give operators the option to convert hotstart file to before reading from m akbar jason indicated that the mpi abort error in adcirc swan run is because all processors are trying to read the same fort nc file at the same time i was getting this error message info readnetcdfhotstart opening hot start file fort nc for reading error check err netcdf hdf error info netcdfterminate adcirc terminating this problem seems to be associated with format of the file i converted the fort nc file from to as follows nccopy k fort nc nc it converts file fort nc to file nc then i moved saved nc as fort nc and changed ihot value in fort from to after that i did regular preprocessing and job submission i tried two cases and both are currently running without any errors just wanted to share with you in case it helps someone ,1
869,11185396863.0,IssuesEvent,2020-01-01 01:14:03,dotnet/roslyn,https://api.github.com/repos/dotnet/roslyn,opened,An ExceptionUtilities.UnexpectedValue is thrown by VB compiler,Area-Compilers Bug Language-VB Tenet-Reliability,"
Compile the following code as a library:
```
Imports System
Structure SSSS3
Public A As String
Public B As Integer
End Structure
Structure SSSS2
Public S3 As SSSS3
End Structure
Structure SSSS
Public S2 As SSSS2
End Structure
Structure SSS
Public S As SSSS
End Structure
Class Clazz
Sub TEST()
Dim x As New SSS()
With x.S
With .S2
With .S3
Dim s As Action = Sub()
.A = """"
End Sub
End With
End With
End With
x.ToString()
End Sub
End Class
```
Observed: Compiler crashes
Expected: Success, it looks like the native compiler succeeds.
```
Microsoft.CodeAnalysis.dll!Roslyn.Utilities.ExceptionUtilities.UnexpectedValue(object o) Line 20 C#
> Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.LambdaRewriter.Analysis.VerifyCaptured(Microsoft.CodeAnalysis.VisualBasic.Symbol variableOrParameter, Microsoft.CodeAnalysis.SyntaxNode syntax) Line 451 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.LambdaRewriter.Analysis.ReferenceVariable(Microsoft.CodeAnalysis.VisualBasic.Symbol variableOrParameter, Microsoft.CodeAnalysis.SyntaxNode syntax) Line 436 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.LambdaRewriter.Analysis.VisitLocal(Microsoft.CodeAnalysis.VisualBasic.BoundLocal node) Line 485 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundLocal.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 6167 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.VisitExpressionWithoutStackGuard(Microsoft.CodeAnalysis.VisualBasic.BoundExpression node) Line 59 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor.VisitExpressionWithStackGuard(Integer recursionDepth, Microsoft.CodeAnalysis.VisualBasic.BoundExpression node) Line 184 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 48 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitFieldAccess(Microsoft.CodeAnalysis.VisualBasic.BoundFieldAccess node) Line 11441 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundFieldAccess.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 4170 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.VisitExpressionWithoutStackGuard(Microsoft.CodeAnalysis.VisualBasic.BoundExpression node) Line 59 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor.VisitExpressionWithStackGuard(Integer recursionDepth, Microsoft.CodeAnalysis.VisualBasic.BoundExpression node) Line 184 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 48 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitFieldAccess(Microsoft.CodeAnalysis.VisualBasic.BoundFieldAccess node) Line 11441 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundFieldAccess.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 4170 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.VisitExpressionWithoutStackGuard(Microsoft.CodeAnalysis.VisualBasic.BoundExpression node) Line 59 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor.VisitExpressionWithStackGuard(Integer recursionDepth, Microsoft.CodeAnalysis.VisualBasic.BoundExpression node) Line 184 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 48 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitAssignmentOperator(Microsoft.CodeAnalysis.VisualBasic.BoundAssignmentOperator node) Line 11203 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundAssignmentOperator.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 1786 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.VisitExpressionWithoutStackGuard(Microsoft.CodeAnalysis.VisualBasic.BoundExpression node) Line 59 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor.VisitExpressionWithStackGuard(Integer recursionDepth, Microsoft.CodeAnalysis.VisualBasic.BoundExpression node) Line 184 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 48 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitExpressionStatement(Microsoft.CodeAnalysis.VisualBasic.BoundExpressionStatement node) Line 11517 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundExpressionStatement.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 4857 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 51 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitSequencePoint(Microsoft.CodeAnalysis.VisualBasic.BoundSequencePoint node) Line 11274 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundSequencePoint.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 2506 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 51 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitList(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement)(System.Collections.Immutable.ImmutableArray(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement) list) Line 19 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitBlock(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node) Line 11457 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.LambdaRewriter.Analysis.VisitLambda(Microsoft.CodeAnalysis.VisualBasic.BoundLambda node, Boolean convertToExpressionTree) Line 326 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.LambdaRewriter.Analysis.VisitConversion(Microsoft.CodeAnalysis.VisualBasic.BoundConversion conversion) Line 366 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundConversion.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 2123 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.VisitExpressionWithoutStackGuard(Microsoft.CodeAnalysis.VisualBasic.BoundExpression node) Line 59 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor.VisitExpressionWithStackGuard(Integer recursionDepth, Microsoft.CodeAnalysis.VisualBasic.BoundExpression node) Line 184 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 48 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitAssignmentOperator(Microsoft.CodeAnalysis.VisualBasic.BoundAssignmentOperator node) Line 11205 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundAssignmentOperator.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 1786 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.VisitExpressionWithoutStackGuard(Microsoft.CodeAnalysis.VisualBasic.BoundExpression node) Line 59 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor.VisitExpressionWithStackGuard(Microsoft.CodeAnalysis.VisualBasic.BoundExpression node) Line 203 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor.VisitExpressionWithStackGuard(Integer recursionDepth, Microsoft.CodeAnalysis.VisualBasic.BoundExpression node) Line 186 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 48 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitExpressionStatement(Microsoft.CodeAnalysis.VisualBasic.BoundExpressionStatement node) Line 11517 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundExpressionStatement.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 4857 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 51 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitSequencePoint(Microsoft.CodeAnalysis.VisualBasic.BoundSequencePoint node) Line 11274 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundSequencePoint.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 2506 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 51 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitList(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement)(System.Collections.Immutable.ImmutableArray(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement) list) Line 19 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitStatementList(Microsoft.CodeAnalysis.VisualBasic.BoundStatementList node) Line 11704 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundStatementList.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 6618 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 51 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitList(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement)(System.Collections.Immutable.ImmutableArray(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement) list) Line 19 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitBlock(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node) Line 11457 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.LambdaRewriter.Analysis.VisitBlock(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node) Line 282 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundBlock.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 4361 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 51 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitList(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement)(System.Collections.Immutable.ImmutableArray(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement) list) Line 19 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitBlock(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node) Line 11457 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.LambdaRewriter.Analysis.VisitBlock(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node) Line 282 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundBlock.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 4361 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 51 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitList(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement)(System.Collections.Immutable.ImmutableArray(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement) list) Line 19 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitBlock(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node) Line 11457 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.LambdaRewriter.Analysis.VisitBlock(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node) Line 282 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundBlock.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 4361 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 51 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitList(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement)(System.Collections.Immutable.ImmutableArray(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement) list) Line 19 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitBlock(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node) Line 11457 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.LambdaRewriter.Analysis.VisitBlock(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node) Line 282 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundBlock.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 4361 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 51 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitList(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement)(System.Collections.Immutable.ImmutableArray(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement) list) Line 19 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitBlock(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node) Line 11457 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.LambdaRewriter.Analysis.VisitBlock(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node) Line 282 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundBlock.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 4361 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 51 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitList(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement)(System.Collections.Immutable.ImmutableArray(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement) list) Line 19 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitBlock(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node) Line 11457 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.LambdaRewriter.Analysis.VisitBlock(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node) Line 282 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundBlock.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 4361 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 51 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitList(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement)(System.Collections.Immutable.ImmutableArray(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement) list) Line 19 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitBlock(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node) Line 11457 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.LambdaRewriter.Analysis.VisitBlock(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node) Line 282 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundBlock.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 4361 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 51 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.LambdaRewriter.Analysis.Analyze(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 159 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.LambdaRewriter.Analysis.AnalyzeMethodBody(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node, Microsoft.CodeAnalysis.VisualBasic.Symbols.MethodSymbol method, System.Collections.Generic.ISet(Of Microsoft.CodeAnalysis.VisualBasic.Symbol) symbolsCapturedWithoutCtor, Microsoft.CodeAnalysis.VisualBasic.BindingDiagnosticBag diagnostics) Line 139 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.LambdaRewriter.Rewrite(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node, Microsoft.CodeAnalysis.VisualBasic.Symbols.MethodSymbol method, Integer methodOrdinal, Microsoft.CodeAnalysis.PooledObjects.ArrayBuilder(Of Microsoft.CodeAnalysis.CodeGen.LambdaDebugInfo) lambdaDebugInfoBuilder, Microsoft.CodeAnalysis.PooledObjects.ArrayBuilder(Of Microsoft.CodeAnalysis.CodeGen.ClosureDebugInfo) closureDebugInfoBuilder, Integer delegateRelaxationIdDispenser, Microsoft.CodeAnalysis.CodeGen.VariableSlotAllocator slotAllocatorOpt, Microsoft.CodeAnalysis.VisualBasic.TypeCompilationState CompilationState, System.Collections.Generic.ISet(Of Microsoft.CodeAnalysis.VisualBasic.Symbol) symbolsCapturedWithoutCopyCtor, Microsoft.CodeAnalysis.VisualBasic.BindingDiagnosticBag diagnostics, System.Collections.Generic.HashSet(Of Microsoft.CodeAnalysis.VisualBasic.BoundNode) rewrittenNodes) Line 160 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.Rewriter.LowerBodyOrInitializer(Microsoft.CodeAnalysis.VisualBasic.Symbols.MethodSymbol method, Integer methodOrdinal, Microsoft.CodeAnalysis.VisualBasic.BoundBlock body, Microsoft.CodeAnalysis.VisualBasic.SynthesizedSubmissionFields previousSubmissionFields, Microsoft.CodeAnalysis.VisualBasic.TypeCompilationState compilationState, Boolean instrumentForDynamicAnalysis, System.Collections.Immutable.ImmutableArray(Of Microsoft.CodeAnalysis.CodeGen.SourceSpan) dynamicAnalysisSpans, Microsoft.CodeAnalysis.CodeGen.DebugDocumentProvider debugDocumentProvider, Microsoft.CodeAnalysis.VisualBasic.BindingDiagnosticBag diagnostics, Microsoft.CodeAnalysis.CodeGen.VariableSlotAllocator lazyVariableSlotAllocator, Microsoft.CodeAnalysis.PooledObjects.ArrayBuilder(Of Microsoft.CodeAnalysis.CodeGen.LambdaDebugInfo) lambdaDebugInfoBuilder, Microsoft.CodeAnalysis.PooledObjects.ArrayBuilder(Of Microsoft.CodeAnalysis.CodeGen.ClosureDebugInfo) closureDebugInfoBuilder, Integer delegateRelaxationIdDispenser, Microsoft.CodeAnalysis.VisualBasic.StateMachineTypeSymbol stateMachineTypeOpt, Boolean allowOmissionOfConditionalCalls, Boolean isBodySynthesized) Line 89 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.MethodCompiler.LowerAndEmitMethod(Microsoft.CodeAnalysis.VisualBasic.Symbols.MethodSymbol method, Integer methodOrdinal, Microsoft.CodeAnalysis.VisualBasic.BoundBlock block, Microsoft.CodeAnalysis.VisualBasic.Binder binderOpt, Microsoft.CodeAnalysis.VisualBasic.TypeCompilationState compilationState, Microsoft.CodeAnalysis.VisualBasic.BindingDiagnosticBag diagsForCurrentMethod, Microsoft.CodeAnalysis.VisualBasic.Binder.ProcessedFieldOrPropertyInitializers processedInitializers, Microsoft.CodeAnalysis.VisualBasic.SynthesizedSubmissionFields previousSubmissionFields, Microsoft.CodeAnalysis.VisualBasic.Symbols.MethodSymbol constructorToInject, Integer delegateRelaxationIdDispenser) Line 1457 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.MethodCompiler.CompileMethod(Microsoft.CodeAnalysis.VisualBasic.Symbols.MethodSymbol method, Integer methodOrdinal, Integer withEventPropertyIdDispenser, Integer delegateRelaxationIdDispenser, System.Predicate(Of Microsoft.CodeAnalysis.VisualBasic.Symbol) filter, Microsoft.CodeAnalysis.VisualBasic.TypeCompilationState compilationState, Microsoft.CodeAnalysis.VisualBasic.Binder.ProcessedFieldOrPropertyInitializers processedInitializers, Microsoft.CodeAnalysis.VisualBasic.Binder containingTypeBinder, Microsoft.CodeAnalysis.VisualBasic.SynthesizedSubmissionFields previousSubmissionFields, Microsoft.CodeAnalysis.VisualBasic.Symbols.MethodSymbol referencedConstructor) Line 1272 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.MethodCompiler.CompileNamedType(Microsoft.CodeAnalysis.VisualBasic.Symbols.NamedTypeSymbol containingType, System.Predicate(Of Microsoft.CodeAnalysis.VisualBasic.Symbol) filter) Line 690 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.MethodCompiler.VisitNamedType(Microsoft.CodeAnalysis.VisualBasic.Symbols.NamedTypeSymbol symbol) Line 522 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.Symbols.NamedTypeSymbol.Accept(Microsoft.CodeAnalysis.VisualBasic.VisualBasicSymbolVisitor visitor) Line 1251 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.MethodCompiler.CompileNamespace(Microsoft.CodeAnalysis.VisualBasic.Symbols.NamespaceSymbol symbol) Line 510 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.MethodCompiler.VisitNamespace(Microsoft.CodeAnalysis.VisualBasic.Symbols.NamespaceSymbol symbol) Line 490 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.Symbols.NamespaceSymbol.Accept(Microsoft.CodeAnalysis.VisualBasic.VisualBasicSymbolVisitor visitor) Line 566 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.MethodCompiler.CompileMethodBodies(Microsoft.CodeAnalysis.VisualBasic.VisualBasicCompilation compilation, Microsoft.CodeAnalysis.VisualBasic.Emit.PEModuleBuilder moduleBeingBuiltOpt, Boolean emittingPdb, Boolean emitTestCoverageData, Boolean hasDeclarationErrors, System.Predicate(Of Microsoft.CodeAnalysis.VisualBasic.Symbol) filter, Microsoft.CodeAnalysis.VisualBasic.BindingDiagnosticBag diagnostics, System.Threading.CancellationToken cancellationToken) Line 246 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.VisualBasicCompilation.CompileMethods(Microsoft.CodeAnalysis.Emit.CommonPEModuleBuilder moduleBuilder, Boolean emittingPdb, Boolean emitMetadataOnly, Boolean emitTestCoverageData, Microsoft.CodeAnalysis.DiagnosticBag diagnostics, System.Predicate(Of Microsoft.CodeAnalysis.Symbols.ISymbolInternal) filterOpt, System.Threading.CancellationToken cancellationToken) Line 2349 Basic
Microsoft.CodeAnalysis.dll!Microsoft.CodeAnalysis.Compilation.Emit(System.IO.Stream peStream, System.IO.Stream metadataPEStream, System.IO.Stream pdbStream, System.IO.Stream xmlDocumentationStream, System.IO.Stream win32Resources, System.Collections.Generic.IEnumerable manifestResources, Microsoft.CodeAnalysis.Emit.EmitOptions options, Microsoft.CodeAnalysis.IMethodSymbol debugEntryPoint, System.IO.Stream sourceLinkStream, System.Collections.Generic.IEnumerable embeddedTexts, Microsoft.CodeAnalysis.CodeGen.CompilationTestData testData, System.Threading.CancellationToken cancellationToken) Line 2497 C#
Microsoft.CodeAnalysis.dll!Microsoft.CodeAnalysis.Compilation.Emit(System.IO.Stream peStream, System.IO.Stream pdbStream, System.IO.Stream xmlDocumentationStream, System.IO.Stream win32Resources, System.Collections.Generic.IEnumerable manifestResources, Microsoft.CodeAnalysis.Emit.EmitOptions options, Microsoft.CodeAnalysis.IMethodSymbol debugEntryPoint, System.IO.Stream sourceLinkStream, System.Collections.Generic.IEnumerable embeddedTexts, System.IO.Stream metadataPEStream, System.Threading.CancellationToken cancellationToken) Line 2441 C#
Roslyn.Test.Utilities.dll!Microsoft.CodeAnalysis.DiagnosticExtensions.GetEmitDiagnostics(Microsoft.CodeAnalysis.VisualBasic.VisualBasicCompilation c, Microsoft.CodeAnalysis.Emit.EmitOptions options, System.Collections.Generic.IEnumerable manifestResources) Line 376 C#
Roslyn.Test.Utilities.dll!Microsoft.CodeAnalysis.DiagnosticExtensions.GetEmitDiagnostics(Microsoft.CodeAnalysis.VisualBasic.VisualBasicCompilation c) Line 388 C#
Microsoft.CodeAnalysis.VisualBasic.Test.Utilities.dll!Microsoft.CodeAnalysis.VisualBasic.UnitTests.CompilationUtils.VerifyUsedAssemblyReferences(System.Func(Of Microsoft.CodeAnalysis.VisualBasic.VisualBasicCompilation) createCompilationLambda) Line 69 Basic
Microsoft.CodeAnalysis.VisualBasic.Test.Utilities.dll!Microsoft.CodeAnalysis.VisualBasic.UnitTests.CompilationUtils.CreateEmptyCompilation(Microsoft.CodeAnalysis.VisualBasic.UnitTests.BasicTestSource source, System.Collections.Generic.IEnumerable(Of Microsoft.CodeAnalysis.MetadataReference) references, Microsoft.CodeAnalysis.VisualBasic.VisualBasicCompilationOptions options, Microsoft.CodeAnalysis.VisualBasic.VisualBasicParseOptions parseOptions, String assemblyName) Line 58 Basic
Microsoft.CodeAnalysis.VisualBasic.Semantic.UnitTests.dll!Microsoft.CodeAnalysis.VisualBasic.UnitTests.FlowTestBase.CompileAndGetModelAndSpan(System.Xml.Linq.XElement program, System.Collections.Generic.List(Of Microsoft.CodeAnalysis.VisualBasic.VisualBasicSyntaxNode) startNodes, System.Collections.Generic.List(Of Microsoft.CodeAnalysis.VisualBasic.VisualBasicSyntaxNode) endNodes, System.Xml.Linq.XCData ilSource, System.Xml.Linq.XElement errors, Microsoft.CodeAnalysis.VisualBasic.VisualBasicParseOptions parseOptions) Line 91 Basic
Microsoft.CodeAnalysis.VisualBasic.Semantic.UnitTests.dll!Microsoft.CodeAnalysis.VisualBasic.UnitTests.FlowTestBase.CompileAndGetModelAndSpan(Of Microsoft.CodeAnalysis.DataFlowAnalysis)(System.Xml.Linq.XElement program, System.Func(Of Microsoft.CodeAnalysis.SemanticModel, System.Collections.Generic.List(Of Microsoft.CodeAnalysis.VisualBasic.VisualBasicSyntaxNode), System.Collections.Generic.List(Of Microsoft.CodeAnalysis.VisualBasic.VisualBasicSyntaxNode), Microsoft.CodeAnalysis.DataFlowAnalysis) analysisDelegate, System.Xml.Linq.XCData ilSource, System.Xml.Linq.XElement errors) Line 74 Basic
Microsoft.CodeAnalysis.VisualBasic.Semantic.UnitTests.dll!Microsoft.CodeAnalysis.VisualBasic.UnitTests.FlowTestBase.CompileAndAnalyzeDataFlow(System.Xml.Linq.XElement program, System.Xml.Linq.XCData ilSource, System.Xml.Linq.XElement errors) Line 64 Basic
Microsoft.CodeAnalysis.VisualBasic.Semantic.UnitTests.dll!Microsoft.CodeAnalysis.VisualBasic.UnitTests.FlowAnalysisTests.WithStatement_Expression_LValue_4d() Line 8827 Basic
```
The scenario is taken from the `Microsoft.CodeAnalysis.VisualBasic.UnitTests.FlowAnalysisTests.WithStatement_Expression_LValue_4d` unit test. It will be disabled in the dotnet/features/UsedAssemblyReferences branch because it blocks the feature test hook.
An ExceptionUtilities.UnexpectedValue is thrown by the VB compiler.
Compile the following code as a library:
```vb
Imports System

Structure SSSS3
    Public A As String
    Public B As Integer
End Structure

Structure SSSS2
    Public S3 As SSSS3
End Structure

Structure SSSS
    Public S2 As SSSS2
End Structure

Structure SSS
    Public S As SSSS
End Structure

Class Clazz
    Sub TEST()
        Dim x As New SSS()
        With x.S
            With .S2
                With .S3
                    Dim s As Action = Sub()
                                          .A = ""
                                      End Sub
                End With
            End With
        End With
        x.ToString()
    End Sub
End Class
```
Observed: The compiler crashes.
Expected: Successful compilation; the native compiler appears to accept this code.
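Judging from the stack trace below, the crash occurs while the lambda rewriter analyzes a variable capture (`LambdaRewriter.Analysis.VerifyCaptured` hits an unexpected symbol kind for the `With` placeholder of a nested structure member). A possible workaround sketch, assuming the problem is specific to referencing the `With` placeholder from inside the lambda, is to spell out the full member access so the closure captures only the ordinary local `x`:

```vb
' Hypothetical workaround sketch: avoid touching the With placeholder
' inside the lambda; capture only the local "x" instead.
Sub TEST()
    Dim x As New SSS()
    Dim s As Action = Sub()
                          x.S.S2.S3.A = ""
                      End Sub
    x.ToString()
End Sub
```

This is an untested suggestion, not a confirmed fix; the underlying compiler bug still needs to be addressed.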
```
Microsoft.CodeAnalysis.dll!Roslyn.Utilities.ExceptionUtilities.UnexpectedValue(object o) Line 20 C#
> Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.LambdaRewriter.Analysis.VerifyCaptured(Microsoft.CodeAnalysis.VisualBasic.Symbol variableOrParameter, Microsoft.CodeAnalysis.SyntaxNode syntax) Line 451 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.LambdaRewriter.Analysis.ReferenceVariable(Microsoft.CodeAnalysis.VisualBasic.Symbol variableOrParameter, Microsoft.CodeAnalysis.SyntaxNode syntax) Line 436 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.LambdaRewriter.Analysis.VisitLocal(Microsoft.CodeAnalysis.VisualBasic.BoundLocal node) Line 485 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundLocal.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 6167 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.VisitExpressionWithoutStackGuard(Microsoft.CodeAnalysis.VisualBasic.BoundExpression node) Line 59 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor.VisitExpressionWithStackGuard(Integer recursionDepth, Microsoft.CodeAnalysis.VisualBasic.BoundExpression node) Line 184 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 48 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitFieldAccess(Microsoft.CodeAnalysis.VisualBasic.BoundFieldAccess node) Line 11441 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundFieldAccess.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 4170 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.VisitExpressionWithoutStackGuard(Microsoft.CodeAnalysis.VisualBasic.BoundExpression node) Line 59 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor.VisitExpressionWithStackGuard(Integer recursionDepth, Microsoft.CodeAnalysis.VisualBasic.BoundExpression node) Line 184 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 48 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitFieldAccess(Microsoft.CodeAnalysis.VisualBasic.BoundFieldAccess node) Line 11441 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundFieldAccess.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 4170 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.VisitExpressionWithoutStackGuard(Microsoft.CodeAnalysis.VisualBasic.BoundExpression node) Line 59 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor.VisitExpressionWithStackGuard(Integer recursionDepth, Microsoft.CodeAnalysis.VisualBasic.BoundExpression node) Line 184 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 48 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitAssignmentOperator(Microsoft.CodeAnalysis.VisualBasic.BoundAssignmentOperator node) Line 11203 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundAssignmentOperator.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 1786 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.VisitExpressionWithoutStackGuard(Microsoft.CodeAnalysis.VisualBasic.BoundExpression node) Line 59 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor.VisitExpressionWithStackGuard(Integer recursionDepth, Microsoft.CodeAnalysis.VisualBasic.BoundExpression node) Line 184 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 48 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitExpressionStatement(Microsoft.CodeAnalysis.VisualBasic.BoundExpressionStatement node) Line 11517 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundExpressionStatement.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 4857 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 51 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitSequencePoint(Microsoft.CodeAnalysis.VisualBasic.BoundSequencePoint node) Line 11274 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundSequencePoint.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 2506 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 51 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitList(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement)(System.Collections.Immutable.ImmutableArray(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement) list) Line 19 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitBlock(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node) Line 11457 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.LambdaRewriter.Analysis.VisitLambda(Microsoft.CodeAnalysis.VisualBasic.BoundLambda node, Boolean convertToExpressionTree) Line 326 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.LambdaRewriter.Analysis.VisitConversion(Microsoft.CodeAnalysis.VisualBasic.BoundConversion conversion) Line 366 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundConversion.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 2123 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.VisitExpressionWithoutStackGuard(Microsoft.CodeAnalysis.VisualBasic.BoundExpression node) Line 59 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor.VisitExpressionWithStackGuard(Integer recursionDepth, Microsoft.CodeAnalysis.VisualBasic.BoundExpression node) Line 184 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 48 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitAssignmentOperator(Microsoft.CodeAnalysis.VisualBasic.BoundAssignmentOperator node) Line 11205 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundAssignmentOperator.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 1786 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.VisitExpressionWithoutStackGuard(Microsoft.CodeAnalysis.VisualBasic.BoundExpression node) Line 59 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor.VisitExpressionWithStackGuard(Microsoft.CodeAnalysis.VisualBasic.BoundExpression node) Line 203 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor.VisitExpressionWithStackGuard(Integer recursionDepth, Microsoft.CodeAnalysis.VisualBasic.BoundExpression node) Line 186 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 48 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitExpressionStatement(Microsoft.CodeAnalysis.VisualBasic.BoundExpressionStatement node) Line 11517 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundExpressionStatement.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 4857 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 51 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitSequencePoint(Microsoft.CodeAnalysis.VisualBasic.BoundSequencePoint node) Line 11274 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundSequencePoint.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 2506 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 51 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitList(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement)(System.Collections.Immutable.ImmutableArray(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement) list) Line 19 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitStatementList(Microsoft.CodeAnalysis.VisualBasic.BoundStatementList node) Line 11704 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundStatementList.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 6618 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 51 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitList(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement)(System.Collections.Immutable.ImmutableArray(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement) list) Line 19 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitBlock(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node) Line 11457 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.LambdaRewriter.Analysis.VisitBlock(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node) Line 282 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundBlock.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 4361 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 51 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitList(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement)(System.Collections.Immutable.ImmutableArray(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement) list) Line 19 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitBlock(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node) Line 11457 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.LambdaRewriter.Analysis.VisitBlock(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node) Line 282 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundBlock.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 4361 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 51 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitList(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement)(System.Collections.Immutable.ImmutableArray(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement) list) Line 19 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitBlock(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node) Line 11457 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.LambdaRewriter.Analysis.VisitBlock(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node) Line 282 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundBlock.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 4361 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 51 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitList(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement)(System.Collections.Immutable.ImmutableArray(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement) list) Line 19 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitBlock(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node) Line 11457 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.LambdaRewriter.Analysis.VisitBlock(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node) Line 282 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundBlock.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 4361 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 51 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitList(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement)(System.Collections.Immutable.ImmutableArray(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement) list) Line 19 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitBlock(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node) Line 11457 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.LambdaRewriter.Analysis.VisitBlock(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node) Line 282 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundBlock.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 4361 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 51 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitList(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement)(System.Collections.Immutable.ImmutableArray(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement) list) Line 19 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitBlock(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node) Line 11457 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.LambdaRewriter.Analysis.VisitBlock(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node) Line 282 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundBlock.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 4361 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 51 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitList(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement)(System.Collections.Immutable.ImmutableArray(Of Microsoft.CodeAnalysis.VisualBasic.BoundStatement) list) Line 19 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalker.VisitBlock(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node) Line 11457 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.LambdaRewriter.Analysis.VisitBlock(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node) Line 282 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundBlock.Accept(Microsoft.CodeAnalysis.VisualBasic.BoundTreeVisitor visitor) Line 4361 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.BoundTreeWalkerWithStackGuard.Visit(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 51 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.LambdaRewriter.Analysis.Analyze(Microsoft.CodeAnalysis.VisualBasic.BoundNode node) Line 159 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.LambdaRewriter.Analysis.AnalyzeMethodBody(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node, Microsoft.CodeAnalysis.VisualBasic.Symbols.MethodSymbol method, System.Collections.Generic.ISet(Of Microsoft.CodeAnalysis.VisualBasic.Symbol) symbolsCapturedWithoutCtor, Microsoft.CodeAnalysis.VisualBasic.BindingDiagnosticBag diagnostics) Line 139 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.LambdaRewriter.Rewrite(Microsoft.CodeAnalysis.VisualBasic.BoundBlock node, Microsoft.CodeAnalysis.VisualBasic.Symbols.MethodSymbol method, Integer methodOrdinal, Microsoft.CodeAnalysis.PooledObjects.ArrayBuilder(Of Microsoft.CodeAnalysis.CodeGen.LambdaDebugInfo) lambdaDebugInfoBuilder, Microsoft.CodeAnalysis.PooledObjects.ArrayBuilder(Of Microsoft.CodeAnalysis.CodeGen.ClosureDebugInfo) closureDebugInfoBuilder, Integer delegateRelaxationIdDispenser, Microsoft.CodeAnalysis.CodeGen.VariableSlotAllocator slotAllocatorOpt, Microsoft.CodeAnalysis.VisualBasic.TypeCompilationState CompilationState, System.Collections.Generic.ISet(Of Microsoft.CodeAnalysis.VisualBasic.Symbol) symbolsCapturedWithoutCopyCtor, Microsoft.CodeAnalysis.VisualBasic.BindingDiagnosticBag diagnostics, System.Collections.Generic.HashSet(Of Microsoft.CodeAnalysis.VisualBasic.BoundNode) rewrittenNodes) Line 160 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.Rewriter.LowerBodyOrInitializer(Microsoft.CodeAnalysis.VisualBasic.Symbols.MethodSymbol method, Integer methodOrdinal, Microsoft.CodeAnalysis.VisualBasic.BoundBlock body, Microsoft.CodeAnalysis.VisualBasic.SynthesizedSubmissionFields previousSubmissionFields, Microsoft.CodeAnalysis.VisualBasic.TypeCompilationState compilationState, Boolean instrumentForDynamicAnalysis, System.Collections.Immutable.ImmutableArray(Of Microsoft.CodeAnalysis.CodeGen.SourceSpan) dynamicAnalysisSpans, Microsoft.CodeAnalysis.CodeGen.DebugDocumentProvider debugDocumentProvider, Microsoft.CodeAnalysis.VisualBasic.BindingDiagnosticBag diagnostics, Microsoft.CodeAnalysis.CodeGen.VariableSlotAllocator lazyVariableSlotAllocator, Microsoft.CodeAnalysis.PooledObjects.ArrayBuilder(Of Microsoft.CodeAnalysis.CodeGen.LambdaDebugInfo) lambdaDebugInfoBuilder, Microsoft.CodeAnalysis.PooledObjects.ArrayBuilder(Of Microsoft.CodeAnalysis.CodeGen.ClosureDebugInfo) closureDebugInfoBuilder, Integer delegateRelaxationIdDispenser, Microsoft.CodeAnalysis.VisualBasic.StateMachineTypeSymbol stateMachineTypeOpt, Boolean allowOmissionOfConditionalCalls, Boolean isBodySynthesized) Line 89 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.MethodCompiler.LowerAndEmitMethod(Microsoft.CodeAnalysis.VisualBasic.Symbols.MethodSymbol method, Integer methodOrdinal, Microsoft.CodeAnalysis.VisualBasic.BoundBlock block, Microsoft.CodeAnalysis.VisualBasic.Binder binderOpt, Microsoft.CodeAnalysis.VisualBasic.TypeCompilationState compilationState, Microsoft.CodeAnalysis.VisualBasic.BindingDiagnosticBag diagsForCurrentMethod, Microsoft.CodeAnalysis.VisualBasic.Binder.ProcessedFieldOrPropertyInitializers processedInitializers, Microsoft.CodeAnalysis.VisualBasic.SynthesizedSubmissionFields previousSubmissionFields, Microsoft.CodeAnalysis.VisualBasic.Symbols.MethodSymbol constructorToInject, Integer delegateRelaxationIdDispenser) Line 1457 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.MethodCompiler.CompileMethod(Microsoft.CodeAnalysis.VisualBasic.Symbols.MethodSymbol method, Integer methodOrdinal, Integer withEventPropertyIdDispenser, Integer delegateRelaxationIdDispenser, System.Predicate(Of Microsoft.CodeAnalysis.VisualBasic.Symbol) filter, Microsoft.CodeAnalysis.VisualBasic.TypeCompilationState compilationState, Microsoft.CodeAnalysis.VisualBasic.Binder.ProcessedFieldOrPropertyInitializers processedInitializers, Microsoft.CodeAnalysis.VisualBasic.Binder containingTypeBinder, Microsoft.CodeAnalysis.VisualBasic.SynthesizedSubmissionFields previousSubmissionFields, Microsoft.CodeAnalysis.VisualBasic.Symbols.MethodSymbol referencedConstructor) Line 1272 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.MethodCompiler.CompileNamedType(Microsoft.CodeAnalysis.VisualBasic.Symbols.NamedTypeSymbol containingType, System.Predicate(Of Microsoft.CodeAnalysis.VisualBasic.Symbol) filter) Line 690 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.MethodCompiler.VisitNamedType(Microsoft.CodeAnalysis.VisualBasic.Symbols.NamedTypeSymbol symbol) Line 522 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.Symbols.NamedTypeSymbol.Accept(Microsoft.CodeAnalysis.VisualBasic.VisualBasicSymbolVisitor visitor) Line 1251 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.MethodCompiler.CompileNamespace(Microsoft.CodeAnalysis.VisualBasic.Symbols.NamespaceSymbol symbol) Line 510 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.MethodCompiler.VisitNamespace(Microsoft.CodeAnalysis.VisualBasic.Symbols.NamespaceSymbol symbol) Line 490 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.Symbols.NamespaceSymbol.Accept(Microsoft.CodeAnalysis.VisualBasic.VisualBasicSymbolVisitor visitor) Line 566 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.MethodCompiler.CompileMethodBodies(Microsoft.CodeAnalysis.VisualBasic.VisualBasicCompilation compilation, Microsoft.CodeAnalysis.VisualBasic.Emit.PEModuleBuilder moduleBeingBuiltOpt, Boolean emittingPdb, Boolean emitTestCoverageData, Boolean hasDeclarationErrors, System.Predicate(Of Microsoft.CodeAnalysis.VisualBasic.Symbol) filter, Microsoft.CodeAnalysis.VisualBasic.BindingDiagnosticBag diagnostics, System.Threading.CancellationToken cancellationToken) Line 246 Basic
Microsoft.CodeAnalysis.VisualBasic.dll!Microsoft.CodeAnalysis.VisualBasic.VisualBasicCompilation.CompileMethods(Microsoft.CodeAnalysis.Emit.CommonPEModuleBuilder moduleBuilder, Boolean emittingPdb, Boolean emitMetadataOnly, Boolean emitTestCoverageData, Microsoft.CodeAnalysis.DiagnosticBag diagnostics, System.Predicate(Of Microsoft.CodeAnalysis.Symbols.ISymbolInternal) filterOpt, System.Threading.CancellationToken cancellationToken) Line 2349 Basic
Microsoft.CodeAnalysis.dll!Microsoft.CodeAnalysis.Compilation.Emit(System.IO.Stream peStream, System.IO.Stream metadataPEStream, System.IO.Stream pdbStream, System.IO.Stream xmlDocumentationStream, System.IO.Stream win32Resources, System.Collections.Generic.IEnumerable manifestResources, Microsoft.CodeAnalysis.Emit.EmitOptions options, Microsoft.CodeAnalysis.IMethodSymbol debugEntryPoint, System.IO.Stream sourceLinkStream, System.Collections.Generic.IEnumerable embeddedTexts, Microsoft.CodeAnalysis.CodeGen.CompilationTestData testData, System.Threading.CancellationToken cancellationToken) Line 2497 C#
Microsoft.CodeAnalysis.dll!Microsoft.CodeAnalysis.Compilation.Emit(System.IO.Stream peStream, System.IO.Stream pdbStream, System.IO.Stream xmlDocumentationStream, System.IO.Stream win32Resources, System.Collections.Generic.IEnumerable manifestResources, Microsoft.CodeAnalysis.Emit.EmitOptions options, Microsoft.CodeAnalysis.IMethodSymbol debugEntryPoint, System.IO.Stream sourceLinkStream, System.Collections.Generic.IEnumerable embeddedTexts, System.IO.Stream metadataPEStream, System.Threading.CancellationToken cancellationToken) Line 2441 C#
Roslyn.Test.Utilities.dll!Microsoft.CodeAnalysis.DiagnosticExtensions.GetEmitDiagnostics(Microsoft.CodeAnalysis.VisualBasic.VisualBasicCompilation c, Microsoft.CodeAnalysis.Emit.EmitOptions options, System.Collections.Generic.IEnumerable manifestResources) Line 376 C#
Roslyn.Test.Utilities.dll!Microsoft.CodeAnalysis.DiagnosticExtensions.GetEmitDiagnostics(Microsoft.CodeAnalysis.VisualBasic.VisualBasicCompilation c) Line 388 C#
Microsoft.CodeAnalysis.VisualBasic.Test.Utilities.dll!Microsoft.CodeAnalysis.VisualBasic.UnitTests.CompilationUtils.VerifyUsedAssemblyReferences(System.Func(Of Microsoft.CodeAnalysis.VisualBasic.VisualBasicCompilation) createCompilationLambda) Line 69 Basic
Microsoft.CodeAnalysis.VisualBasic.Test.Utilities.dll!Microsoft.CodeAnalysis.VisualBasic.UnitTests.CompilationUtils.CreateEmptyCompilation(Microsoft.CodeAnalysis.VisualBasic.UnitTests.BasicTestSource source, System.Collections.Generic.IEnumerable(Of Microsoft.CodeAnalysis.MetadataReference) references, Microsoft.CodeAnalysis.VisualBasic.VisualBasicCompilationOptions options, Microsoft.CodeAnalysis.VisualBasic.VisualBasicParseOptions parseOptions, String assemblyName) Line 58 Basic
Microsoft.CodeAnalysis.VisualBasic.Semantic.UnitTests.dll!Microsoft.CodeAnalysis.VisualBasic.UnitTests.FlowTestBase.CompileAndGetModelAndSpan(System.Xml.Linq.XElement program, System.Collections.Generic.List(Of Microsoft.CodeAnalysis.VisualBasic.VisualBasicSyntaxNode) startNodes, System.Collections.Generic.List(Of Microsoft.CodeAnalysis.VisualBasic.VisualBasicSyntaxNode) endNodes, System.Xml.Linq.XCData ilSource, System.Xml.Linq.XElement errors, Microsoft.CodeAnalysis.VisualBasic.VisualBasicParseOptions parseOptions) Line 91 Basic
Microsoft.CodeAnalysis.VisualBasic.Semantic.UnitTests.dll!Microsoft.CodeAnalysis.VisualBasic.UnitTests.FlowTestBase.CompileAndGetModelAndSpan(Of Microsoft.CodeAnalysis.DataFlowAnalysis)(System.Xml.Linq.XElement program, System.Func(Of Microsoft.CodeAnalysis.SemanticModel, System.Collections.Generic.List(Of Microsoft.CodeAnalysis.VisualBasic.VisualBasicSyntaxNode), System.Collections.Generic.List(Of Microsoft.CodeAnalysis.VisualBasic.VisualBasicSyntaxNode), Microsoft.CodeAnalysis.DataFlowAnalysis) analysisDelegate, System.Xml.Linq.XCData ilSource, System.Xml.Linq.XElement errors) Line 74 Basic
Microsoft.CodeAnalysis.VisualBasic.Semantic.UnitTests.dll!Microsoft.CodeAnalysis.VisualBasic.UnitTests.FlowTestBase.CompileAndAnalyzeDataFlow(System.Xml.Linq.XElement program, System.Xml.Linq.XCData ilSource, System.Xml.Linq.XElement errors) Line 64 Basic
Microsoft.CodeAnalysis.VisualBasic.Semantic.UnitTests.dll!Microsoft.CodeAnalysis.VisualBasic.UnitTests.FlowAnalysisTests.WithStatement_Expression_LValue_4d() Line 8827 Basic
```
The scenario is taken from the `Microsoft.CodeAnalysis.VisualBasic.UnitTests.FlowAnalysisTests.WithStatement_Expression_LValue_4d` unit test. It will be disabled in the dotnet/features/UsedAssemblyReferences branch because it blocks the feature test hook.
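For context, the crash occurs while `LambdaRewriter.Analysis` walks a method body in which a lambda captures a member of a deeply nested `With` target. A rough sketch of the repro shape (the structure names and lambda body here are placeholders I introduced for illustration; the original test's identifiers are not preserved in this report):

```vb
Imports System

Structure SSSS
    Public a As String
    Public b As Integer
End Structure

Structure SSS
    Public s As SSSS
End Structure

Class Clazz
    Sub TEST()
        Dim x As New SSS()
        ' Nested With blocks over struct (LValue) members,
        ' with a lambda declared in the innermost block that
        ' touches a member of an enclosing With target:
        With x.s
            With .a
                Dim s As Action = Sub()
                                      ' capture of the With target
                                  End Sub
            End With
        End With
        x.ToString()
    End Sub
End Class
```

Compiling a shape like this as a library is what drives the `ExceptionUtilities.UnexpectedValue` path shown in the stack trace above.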
",1
27145,21213517653.0,IssuesEvent,2022-04-11 03:45:38,woocommerce/woocommerce,https://api.github.com/repos/woocommerce/woocommerce,opened,Standardize `build` Executor,tool: monorepo infrastructure,"
**Prerequisites (mark completed items with an [x]):**
- [x] I have checked that my issue type is not listed here https://github.com/woocommerce/woocommerce/issues/new/choose
- [x] My issue is not a security issue, support request, bug report, enhancement or feature request (Please use the link above if it is).
**Issue Description:**
All of the `build` executors for projects within the monorepo should perform any steps necessary for a project to be executed. From a cursory look, there also seem to be keys like `""outputs""` that we might want to use in order to better track changes to projects.
Ideally, we should not be relying on `pnpm run` scripts for executors. There are a number of executors provided by `@nrwl` packages that deeply integrate with Nx and should be considered in place of `pnpm` scripts. For instance, packages like `@nrwl/js:tsc` are good for compiling TypeScript. One caveat to this approach, however, is that each command can only feature a single executor. Looking at [a repository from Nx](https://github.com/nrwl/nx/blob/2f78f29483da1b7eaf8675c4042d7a064da17ce4/nx-dev/nx-dev/project.json), it seems like the standard practice is to have distinct build steps where necessary and then a `dependsOn` property for the actual `build` command.
For now, we can just support those cases with `@nrwl/workspace:run-commands`, but ideally, we can use `build-{language}` or such as a prefix. Packages that don't use Webpack for instance, might support `build-js` and `build-css` that each perform their own build steps and have a `build` that depends on those two commands.",1.0,"Standardize `build` Executor -
**Prerequisites (mark completed items with an [x]):**
- [x] I have checked that my issue type is not listed here https://github.com/woocommerce/woocommerce/issues/new/choose
- [x] My issue is not a security issue, support request, bug report, enhancement or feature request (Please use the link above if it is).
**Issue Description:**
All of the `build` executors for projects within the monorepo should perform any steps necessary for a project to be executed. From a cursory look, there also seem to be keys like `""outputs""` that we might want to use in order to better track changes to projects.
Ideally, we should not be relying on `pnpm run` scripts for executors. There are a number of executors provided by `@nrwl` packages that deeply integrate with Nx and should be considered in place of `pnpm` scripts. For instance, packages like `@nrwl/js:tsc` are good for compiling TypeScript. One caveat to this approach, however, is that each command can only feature a single executor. Looking at [a repository from Nx](https://github.com/nrwl/nx/blob/2f78f29483da1b7eaf8675c4042d7a064da17ce4/nx-dev/nx-dev/project.json), it seems like the standard practice is to have distinct build steps where necessary and then a `dependsOn` property for the actual `build` command.
For now, we can just support those cases with `@nrwl/workspace:run-commands`, but ideally, we can use `build-{language}` or such as a prefix. Packages that don't use Webpack for instance, might support `build-js` and `build-css` that each perform their own build steps and have a `build` that depends on those two commands.",0,standardize build executor prerequisites mark completed items with an i have checked that my issue type is not listed here my issue is not a security issue support request bug report enhancement or feature request please use the link above if it is issue description all of the build executors for projects within the monorepo should perform any steps necessary for a project to be executed from a cursory look there seem to be keys like outputs that we might want to use as well in order to better track changes to projects perhaps ideally we should not be relying on pnpm run scripts for executors there are a number of executors provided by nrwl packages that deeply integrate with nx and should be considered in place of pnpm scripts for instance packages like nrwl js tsc are good for compiling typescript one caveat to this approach however is that each command can only feature a single executor looking at it seems like the standard practice is to have distinct build steps where necessary and then a dependson property for the actual build command for now we can just support those cases with nrwl workspace run commands but ideally we can use build language or such as a prefix packages that don t use webpack for instance might support build js and build css that each perform their own build steps and have a build that depends on those two commands ,0
344893,10349717465.0,IssuesEvent,2019-09-04 23:37:07,oslc-op/oslc-specs,https://api.github.com/repos/oslc-op/oslc-specs,opened,Provide data in a TRS event for filtering,Core: TRS Jira: trs Priority: High Xtra: Jira,"Currently, a TRS event only references the tracked resource's URI. If a consumer of the TRS data is only interested in a subset of the exposed tracked resources, it has to consume the events and perform a GET on each referenced tracked resource in order to determine whether it is of interest.
---
_Migrated from https://issues.oasis-open.org/browse/OSLCCORE-71 (opened by @DavidJHoney; previously assigned to @jamsden)_
",1.0,"Provide data in a TRS event for filtering - Currently, a TRS event only references the tracked resource's URI. If a consumer of the TRS data is only interested in a subset of the exposed tracked resources, it has to consume the events and perform a GET on each referenced tracked resource in order to determine whether it is of interest.
---
_Migrated from https://issues.oasis-open.org/browse/OSLCCORE-71 (opened by @DavidJHoney; previously assigned to @jamsden)_
",0,provide data in a trs event for filtering currently a trs event only references the tracked resource s uri if a consumer of the trs data is only interested in a subset of the exposed tracked resources it has to consume the events and perform a get on each referenced tracked resource in order to determine whether it is of interest migrated from opened by davidjhoney previously assigned to jamsden ,0
2352,24883873307.0,IssuesEvent,2022-10-28 05:30:10,ppy/osu,https://api.github.com/repos/ppy/osu,closed,Editor crash with buggy slider repeats,ruleset:osu! area:editor type:reliability,"### Type
Crash to desktop
### Bug description
Changing the slider velocity, adjusting the length, and adding a slider repeat results in a crash
osu!lazer did not generate any logs
Event Viewer log provided
### Screenshots or videos
https://user-images.githubusercontent.com/54123532/197366912-1c224f3a-50b7-4869-8970-2f000b53c212.mp4
### Version
2022.1022.0-lazer
### Logs
Log Name: Application
Source: .NET Runtime
Date: 10/22/2022 8:01:51 PM
Event ID: 1026
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer: DESKTOP-16TONKH
Description:
Application: osu!.exe
CoreCLR Version: 6.0.222.6406
.NET Version: 6.0.2
Description: The process was terminated due to an unhandled exception.
Exception Info: System.ArgumentOutOfRangeException: Index was out of range. Must be non-negative and less than the size of the collection. (Parameter 'index')
at System.Collections.Generic.List`1.get_Item(Int32 index)
at osu.Game.Rulesets.Osu.Objects.Slider.UpdateNestedSamples()
at osu.Game.Rulesets.Osu.Objects.Slider.CreateNestedHitObjects(CancellationToken cancellationToken)
at osu.Game.Rulesets.Objects.HitObject.ApplyDefaults(ControlPointInfo controlPointInfo, IBeatmapDifficultyInfo difficulty, CancellationToken cancellationToken)
at osu.Game.Screens.Edit.EditorBeatmap.UpdateState()
at osu.Game.Screens.Edit.EditorBeatmap.Update()
at osu.Framework.Graphics.Drawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Platform.GameHost.UpdateFrame()
at osu.Framework.Threading.GameThread.processFrame()
--- End of stack trace from previous location ---
at osu.Framework.Platform.GameHost.<>c__DisplayClass133_0.b__0()
at osu.Framework.Threading.ScheduledDelegate.RunTaskInternal()
at osu.Framework.Threading.Scheduler.Update()
at osu.Framework.Threading.GameThread.processFrame()
at osu.Framework.Threading.GameThread.RunSingleFrame()
at osu.Framework.Platform.ThreadRunner.RunMainLoop()
at osu.Framework.Platform.GameHost.windowUpdate()
at osu.Framework.Platform.SDL2DesktopWindow.Run()
at osu.Framework.Platform.GameHost.Run(Game game)
at osu.Desktop.Program.Main(String[] args)
Event Xml:
102602000x80000000000000123024ApplicationDESKTOP-16TONKH
Application: osu!.exe
CoreCLR Version: 6.0.222.6406
.NET Version: 6.0.2
Description: The process was terminated due to an unhandled exception.
Exception Info: System.ArgumentOutOfRangeException: Index was out of range. Must be non-negative and less than the size of the collection. (Parameter 'index')
at System.Collections.Generic.List`1.get_Item(Int32 index)
at osu.Game.Rulesets.Osu.Objects.Slider.UpdateNestedSamples()
at osu.Game.Rulesets.Osu.Objects.Slider.CreateNestedHitObjects(CancellationToken cancellationToken)
at osu.Game.Rulesets.Objects.HitObject.ApplyDefaults(ControlPointInfo controlPointInfo, IBeatmapDifficultyInfo difficulty, CancellationToken cancellationToken)
at osu.Game.Screens.Edit.EditorBeatmap.UpdateState()
at osu.Game.Screens.Edit.EditorBeatmap.Update()
at osu.Framework.Graphics.Drawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Platform.GameHost.UpdateFrame()
at osu.Framework.Threading.GameThread.processFrame()
--- End of stack trace from previous location ---
at osu.Framework.Platform.GameHost.<>c__DisplayClass133_0.<abortExecutionFromException>b__0()
at osu.Framework.Threading.ScheduledDelegate.RunTaskInternal()
at osu.Framework.Threading.Scheduler.Update()
at osu.Framework.Threading.GameThread.processFrame()
at osu.Framework.Threading.GameThread.RunSingleFrame()
at osu.Framework.Platform.ThreadRunner.RunMainLoop()
at osu.Framework.Platform.GameHost.windowUpdate()
at osu.Framework.Platform.SDL2DesktopWindow.Run()
at osu.Framework.Platform.GameHost.Run(Game game)
at osu.Desktop.Program.Main(String[] args)
",True,"Editor crash with buggy slider repeats - ### Type
Crash to desktop
### Bug description
Changing the slider velocity, adjusting the length, and adding a slider repeat results in a crash
osu!lazer did not generate any logs
Event Viewer log provided
### Screenshots or videos
https://user-images.githubusercontent.com/54123532/197366912-1c224f3a-50b7-4869-8970-2f000b53c212.mp4
### Version
2022.1022.0-lazer
### Logs
Log Name: Application
Source: .NET Runtime
Date: 10/22/2022 8:01:51 PM
Event ID: 1026
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer: DESKTOP-16TONKH
Description:
Application: osu!.exe
CoreCLR Version: 6.0.222.6406
.NET Version: 6.0.2
Description: The process was terminated due to an unhandled exception.
Exception Info: System.ArgumentOutOfRangeException: Index was out of range. Must be non-negative and less than the size of the collection. (Parameter 'index')
at System.Collections.Generic.List`1.get_Item(Int32 index)
at osu.Game.Rulesets.Osu.Objects.Slider.UpdateNestedSamples()
at osu.Game.Rulesets.Osu.Objects.Slider.CreateNestedHitObjects(CancellationToken cancellationToken)
at osu.Game.Rulesets.Objects.HitObject.ApplyDefaults(ControlPointInfo controlPointInfo, IBeatmapDifficultyInfo difficulty, CancellationToken cancellationToken)
at osu.Game.Screens.Edit.EditorBeatmap.UpdateState()
at osu.Game.Screens.Edit.EditorBeatmap.Update()
at osu.Framework.Graphics.Drawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Platform.GameHost.UpdateFrame()
at osu.Framework.Threading.GameThread.processFrame()
--- End of stack trace from previous location ---
at osu.Framework.Platform.GameHost.<>c__DisplayClass133_0.b__0()
at osu.Framework.Threading.ScheduledDelegate.RunTaskInternal()
at osu.Framework.Threading.Scheduler.Update()
at osu.Framework.Threading.GameThread.processFrame()
at osu.Framework.Threading.GameThread.RunSingleFrame()
at osu.Framework.Platform.ThreadRunner.RunMainLoop()
at osu.Framework.Platform.GameHost.windowUpdate()
at osu.Framework.Platform.SDL2DesktopWindow.Run()
at osu.Framework.Platform.GameHost.Run(Game game)
at osu.Desktop.Program.Main(String[] args)
Event Xml:
102602000x80000000000000123024ApplicationDESKTOP-16TONKH
Application: osu!.exe
CoreCLR Version: 6.0.222.6406
.NET Version: 6.0.2
Description: The process was terminated due to an unhandled exception.
Exception Info: System.ArgumentOutOfRangeException: Index was out of range. Must be non-negative and less than the size of the collection. (Parameter 'index')
at System.Collections.Generic.List`1.get_Item(Int32 index)
at osu.Game.Rulesets.Osu.Objects.Slider.UpdateNestedSamples()
at osu.Game.Rulesets.Osu.Objects.Slider.CreateNestedHitObjects(CancellationToken cancellationToken)
at osu.Game.Rulesets.Objects.HitObject.ApplyDefaults(ControlPointInfo controlPointInfo, IBeatmapDifficultyInfo difficulty, CancellationToken cancellationToken)
at osu.Game.Screens.Edit.EditorBeatmap.UpdateState()
at osu.Game.Screens.Edit.EditorBeatmap.Update()
at osu.Framework.Graphics.Drawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
at osu.Framework.Platform.GameHost.UpdateFrame()
at osu.Framework.Threading.GameThread.processFrame()
--- End of stack trace from previous location ---
at osu.Framework.Platform.GameHost.<>c__DisplayClass133_0.<abortExecutionFromException>b__0()
at osu.Framework.Threading.ScheduledDelegate.RunTaskInternal()
at osu.Framework.Threading.Scheduler.Update()
at osu.Framework.Threading.GameThread.processFrame()
at osu.Framework.Threading.GameThread.RunSingleFrame()
at osu.Framework.Platform.ThreadRunner.RunMainLoop()
at osu.Framework.Platform.GameHost.windowUpdate()
at osu.Framework.Platform.SDL2DesktopWindow.Run()
at osu.Framework.Platform.GameHost.Run(Game game)
at osu.Desktop.Program.Main(String[] args)
",1,editor crash with buggy slider repeats type crash to desktop bug description change slider velocity adjust length and add slider repeat will result in crash osu lazer did not generate any logs event viewer log provided screenshots or videos version lazer logs log name application source net runtime date pm event id task category none level error keywords classic user n a computer desktop description application osu exe coreclr version net version description the process was terminated due to an unhandled exception exception info system argumentoutofrangeexception index was out of range must be non negative and less than the size of the collection parameter index at system collections generic list get item index at osu game rulesets osu objects slider updatenestedsamples at osu game rulesets osu objects slider createnestedhitobjects cancellationtoken cancellationtoken at osu game rulesets objects hitobject applydefaults controlpointinfo controlpointinfo ibeatmapdifficultyinfo difficulty cancellationtoken cancellationtoken at osu game screens edit editorbeatmap updatestate at osu game screens edit editorbeatmap update at osu framework graphics drawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers 
compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework platform gamehost updateframe at osu framework threading gamethread processframe end of stack trace from previous location at osu framework platform gamehost c b at osu framework threading scheduleddelegate runtaskinternal at osu framework threading scheduler update at osu framework threading gamethread processframe at osu framework threading gamethread runsingleframe at osu framework platform threadrunner runmainloop at osu framework platform gamehost windowupdate at osu framework platform run at osu framework platform gamehost run game game at osu desktop program main string args event xml event xmlns application desktop application osu exe coreclr version net version description the process was terminated due to an unhandled exception exception info system argumentoutofrangeexception index was out of range must be non negative and less than the size of the collection parameter index at system collections generic list get item index at osu game rulesets osu objects slider updatenestedsamples at osu game rulesets osu objects slider createnestedhitobjects cancellationtoken cancellationtoken at osu game rulesets objects hitobject applydefaults controlpointinfo controlpointinfo ibeatmapdifficultyinfo difficulty cancellationtoken 
cancellationtoken at osu game screens edit editorbeatmap updatestate at osu game screens edit editorbeatmap update at osu framework graphics drawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework platform gamehost updateframe at osu framework threading gamethread processframe end of stack trace from previous location at osu framework platform gamehost lt gt c lt abortexecutionfromexception gt b at osu framework threading scheduleddelegate runtaskinternal at osu framework threading scheduler 
update at osu framework threading gamethread processframe at osu framework threading gamethread runsingleframe at osu framework platform threadrunner runmainloop at osu framework platform gamehost windowupdate at osu framework platform run at osu framework platform gamehost run game game at osu desktop program main string args ,1
892,11562284401.0,IssuesEvent,2020-02-20 01:57:36,dotnet/runtime,https://api.github.com/repos/dotnet/runtime,closed,Test failed: Interop\COM\NETClients\IDispatch\NETClientIDispatch\NETClientIDispatch.cmd,arch-x64 area-Interop os-windows tenet-reliability,"**Job:**
[coreclr-gcstress0x3-gcstress0xc #20191006.1 Run Test Pri1 Windows_NT x64 checked](https://dev.azure.com/dnceng/public/_build/results?buildId=379231)
**Detail:**
https://helix.dot.net/api/2019-06-17/jobs/2670f608-f646-4388-9819-cf699ea0ecd9/workitems/Interop/console
**OS & Arch:**
windows x64
**Mode:**
COMPlus_TieredCompilation=0
COMPlus_GCStress=0x3
**Note:**
I tried to repro this failure but got the following information:
```
Test Failure: System.Runtime.InteropServices.COMException (0x80040154): Retrieving the COM class factory for component with CLSID {0F8ACD0C-ECE0-4F2A-BD1B-6BFCA93A0726} failed due to the following error: 80040154 Class not registered (0x80040154 (REGDB_E_CLASSNOTREG)).
at NetClient.Program.Validate_Numeric_In_ReturnByRef()
at NetClient.Program.Main(String[] doNotUse)
Expected: 100
Actual: 101
END EXECUTION - FAILED
FAILED
```
**Log:**
```
Interop\COM\NETClients\IDispatch\NETClientIDispatch\NETClientIDispatch.cmd [FAIL]
Assert failure(PID 3044 [0x00000be4], Thread: 7624 [0x1dc8]): !CREATE_CHECK_STRING(pMT && pMT->Validate())
CORECLR! Object::ValidateInner + 0x14A (0x00007ff9`41d02bfa)
CORECLR! Object::Validate + 0x13A (0x00007ff9`41d02a6a)
CORECLR! WKS::GCHeap::Promote + 0x9F (0x00007ff9`4215368f)
CORECLR! GCFrame::GcScanRoots + 0x6C (0x00007ff9`41c77dec)
CORECLR! GcStackCrawlCallBack + 0x386 (0x00007ff9`420462b6)
CORECLR! Thread::MakeStackwalkerCallback + 0x52 (0x00007ff9`41aa54ca)
CORECLR! Thread::StackWalkFramesEx + 0x17A (0x00007ff9`41aa7542)
CORECLR! Thread::StackWalkFrames + 0x184 (0x00007ff9`41aa7328)
CORECLR! ScanStackRoots + 0x28D (0x00007ff9`421daddd)
CORECLR! GCToEEInterface::GcScanRoots + 0x1DA (0x00007ff9`421d99a6)
File: f:\workspace.10\_work\1\s\src\vm\object.cpp Line: 597
Image: C:\dotnetbuild\work\2670f608-f646-4388-9819-cf699ea0ecd9\Payload\CoreRun.exe
Return code: 1
Raw output file: C:\dotnetbuild\work\2670f608-f646-4388-9819-cf699ea0ecd9\Work\d2a02d31-1a0f-4dec-bca1-2d30c4347b53\Exec\Interop\COM\Reports\Interop.COM\NETClients\IDispatch\NETClientIDispatch\NETClientIDispatch.output.txt
Raw output:
BEGIN EXECUTION
""C:\dotnetbuild\work\2670f608-f646-4388-9819-cf699ea0ecd9\Payload\corerun.exe"" NETClientIDispatch.dll
Calling DoubleNumeric_ReturnByRef ...
Call to DoubleNumeric_ReturnByRef complete
Calling Add_Float_ReturnAndUpdateByRef ...
Call to Add_Float_ReturnAndUpdateByRef complete: 0.1 + 0.2 = 0.3; 0.3 == 0.3
Calling Add_Double_ReturnAndUpdateByRef ...
Call to Add_Double_ReturnAndUpdateByRef complete: 0.1 + 0.2 = 0.30000000000000004; 0.30000000000000004 == 0.30000000000000004
Calling TriggerException with Disp 127...
Expected: 100
Actual: -1073740286
END EXECUTION - FAILED
FAILED
Test Harness Exitcode is : 1
```",True,"Test failed: Interop\COM\NETClients\IDispatch\NETClientIDispatch\NETClientIDispatch.cmd - **Job:**
[coreclr-gcstress0x3-gcstress0xc #20191006.1 Run Test Pri1 Windows_NT x64 checked](https://dev.azure.com/dnceng/public/_build/results?buildId=379231)
**Detail:**
https://helix.dot.net/api/2019-06-17/jobs/2670f608-f646-4388-9819-cf699ea0ecd9/workitems/Interop/console
**OS & Arch:**
windows x64
**Mode:**
COMPlus_TieredCompilation=0
COMPlus_GCStress=0x3
**Note:**
I tried to repro this failure, but got the following information:
```
Test Failure: System.Runtime.InteropServices.COMException (0x80040154): Retrieving the COM class factory for component with CLSID {0F8ACD0C-ECE0-4F2A-BD1B-6BFCA93A0726} failed due to the following error: 80040154 Class not registered (0x80040154 (REGDB_E_CLASSNOTREG)).
at NetClient.Program.Validate_Numeric_In_ReturnByRef()
at NetClient.Program.Main(String[] doNotUse)
Expected: 100
Actual: 101
END EXECUTION - FAILED
FAILED
```
**Log:**
```
Interop\COM\NETClients\IDispatch\NETClientIDispatch\NETClientIDispatch.cmd [FAIL]
Assert failure(PID 3044 [0x00000be4], Thread: 7624 [0x1dc8]): !CREATE_CHECK_STRING(pMT && pMT->Validate())
CORECLR! Object::ValidateInner + 0x14A (0x00007ff9`41d02bfa)
CORECLR! Object::Validate + 0x13A (0x00007ff9`41d02a6a)
CORECLR! WKS::GCHeap::Promote + 0x9F (0x00007ff9`4215368f)
CORECLR! GCFrame::GcScanRoots + 0x6C (0x00007ff9`41c77dec)
CORECLR! GcStackCrawlCallBack + 0x386 (0x00007ff9`420462b6)
CORECLR! Thread::MakeStackwalkerCallback + 0x52 (0x00007ff9`41aa54ca)
CORECLR! Thread::StackWalkFramesEx + 0x17A (0x00007ff9`41aa7542)
CORECLR! Thread::StackWalkFrames + 0x184 (0x00007ff9`41aa7328)
CORECLR! ScanStackRoots + 0x28D (0x00007ff9`421daddd)
CORECLR! GCToEEInterface::GcScanRoots + 0x1DA (0x00007ff9`421d99a6)
File: f:\workspace.10\_work\1\s\src\vm\object.cpp Line: 597
Image: C:\dotnetbuild\work\2670f608-f646-4388-9819-cf699ea0ecd9\Payload\CoreRun.exe
Return code: 1
Raw output file: C:\dotnetbuild\work\2670f608-f646-4388-9819-cf699ea0ecd9\Work\d2a02d31-1a0f-4dec-bca1-2d30c4347b53\Exec\Interop\COM\Reports\Interop.COM\NETClients\IDispatch\NETClientIDispatch\NETClientIDispatch.output.txt
Raw output:
BEGIN EXECUTION
""C:\dotnetbuild\work\2670f608-f646-4388-9819-cf699ea0ecd9\Payload\corerun.exe"" NETClientIDispatch.dll
Calling DoubleNumeric_ReturnByRef ...
Call to DoubleNumeric_ReturnByRef complete
Calling Add_Float_ReturnAndUpdateByRef ...
Call to Add_Float_ReturnAndUpdateByRef complete: 0.1 + 0.2 = 0.3; 0.3 == 0.3
Calling Add_Double_ReturnAndUpdateByRef ...
Call to Add_Double_ReturnAndUpdateByRef complete: 0.1 + 0.2 = 0.30000000000000004; 0.30000000000000004 == 0.30000000000000004
Calling TriggerException with Disp 127...
Expected: 100
Actual: -1073740286
END EXECUTION - FAILED
FAILED
Test Harness Exitcode is : 1
```",1,test failed interop com netclients idispatch netclientidispatch netclientidispatch cmd job detail os arch windows mode complus tieredcompilation complus gcstress note i try to repro this failure but get the following information test failure system runtime interopservices comexception retrieving the com class factory for component with clsid failed due to the following error class not registered regdb e classnotreg at netclient program validate numeric in returnbyref at netclient program main string donotuse expected actual end execution failed failed log interop com netclients idispatch netclientidispatch netclientidispatch cmd assert failure pid thread create check string pmt pmt validate coreclr object validateinner coreclr object validate coreclr wks gcheap promote coreclr gcframe gcscanroots coreclr gcstackcrawlcallback coreclr thread makestackwalkercallback coreclr thread stackwalkframesex coreclr thread stackwalkframes coreclr scanstackroots coreclr gctoeeinterface gcscanroots file f workspace work s src vm object cpp line image c dotnetbuild work payload corerun exe return code raw output file c dotnetbuild work work exec interop com reports interop com netclients idispatch netclientidispatch netclientidispatch output txt raw output begin execution c dotnetbuild work payload corerun exe netclientidispatch dll calling doublenumeric returnbyref call to doublenumeric returnbyref complete calling add float returnandupdatebyref call to add float returnandupdatebyref complete calling add double returnandupdatebyref call to add double returnandupdatebyref complete calling triggerexception with disp expected actual end execution failed failed test harness exitcode is ,1
1767,3368413898.0,IssuesEvent,2015-11-22 23:02:03,rust-lang/rust,https://api.github.com/repos/rust-lang/rust,closed,DragonFlyBSD builder cannot build rustc,A-infrastructure,"See http://buildbot.rust-lang.org/builders/auto-dragonflybsd-64-opt/builds/554/steps/compile/logs/stdio
```
rustc: x86_64-unknown-dragonfly/stage0/lib/rustlib/x86_64-unknown-dragonfly/lib/libstd
../src/libstd/sys/unix/stack_overflow.rs:48:16: 48:24 error: unresolved import `libc::SIGSTKSZ`. There is no `SIGSTKSZ` in `libc` [E0432]
../src/libstd/sys/unix/stack_overflow.rs:48 SIGSTKSZ, sighandler_t};
^~~~~~~~
```",1.0,"DragonFlyBSD builder cannot build rustc - See http://buildbot.rust-lang.org/builders/auto-dragonflybsd-64-opt/builds/554/steps/compile/logs/stdio
```
rustc: x86_64-unknown-dragonfly/stage0/lib/rustlib/x86_64-unknown-dragonfly/lib/libstd
../src/libstd/sys/unix/stack_overflow.rs:48:16: 48:24 error: unresolved import `libc::SIGSTKSZ`. There is no `SIGSTKSZ` in `libc` [E0432]
../src/libstd/sys/unix/stack_overflow.rs:48 SIGSTKSZ, sighandler_t};
^~~~~~~~
```",0,dragonflybsd builder cannot build rustc see rustc unknown dragonfly lib rustlib unknown dragonfly lib libstd src libstd sys unix stack overflow rs error unresolved import libc sigstksz there is no sigstksz in libc src libstd sys unix stack overflow rs sigstksz sighandler t ,0
726,10130669539.0,IssuesEvent,2019-08-01 17:32:42,microsoft/VFSForGit,https://api.github.com/repos/microsoft/VFSForGit,closed,Mac: Validate filesystem is configured to be case insensitive before cloning and mounting,affects: correctness affects: polish affects: reliability platform: macOS pri2,Currently VFS4G only supports Mac when its file system is configured to be case insensitive.,True,Mac: Validate filesystem is configured to be case insensitive before cloning and mounting - Currently VFS4G only supports Mac when its file system is configured to be case insensitive.,1,mac validate filesystem is configured to be case insensitive before cloning and mounting currently only supports mac when its file system is configured to be case insensitive ,1
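The validation the VFS4G issue asks for can be probed at runtime. A minimal sketch (in Python rather than VFS4G's C#, purely for illustration; `volume_is_case_insensitive` is a hypothetical helper, not part of VFS4G): create a file under the target volume with one casing and check whether a differently-cased name resolves to the same file.

```python
import os
import tempfile


def volume_is_case_insensitive(path: str) -> bool:
    """Probe whether the filesystem at `path` treats names case-insensitively.

    Creates a temporary file with a mixed-case name, then checks whether the
    same name in a different casing resolves to it.
    """
    with tempfile.TemporaryDirectory(dir=path) as d:
        probe = os.path.join(d, "CaseProbe.tmp")
        with open(probe, "w") as f:
            f.write("probe")
        # On a case-insensitive volume the lowercased name maps to the same file.
        return os.path.exists(os.path.join(d, "caseprobe.tmp"))
```

A mount/clone step could run a check like this up front and abort with a clear error on a case-sensitive volume, instead of failing later in subtler ways.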
105336,11447775029.0,IssuesEvent,2020-02-06 00:54:57,xamcat/mobcat-samples,https://api.github.com/repos/xamcat/mobcat-samples,opened,Update Readme with steps for using mobcat-library packages,documentation,"- Update current readme with steps to use packages
- Update readme build badges with new pipelines",1.0,"Update Readme with steps for using mobcat-library packages - - Update current readme with steps to use packages
- Update readme build badges with new pipelines",0,update readme with steps for using mobcat library packages update current readme with steps to use packages update readme build badges with new pipelines,0
27183,21335210127.0,IssuesEvent,2022-04-18 13:49:20,ZcashFoundation/zebra,https://api.github.com/repos/ZcashFoundation/zebra,closed,Add dev zones for testing DNS seeders,A-infrastructure C-enhancement P-Low :snowflake: A-network,"## Motivation
There doesn't seem to be any way to test a newly launched ZFND seeder from Zebra.
We'd need to specify its IP address, and the canonical ZFND seeder DNS name, like:
```sh
dig @34.72.199.73 testnet.seeder.zfnd.org
```
But Zebra only uses `tokio`'s basic DNS resolver API.
### Design
We could add `dev.testnet.seeder.zfnd.org` and `dev.mainnet.seeder.zfnd.org` DNS names, so we can test development versions of the seeders before deployment.
(We'd need to change the seeder to serve the `dev.*` zones in addition to `{test,main}net.seeder.zfnd.org`. And then add those zones to our DNS records pointing at a fixed Google Cloud IP.)
## Related Work
Discovered as part of:
- #2797
- #2804",1.0,"Add dev zones for testing DNS seeders - ## Motivation
There doesn't seem to be any way to test a newly launched ZFND seeder from Zebra.
We'd need to specify its IP address, and the canonical ZFND seeder DNS name, like:
```sh
dig @34.72.199.73 testnet.seeder.zfnd.org
```
But Zebra only uses `tokio`'s basic DNS resolver API.
### Design
We could add `dev.testnet.seeder.zfnd.org` and `dev.mainnet.seeder.zfnd.org` DNS names, so we can test development versions of the seeders before deployment.
(We'd need to change the seeder to serve the `dev.*` zones in addition to `{test,main}net.seeder.zfnd.org`. And then add those zones to our DNS records pointing at a fixed Google Cloud IP.)
## Related Work
Discovered as part of:
- #2797
- #2804",0,add dev zones for testing dns seeders motivation there doesn t seem to be any way to test a newly launched zfnd seeder from zebra we d need to specify its ip address and the canonical zfnd seeder dns name like sh dig testnet seeder zfnd org but zebra only uses tokio s basic dns resolver api design we could add dev testnet seeder zfnd org and dev mainnet seeder zfnd org dns names so we can test development versions of the seeders before deployment we d need to change the seeder to serve the dev zones in addition to test main net seeder zfnd org and then add those zones to our dns records pointing at a fixed google cloud ip related work discovered as part of ,0
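The `dev.*` zone naming proposed in the Zebra issue above is mechanical enough to sketch. A minimal illustration (Python; `dev_zone` and `dig_command` are hypothetical helpers, not part of Zebra or the seeder):

```python
from typing import List, Optional


def dev_zone(canonical: str) -> str:
    """Map a canonical seeder zone to its proposed dev.* counterpart."""
    return "dev." + canonical


def dig_command(zone: str, server_ip: Optional[str] = None) -> List[str]:
    """Build a dig invocation for querying a seeder, optionally at a fixed IP."""
    cmd = ["dig"]
    if server_ip is not None:
        # dig's @server syntax queries that nameserver directly.
        cmd.append("@" + server_ip)
    cmd.append(zone)
    return cmd
```

For example, `dig_command(dev_zone("testnet.seeder.zfnd.org"), "34.72.199.73")` yields `['dig', '@34.72.199.73', 'dev.testnet.seeder.zfnd.org']`, the direct-query form the issue uses to test a development seeder before DNS cutover.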
3059,32043443468.0,IssuesEvent,2023-09-22 21:42:53,department-of-veterans-affairs/va.gov-team,https://api.github.com/repos/department-of-veterans-affairs/va.gov-team,opened,Linting discovery: Why are there so many rules and why is spacing and other formatting included?,platform-reliability-team,"## Issue Description
Our collection of linting rules has grown quite large. This has caused the check in GitHub Actions that runs against main to take longer than our site takes to build, and has created many conflicts with developer work streams.
---
## Tasks
- [ ] Look through linting errors and our linting rule list to see what linting rules may not need to be present
- [ ] Make recommendations to smooth out the amount of time linting takes to run
- [ ] Consider recommendations to change some errors into warnings, to avoid work streams being blocked over insignificant issues like formatting and spacing.
- [ ] Put recommendations in a ticket for implementation.
- [ ] Schedule a discussion around the rule adjustments that include relevant parties (Likely Clint, Joe, and key front end people).
## Acceptance Criteria
- [ ] A new ticket exists with recommendations for linting adjustments.
- [ ] A meeting is scheduled to come to a consensus on the path forward.
---
",True,"Linting discovery: Why are there so many rules and why is spacing and other formatting included? - ## Issue Description
Our collection of linting rules has grown quite large. This has caused the check in GitHub Actions that runs against main to take longer than our site takes to build, and has created many conflicts with developer work streams.
---
## Tasks
- [ ] Look through linting errors and our linting rule list to see what linting rules may not need to be present
- [ ] Make recommendations to smooth out the amount of time linting takes to run
- [ ] Consider recommendations to change some errors into warnings, to avoid work streams being blocked over insignificant issues like formatting and spacing.
- [ ] Put recommendations in a ticket for implementation.
- [ ] Schedule a discussion around the rule adjustments that include relevant parties (Likely Clint, Joe, and key front end people).
## Acceptance Criteria
- [ ] A new ticket exists with recommendations for linting adjustments.
- [ ] A meeting is scheduled to come to a consensus on the path forward.
---
",1,linting discovery why are there so many rules and why is spacing and other formatting included issue description our collection of linting rules has inflated to quite a large size this has caused the check in github actions that runs against main to take longer than our site does to build as well as caused many conflicts with developer work streams tasks look through linting errors and our linting rule list to see what linting rules may not need to be present make recommendations to smooth out the amount of time linting takes to run consider recommendations to change some errors into warnings to avoid work streams being blocked over insignificant issues like formatting and spacing put recommendations in a ticket for implementation schedule a discussion around the rule adjustments that include relevant parties likely clint joe and key front end people acceptance criteria a new ticket exists with recommendations for linting adjustments a meeting is scheduled to come to a consensus on the path forward ,1
136535,12717872778.0,IssuesEvent,2020-06-24 06:20:29,godotengine/godot,https://api.github.com/repos/godotengine/godot,closed,[3.2.2beta3-Gles2] WorldEnvironment is not working!,documentation enhancement junior job,"Godot 3.2.2 beta3
GLES2 android
WorldEnvironment is not working in play mode or on Android, but it works perfectly in the editor
Editor

Playmode

",1.0,"[3.2.2beta3-Gles2] WorldEnvironment is not working! - Godot 3.2.2 beta3
GLES2 android
WorldEnvironment is not working in play mode or on Android, but it works perfectly in the editor
Editor

Playmode

",0, worldenvironment is not working godot android worldenvironment is not working in playmode and android but it is working perfectly in editor editor playmode ,0
664164,22241657359.0,IssuesEvent,2022-06-09 06:11:52,altoxml/schema,https://api.github.com/repos/altoxml/schema,closed,Clarify implicit reading order,8 published high priority,"There are currently no mentions of reading order anywhere in the standard and most people treat the sequence of elements as the order these elements should be read, e.g. the n-th `<String>` in a `<TextLine>` is the n-th word a human reader would read in that line.
Apparently this isn't evident to everyone out there. [These](https://twitter.com/tillgrallert/status/1369761658437054464) tweets document that Transkribus's ALTO output sorts `<String>` elements from left to right which causes an inversion for RTL text. We should probably clarify that `<TextBlock>/<TextLine>/<String>` are to be ordered in a way that corresponds to the text flow.",1.0,"Clarify implicit reading order - There are currently no mentions of reading order anywhere in the standard and most people treat the sequence of elements as the order these elements should be read, e.g. the n-th `<String>` in a `<TextLine>` is the n-th word a human reader would read in that line.
Apparently this isn't evident to everyone out there. [These](https://twitter.com/tillgrallert/status/1369761658437054464) tweets document that Transkribus's ALTO output sorts `<String>` elements from left to right which causes an inversion for RTL text. We should probably clarify that `<TextBlock>/<TextLine>/<String>` are to be ordered in a way that corresponds to the text flow.",0,clarify implicit reading order there are currently no mentions of reading order anywhere in the standard and most people treat the sequence of elements as the order these elements should be read e g the n th in a is the n th word a human reader would read in that line apparently this isn t evident to everyone out there tweets document that transkribus s alto output sorts elements from left to right which causes an inversion for rtl text we should probably clarify that are to be ordered in a way that corresponds to the text flow ,0
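The inversion described in the ALTO issue above is easy to reproduce. A minimal sketch (Python, purely for illustration; each `(hpos, text)` pair stands in for a `<String>` element's HPOS attribute and content): sorting by HPOS ascending reverses the reading order of a right-to-left line, while sorting descending for RTL (or simply honoring document order) keeps it intact.

```python
def naive_ltr_order(strings):
    """Sort String elements left-to-right by HPOS (the problematic export order)."""
    return [text for _, text in sorted(strings, key=lambda s: s[0])]


def reading_order(strings, rtl=False):
    """Sort by HPOS, descending for right-to-left lines."""
    return [text for _, text in sorted(strings, key=lambda s: s[0], reverse=rtl)]


# An RTL line: the first word a reader sees sits at the largest HPOS.
rtl_line = [(300, "word1"), (200, "word2"), (100, "word3")]

print(naive_ltr_order(rtl_line))          # ['word3', 'word2', 'word1'] -- inverted
print(reading_order(rtl_line, rtl=True))  # ['word1', 'word2', 'word3']
```

This is why emitting `<String>` elements in coordinate order rather than text-flow order breaks RTL consumers.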
68960,9222556375.0,IssuesEvent,2019-03-11 23:22:16,adobe/spectrum-css,https://api.github.com/repos/adobe/spectrum-css,opened,Show Markup broken for some examples,documentation,"### Expected Behavior
Show Markup does what it promises to do, all of the time
### Actual Behavior
Broken promises, broken hearts\
#### Steps to Reproduce
1. http://opensource.adobe.com/spectrum-css/2.8.0/docs/#toast
2. Click ""Show Markup""
3. Cry
#### Spectrum-CSS version
2.8.0
",1.0,"Show Markup broken for some examples - ### Expected Behavior
Show Markup does what it promises to do, all of the time
### Actual Behavior
Broken promises, broken hearts\
#### Steps to Reproduce
1. http://opensource.adobe.com/spectrum-css/2.8.0/docs/#toast
2. Click ""Show Markup""
3. Cry
#### Spectrum-CSS version
2.8.0
",0,show markup broken for some examples expected behavior show markup does what it promises to do all of the time actual behavior broken promises broken hearts steps to reproduce click show markup cry spectrum css version ,0
16551,9828803489.0,IssuesEvent,2019-06-15 15:00:28,AOSC-Dev/aosc-os-abbs,https://api.github.com/repos/AOSC-Dev/aosc-os-abbs,opened,"chromium, google-chrome: security update to 75.0.3770.90",security to-stable upgrade,"
**CVE IDs:** CVE-2019-5828, CVE-2019-5829, CVE-2019-5830, CVE-2019-5831, CVE-2019-5832, CVE-2019-5833, CVE-2019-5835, CVE-2019-5836, CVE-2019-5837, CVE-2019-5838, CVE-2019-5839, CVE-2019-5840, CVE-2019-5842
**Other security advisory IDs:** ASA-201906-4, ASA-201906-11
**Descriptions:**
https://chromereleases.googleblog.com/2019/06/stable-channel-update-for-desktop_13.html
[$N/A][[961413](https://crbug.com/961413)] High CVE-2019-5842: Use-after-free in Blink. Reported by BUGFENSE Anonymous Bug Bounties https://bugfense.io on 2019-05-09
https://chromereleases.googleblog.com/2019/06/stable-channel-update-for-desktop.html
[$5000][[956597](https://crbug.com/956597)] High CVE-2019-5828: Use after free in ServiceWorker. Reported by leecraso of Beihang University and Guang Gong of Alpha Team, Qihoo 360 on 2019-04-25
[$500][[958533](https://crbug.com/958533)] High CVE-2019-5829: Use after free in Download Manager. Reported by Lucas Pinheiro, Microsoft Browser Vulnerability Research on 2019-05-01
[$TBD][[665766](https://crbug.com/665766)] Medium CVE-2019-5830: Incorrectly credentialed requests in CORS. Reported by Andrew Krasichkov, Yandex Security Team on 2016-11-16
[$TBD][[950328](https://crbug.com/950328)] Medium CVE-2019-5831: Incorrect map processing in V8. Reported by yngwei(JiaWei, Yin) of IIE Varas and sakura of Tencent Xuanwu Lab on 2019-04-07
[$TBD][[959390](https://crbug.com/959390)] Medium CVE-2019-5832: Incorrect CORS handling in XHR. Reported by Sergey Shekyan (Shape Security) on 2019-05-03
[$N/A][[945067](https://crbug.com/945067)] Medium CVE-2019-5833: Inconsistent security UI placement. Reported by Khalil Zhani on 2019-03-23
~~[$N/A][[962368](https://crbug.com/962368)] Medium CVE-2019-5834: URL spoof in Omnibox on iOS. Reported by Khalil Zhani on 2019-05-13~~ (not applicable to AOSC OS)
[$1000][[939239](https://crbug.com/939239)] Medium CVE-2019-5835: Out of bounds read in Swiftshader. Reported by Wenxiang Qian of Tencent Blade Team on 2019-03-07
[$1000][[947342](https://crbug.com/947342)] Medium CVE-2019-5836: Heap buffer overflow in Angle. Reported by Omair on 2019-03-29
[$500][[918293](https://crbug.com/918293)] Medium CVE-2019-5837: Cross-origin resources size disclosure in Appcache . Reported by Adam Iwaniuk on 2018-12-30
[$500][[893087](https://crbug.com/893087)] Low CVE-2019-5838: Overly permissive tab access in Extensions. Reported by David Erceg on 2018-10-08
[$500][[925614](https://crbug.com/925614)] Low CVE-2019-5839: Incorrect handling of certain code points in Blink. Reported by Masato Kinugawa on 2019-01-26
[$N/A][[951782](https://crbug.com/951782)] Low CVE-2019-5840: Popup blocker bypass. Reported by Eliya Stein, Jerome Dangu on 2019-04-11
**Architectural progress:**
- [ ] AMD64 `amd64`
- [ ] 32-bit Optional Environment `optenv32`
- [ ] AArch64 `arm64`
- [ ] ARMv7 `armel`
- [ ] PowerPC 64-bit BE `ppc64`
- [ ] PowerPC 32-bit BE `powerpc`
- [ ] RISC-V 64-bit `riscv64`
",True,"chromium, google-chrome: security update to 75.0.3770.90 -
**CVE IDs:** CVE-2019-5828, CVE-2019-5829, CVE-2019-5830, CVE-2019-5831, CVE-2019-5832, CVE-2019-5833, CVE-2019-5835, CVE-2019-5836, CVE-2019-5837, CVE-2019-5838, CVE-2019-5839, CVE-2019-5840, CVE-2019-5842
**Other security advisory IDs:** ASA-201906-4, ASA-201906-11
**Descriptions:**
https://chromereleases.googleblog.com/2019/06/stable-channel-update-for-desktop_13.html
[$N/A][[961413](https://crbug.com/961413)] High CVE-2019-5842: Use-after-free in Blink. Reported by BUGFENSE Anonymous Bug Bounties https://bugfense.io on 2019-05-09
https://chromereleases.googleblog.com/2019/06/stable-channel-update-for-desktop.html
[$5000][[956597](https://crbug.com/956597)] High CVE-2019-5828: Use after free in ServiceWorker. Reported by leecraso of Beihang University and Guang Gong of Alpha Team, Qihoo 360 on 2019-04-25
[$500][[958533](https://crbug.com/958533)] High CVE-2019-5829: Use after free in Download Manager. Reported by Lucas Pinheiro, Microsoft Browser Vulnerability Research on 2019-05-01
[$TBD][[665766](https://crbug.com/665766)] Medium CVE-2019-5830: Incorrectly credentialed requests in CORS. Reported by Andrew Krasichkov, Yandex Security Team on 2016-11-16
[$TBD][[950328](https://crbug.com/950328)] Medium CVE-2019-5831: Incorrect map processing in V8. Reported by yngwei(JiaWei, Yin) of IIE Varas and sakura of Tencent Xuanwu Lab on 2019-04-07
[$TBD][[959390](https://crbug.com/959390)] Medium CVE-2019-5832: Incorrect CORS handling in XHR. Reported by Sergey Shekyan (Shape Security) on 2019-05-03
[$N/A][[945067](https://crbug.com/945067)] Medium CVE-2019-5833: Inconsistent security UI placement. Reported by Khalil Zhani on 2019-03-23
~~[$N/A][[962368](https://crbug.com/962368)] Medium CVE-2019-5834: URL spoof in Omnibox on iOS. Reported by Khalil Zhani on 2019-05-13~~ (not applicable to AOSC OS)
[$1000][[939239](https://crbug.com/939239)] Medium CVE-2019-5835: Out of bounds read in Swiftshader. Reported by Wenxiang Qian of Tencent Blade Team on 2019-03-07
[$1000][[947342](https://crbug.com/947342)] Medium CVE-2019-5836: Heap buffer overflow in Angle. Reported by Omair on 2019-03-29
[$500][[918293](https://crbug.com/918293)] Medium CVE-2019-5837: Cross-origin resources size disclosure in Appcache . Reported by Adam Iwaniuk on 2018-12-30
[$500][[893087](https://crbug.com/893087)] Low CVE-2019-5838: Overly permissive tab access in Extensions. Reported by David Erceg on 2018-10-08
[$500][[925614](https://crbug.com/925614)] Low CVE-2019-5839: Incorrect handling of certain code points in Blink. Reported by Masato Kinugawa on 2019-01-26
[$N/A][[951782](https://crbug.com/951782)] Low CVE-2019-5840: Popup blocker bypass. Reported by Eliya Stein, Jerome Dangu on 2019-04-11
**Architectural progress:**
- [ ] AMD64 `amd64`
- [ ] 32-bit Optional Environment `optenv32`
- [ ] AArch64 `arm64`
- [ ] ARMv7 `armel`
- [ ] PowerPC 64-bit BE `ppc64`
- [ ] PowerPC 32-bit BE `powerpc`
- [ ] RISC-V 64-bit `riscv64`
",0,chromium google chrome security update to cve ids cve cve cve cve cve cve cve cve cve cve cve cve cve other security advisory ids asa asa descriptions high cve use after free in blink reported by bugfense anonymous bug bounties on high cve use after free in serviceworker reported by leecraso of beihang university and guang gong of alpha team qihoo on high cve use after free in download manager reported by lucas pinheiro microsoft browser vulnerability research on medium cve incorrectly credentialed requests in cors reported by andrew krasichkov yandex security team on medium cve incorrect map processing in reported by yngwei jiawei yin of iie varas and sakura of tecent xuanwu lab on medium cve incorrect cors handling in xhr reported by sergey shekyan shape security on medium cve inconsistent security ui placement reported by khalil zhani on medium cve url spoof in omnibox on ios reported by khalil zhani on not applicable to aosc os medium cve out of bounds read in swiftshader reported by wenxiang qian of tencent blade team on medium cve heap buffer overflow in angle reported by omair on medium cve cross origin resources size disclosure in appcache reported by adam iwaniuk on low cve overly permissive tab access in extensions reported by david erceg on low cve incorrect handling of certain code points in blink reported by masato kinugawa on low cve popup blocker bypass reported by eliya stein jerome dangu on architectural progress bit optional environment armel powerpc bit be powerpc bit be powerpc risc v bit ,0
2012,3249508583.0,IssuesEvent,2015-10-18 07:38:38,ember-cli/ember-cli,https://api.github.com/repos/ember-cli/ember-cli,closed,ember server is dramatically slow on Windows 10 + SSD + 16G Memory,performance windows,"I'm a newbie to Ember. When I start to build an existing large project (10k files), it takes 19703 ms for the initial build and 20k+ ms for an incremental build when I change anything in any file. Is there any way to improve this? My colleague using a Mac builds much faster, in only a few seconds.
I tried to install ember-cli-windows and several similar tools, but none of them helped. Is there any way to solve this problem?
The project will concat all vendor js files into ""vendor.js"" and all project files into ""app.js""; is there any way to avoid concatenating and processing the vendor files every time? This is my incremental build time:

It's really depressing to work with live-reload: you have to wait half a minute on every save, and I like to save every few seconds.
",True,"ember server is dramatically slow on Windows 10 + SSD + 16G Memory - I'm a newbie to Ember. When I start to build an existing large project (10k files), it takes 19703 ms for the initial build and 20k+ ms for an incremental build when I change anything in any file. Is there any way to improve this? My colleague using a Mac builds much faster, in only a few seconds.
I tried to install ember-cli-windows and several similar tools, but none of them helped. Is there any way to solve this problem?
The project will concat all vendor js files into ""vendor.js"" and all project files into ""app.js""; is there any way to avoid concatenating and processing the vendor files every time? This is my incremental build time:

It's really depressing to work with live-reload: you have to wait half a minute on every save, and I like to save every few seconds.
",0,ember server is dramatic slowly at windows ssd memory i m newbie of ember when start to build an existing large project files it takes ms to initial and takes ms to increase build when i change anything in any file is there any way to improve it because my colleague using mac is quite fast and only few seconds to build up i tried to install ember cli windows and several similar tools but not one help it there any way to solve this problem the project will concat all vendor js files into vendor js and all project files into an app js is there anyway to avoid it concat and process vendor files every time this is my increase build time it s really depressed when work with live reload you have to wait for half minute at every save and i like to save at every few seconds ,0
187857,14433311848.0,IssuesEvent,2020-12-07 04:29:54,eclipse/openj9,https://api.github.com/repos/eclipse/openj9,reopened,Windows LambdaLoadTest hang,test failure,"https://ci.eclipse.org/openj9/job/Test_openjdk8_j9_special.system_x86-32_windows_Personal/33
LambdaLoadTest_OpenJ9_NonLinux_special_24
variation: Mode107-OSRG
JVM_OPTIONS: -Xgcpolicy:optthruput -Xdebug -Xrunjdwp:transport=dt_socket,address=8888,server=y,onthrow=no.pkg.foo,launch=echo -Xjit:enableOSR,enableOSROnGuardFailure,count=1,disableAsyncCompilation
No diagnostic files generated.
```
10:22:03.554 - Completed 3.0%. Number of tests started=6
10:22:23.882 - Completed 3.0%. Number of tests started=6 (+0)
10:22:43.460 - Completed 3.0%. Number of tests started=6 (+0)
10:23:03.476 - Completed 3.0%. Number of tests started=6 (+0)
[... dozens of identical lines elided: Completed 3.0%, Number of tests started=6 (+0), repeating every ~20 seconds ...]
10:52:23.477 - Completed 3.0%. Number of tests started=6 (+0)
10:52:43.492 - Completed 3.0%. Number of tests started=6 (+0)
10:53:03.399 - Completed 3.0%. Number of tests started=6 (+0)
10:53:23.414 - Completed 3.0%. Number of tests started=6 (+0)
10:53:43.430 - Completed 3.0%. Number of tests started=6 (+0)
10:54:03.446 - Completed 3.0%. Number of tests started=6 (+0)
10:54:23.461 - Completed 3.0%. Number of tests started=6 (+0)
10:54:43.477 - Completed 3.0%. Number of tests started=6 (+0)
10:55:03.493 - Completed 3.0%. Number of tests started=6 (+0)
10:55:23.399 - Completed 3.0%. Number of tests started=6 (+0)
10:55:43.414 - Completed 3.0%. Number of tests started=6 (+0)
10:56:03.430 - Completed 3.0%. Number of tests started=6 (+0)
10:56:23.446 - Completed 3.0%. Number of tests started=6 (+0)
10:56:43.461 - Completed 3.0%. Number of tests started=6 (+0)
10:57:03.477 - Completed 3.0%. Number of tests started=6 (+0)
10:57:23.493 - Completed 3.0%. Number of tests started=6 (+0)
10:57:43.399 - Completed 3.0%. Number of tests started=6 (+0)
10:58:03.414 - Completed 3.0%. Number of tests started=6 (+0)
10:58:23.430 - Completed 3.0%. Number of tests started=6 (+0)
10:58:43.446 - Completed 3.0%. Number of tests started=6 (+0)
10:59:03.461 - Completed 3.0%. Number of tests started=6 (+0)
10:59:23.477 - Completed 3.0%. Number of tests started=6 (+0)
10:59:43.493 - Completed 3.0%. Number of tests started=6 (+0)
11:00:03.399 - Completed 3.0%. Number of tests started=6 (+0)
11:00:23.415 - Completed 3.0%. Number of tests started=6 (+0)
11:00:43.430 - Completed 3.0%. Number of tests started=6 (+0)
11:01:03.446 - Completed 3.0%. Number of tests started=6 (+0)
11:01:23.461 - Completed 3.0%. Number of tests started=6 (+0)
11:01:43.477 - Completed 3.0%. Number of tests started=6 (+0)
11:02:03.493 - Completed 3.0%. Number of tests started=6 (+0)
11:02:23.399 - Completed 3.0%. Number of tests started=6 (+0)
11:02:43.415 - Completed 3.0%. Number of tests started=6 (+0)
11:03:03.430 - Completed 3.0%. Number of tests started=6 (+0)
11:03:23.446 - Completed 3.0%. Number of tests started=6 (+0)
11:03:43.462 - Completed 3.0%. Number of tests started=6 (+0)
11:04:03.477 - Completed 3.0%. Number of tests started=6 (+0)
11:04:23.493 - Completed 3.0%. Number of tests started=6 (+0)
11:04:43.399 - Completed 3.0%. Number of tests started=6 (+0)
11:05:03.415 - Completed 3.0%. Number of tests started=6 (+0)
11:05:23.430 - Completed 3.0%. Number of tests started=6 (+0)
11:05:43.446 - Completed 3.0%. Number of tests started=6 (+0)
11:06:03.462 - Completed 3.0%. Number of tests started=6 (+0)
11:06:23.477 - Completed 3.0%. Number of tests started=6 (+0)
11:06:43.493 - Completed 3.0%. Number of tests started=6 (+0)
11:07:03.399 - Completed 3.0%. Number of tests started=6 (+0)
11:07:23.415 - Completed 3.0%. Number of tests started=6 (+0)
11:07:43.430 - Completed 3.0%. Number of tests started=6 (+0)
11:08:03.446 - Completed 3.0%. Number of tests started=6 (+0)
11:08:23.462 - Completed 3.0%. Number of tests started=6 (+0)
11:08:43.477 - Completed 3.0%. Number of tests started=6 (+0)
11:09:03.493 - Completed 3.0%. Number of tests started=6 (+0)
11:09:23.399 - Completed 3.0%. Number of tests started=6 (+0)
11:09:43.415 - Completed 3.0%. Number of tests started=6 (+0)
11:10:03.430 - Completed 3.0%. Number of tests started=6 (+0)
11:10:23.446 - Completed 3.0%. Number of tests started=6 (+0)
11:10:43.462 - Completed 3.0%. Number of tests started=6 (+0)
11:11:03.477 - Completed 3.0%. Number of tests started=6 (+0)
11:11:23.493 - Completed 3.0%. Number of tests started=6 (+0)
11:11:43.399 - Completed 3.0%. Number of tests started=6 (+0)
11:12:03.415 - Completed 3.0%. Number of tests started=6 (+0)
11:12:23.431 - Completed 3.0%. Number of tests started=6 (+0)
11:12:43.446 - Completed 3.0%. Number of tests started=6 (+0)
11:13:03.462 - Completed 3.0%. Number of tests started=6 (+0)
11:13:23.477 - Completed 3.0%. Number of tests started=6 (+0)
11:13:43.493 - Completed 3.0%. Number of tests started=6 (+0)
11:14:03.399 - Completed 3.0%. Number of tests started=6 (+0)
11:14:23.415 - Completed 3.0%. Number of tests started=6 (+0)
11:14:43.431 - Completed 3.0%. Number of tests started=6 (+0)
11:15:03.446 - Completed 3.0%. Number of tests started=6 (+0)
11:15:23.462 - Completed 3.0%. Number of tests started=6 (+0)
11:15:43.478 - Completed 3.0%. Number of tests started=6 (+0)
11:16:03.493 - Completed 3.0%. Number of tests started=6 (+0)
11:16:23.399 - Completed 3.0%. Number of tests started=6 (+0)
11:16:43.415 - Completed 3.0%. Number of tests started=6 (+0)
11:17:03.431 - Completed 3.0%. Number of tests started=6 (+0)
11:17:23.446 - Completed 3.0%. Number of tests started=6 (+0)
11:17:43.462 - Completed 3.0%. Number of tests started=6 (+0)
11:18:03.478 - Completed 3.0%. Number of tests started=6 (+0)
11:18:23.493 - Completed 3.0%. Number of tests started=6 (+0)
11:18:43.400 - Completed 3.0%. Number of tests started=6 (+0)
11:19:03.415 - Completed 3.0%. Number of tests started=6 (+0)
11:19:23.431 - Completed 3.0%. Number of tests started=6 (+0)
11:19:43.446 - Completed 3.0%. Number of tests started=6 (+0)
11:20:03.462 - Completed 3.0%. Number of tests started=6 (+0)
11:20:23.478 - Completed 3.0%. Number of tests started=6 (+0)
11:20:43.493 - Completed 3.0%. Number of tests started=6 (+0)
11:21:03.400 - Completed 3.0%. Number of tests started=6 (+0)
11:21:23.415 - Completed 3.0%. Number of tests started=6 (+0)
11:21:43.431 - Completed 3.0%. Number of tests started=6 (+0)
11:22:03.446 - Completed 3.0%. Number of tests started=6 (+0)
```",1.0,"Windows LambdaLoadTest hang - https://ci.eclipse.org/openj9/job/Test_openjdk8_j9_special.system_x86-32_windows_Personal/33
LambdaLoadTest_OpenJ9_NonLinux_special_24
variation: Mode107-OSRG
JVM_OPTIONS: -Xgcpolicy:optthruput -Xdebug -Xrunjdwp:transport=dt_socket,address=8888,server=y,onthrow=no.pkg.foo,launch=echo -Xjit:enableOSR,enableOSROnGuardFailure,count=1,disableAsyncCompilation
No diagnostic files generated.
```
10:22:03.554 - Completed 3.0%. Number of tests started=6
10:22:23.882 - Completed 3.0%. Number of tests started=6 (+0)
10:22:43.460 - Completed 3.0%. Number of tests started=6 (+0)
10:23:03.476 - Completed 3.0%. Number of tests started=6 (+0)
10:23:23.492 - Completed 3.0%. Number of tests started=6 (+0)
10:23:43.398 - Completed 3.0%. Number of tests started=6 (+0)
10:24:03.413 - Completed 3.0%. Number of tests started=6 (+0)
10:24:23.429 - Completed 3.0%. Number of tests started=6 (+0)
10:24:43.445 - Completed 3.0%. Number of tests started=6 (+0)
10:25:03.460 - Completed 3.0%. Number of tests started=6 (+0)
10:25:23.476 - Completed 3.0%. Number of tests started=6 (+0)
10:25:43.492 - Completed 3.0%. Number of tests started=6 (+0)
10:26:03.398 - Completed 3.0%. Number of tests started=6 (+0)
10:26:23.414 - Completed 3.0%. Number of tests started=6 (+0)
10:26:43.429 - Completed 3.0%. Number of tests started=6 (+0)
10:27:03.445 - Completed 3.0%. Number of tests started=6 (+0)
10:27:23.460 - Completed 3.0%. Number of tests started=6 (+0)
10:27:43.476 - Completed 3.0%. Number of tests started=6 (+0)
10:28:03.492 - Completed 3.0%. Number of tests started=6 (+0)
10:28:23.398 - Completed 3.0%. Number of tests started=6 (+0)
10:28:43.414 - Completed 3.0%. Number of tests started=6 (+0)
10:29:03.429 - Completed 3.0%. Number of tests started=6 (+0)
10:29:23.445 - Completed 3.0%. Number of tests started=6 (+0)
10:29:43.461 - Completed 3.0%. Number of tests started=6 (+0)
10:30:03.476 - Completed 3.0%. Number of tests started=6 (+0)
10:30:23.492 - Completed 3.0%. Number of tests started=6 (+0)
10:30:43.398 - Completed 3.0%. Number of tests started=6 (+0)
10:31:03.414 - Completed 3.0%. Number of tests started=6 (+0)
10:31:23.429 - Completed 3.0%. Number of tests started=6 (+0)
10:31:43.445 - Completed 3.0%. Number of tests started=6 (+0)
10:32:03.461 - Completed 3.0%. Number of tests started=6 (+0)
10:32:23.414 - Completed 3.0%. Number of tests started=6 (+0)
10:32:43.398 - Completed 3.0%. Number of tests started=6 (+0)
10:33:03.429 - Completed 3.0%. Number of tests started=6 (+0)
10:33:23.445 - Completed 3.0%. Number of tests started=6 (+0)
10:33:43.461 - Completed 3.0%. Number of tests started=6 (+0)
10:34:03.476 - Completed 3.0%. Number of tests started=6 (+0)
10:34:23.492 - Completed 3.0%. Number of tests started=6 (+0)
10:34:43.398 - Completed 3.0%. Number of tests started=6 (+0)
10:35:03.414 - Completed 3.0%. Number of tests started=6 (+0)
10:35:23.429 - Completed 3.0%. Number of tests started=6 (+0)
10:35:43.445 - Completed 3.0%. Number of tests started=6 (+0)
10:36:03.461 - Completed 3.0%. Number of tests started=6 (+0)
10:36:23.476 - Completed 3.0%. Number of tests started=6 (+0)
10:36:43.492 - Completed 3.0%. Number of tests started=6 (+0)
10:37:03.398 - Completed 3.0%. Number of tests started=6 (+0)
10:37:23.414 - Completed 3.0%. Number of tests started=6 (+0)
10:37:43.430 - Completed 3.0%. Number of tests started=6 (+0)
10:38:03.445 - Completed 3.0%. Number of tests started=6 (+0)
10:38:23.461 - Completed 3.0%. Number of tests started=6 (+0)
10:38:43.476 - Completed 3.0%. Number of tests started=6 (+0)
10:39:03.492 - Completed 3.0%. Number of tests started=6 (+0)
10:39:23.398 - Completed 3.0%. Number of tests started=6 (+0)
10:39:43.414 - Completed 3.0%. Number of tests started=6 (+0)
10:40:03.430 - Completed 3.0%. Number of tests started=6 (+0)
10:40:23.445 - Completed 3.0%. Number of tests started=6 (+0)
10:40:43.461 - Completed 3.0%. Number of tests started=6 (+0)
10:41:03.476 - Completed 3.0%. Number of tests started=6 (+0)
10:41:23.492 - Completed 3.0%. Number of tests started=6 (+0)
10:41:43.398 - Completed 3.0%. Number of tests started=6 (+0)
10:42:03.414 - Completed 3.0%. Number of tests started=6 (+0)
10:42:23.430 - Completed 3.0%. Number of tests started=6 (+0)
10:42:43.445 - Completed 3.0%. Number of tests started=6 (+0)
10:43:03.461 - Completed 3.0%. Number of tests started=6 (+0)
10:43:23.477 - Completed 3.0%. Number of tests started=6 (+0)
10:43:43.492 - Completed 3.0%. Number of tests started=6 (+0)
10:44:03.398 - Completed 3.0%. Number of tests started=6 (+0)
10:44:23.414 - Completed 3.0%. Number of tests started=6 (+0)
10:44:43.430 - Completed 3.0%. Number of tests started=6 (+0)
10:45:03.445 - Completed 3.0%. Number of tests started=6 (+0)
10:45:23.461 - Completed 3.0%. Number of tests started=6 (+0)
10:45:43.477 - Completed 3.0%. Number of tests started=6 (+0)
10:46:03.492 - Completed 3.0%. Number of tests started=6 (+0)
10:46:23.398 - Completed 3.0%. Number of tests started=6 (+0)
10:46:43.414 - Completed 3.0%. Number of tests started=6 (+0)
10:47:03.430 - Completed 3.0%. Number of tests started=6 (+0)
10:47:23.508 - Completed 3.0%. Number of tests started=6 (+0)
10:47:43.430 - Completed 3.0%. Number of tests started=6 (+0)
10:48:03.461 - Completed 3.0%. Number of tests started=6 (+0)
10:48:23.399 - Completed 3.0%. Number of tests started=6 (+0)
10:48:43.414 - Completed 3.0%. Number of tests started=6 (+0)
10:49:03.430 - Completed 3.0%. Number of tests started=6 (+0)
10:49:23.446 - Completed 3.0%. Number of tests started=6 (+0)
10:49:43.461 - Completed 3.0%. Number of tests started=6 (+0)
10:50:03.477 - Completed 3.0%. Number of tests started=6 (+0)
10:50:23.492 - Completed 3.0%. Number of tests started=6 (+0)
10:50:43.399 - Completed 3.0%. Number of tests started=6 (+0)
10:51:03.414 - Completed 3.0%. Number of tests started=6 (+0)
10:51:23.430 - Completed 3.0%. Number of tests started=6 (+0)
10:51:43.446 - Completed 3.0%. Number of tests started=6 (+0)
10:52:03.461 - Completed 3.0%. Number of tests started=6 (+0)
10:52:23.477 - Completed 3.0%. Number of tests started=6 (+0)
10:52:43.492 - Completed 3.0%. Number of tests started=6 (+0)
10:53:03.399 - Completed 3.0%. Number of tests started=6 (+0)
10:53:23.414 - Completed 3.0%. Number of tests started=6 (+0)
10:53:43.430 - Completed 3.0%. Number of tests started=6 (+0)
10:54:03.446 - Completed 3.0%. Number of tests started=6 (+0)
10:54:23.461 - Completed 3.0%. Number of tests started=6 (+0)
10:54:43.477 - Completed 3.0%. Number of tests started=6 (+0)
10:55:03.493 - Completed 3.0%. Number of tests started=6 (+0)
10:55:23.399 - Completed 3.0%. Number of tests started=6 (+0)
10:55:43.414 - Completed 3.0%. Number of tests started=6 (+0)
10:56:03.430 - Completed 3.0%. Number of tests started=6 (+0)
10:56:23.446 - Completed 3.0%. Number of tests started=6 (+0)
10:56:43.461 - Completed 3.0%. Number of tests started=6 (+0)
10:57:03.477 - Completed 3.0%. Number of tests started=6 (+0)
10:57:23.493 - Completed 3.0%. Number of tests started=6 (+0)
10:57:43.399 - Completed 3.0%. Number of tests started=6 (+0)
10:58:03.414 - Completed 3.0%. Number of tests started=6 (+0)
10:58:23.430 - Completed 3.0%. Number of tests started=6 (+0)
10:58:43.446 - Completed 3.0%. Number of tests started=6 (+0)
10:59:03.461 - Completed 3.0%. Number of tests started=6 (+0)
10:59:23.477 - Completed 3.0%. Number of tests started=6 (+0)
10:59:43.493 - Completed 3.0%. Number of tests started=6 (+0)
11:00:03.399 - Completed 3.0%. Number of tests started=6 (+0)
11:00:23.415 - Completed 3.0%. Number of tests started=6 (+0)
11:00:43.430 - Completed 3.0%. Number of tests started=6 (+0)
11:01:03.446 - Completed 3.0%. Number of tests started=6 (+0)
11:01:23.461 - Completed 3.0%. Number of tests started=6 (+0)
11:01:43.477 - Completed 3.0%. Number of tests started=6 (+0)
11:02:03.493 - Completed 3.0%. Number of tests started=6 (+0)
11:02:23.399 - Completed 3.0%. Number of tests started=6 (+0)
11:02:43.415 - Completed 3.0%. Number of tests started=6 (+0)
11:03:03.430 - Completed 3.0%. Number of tests started=6 (+0)
11:03:23.446 - Completed 3.0%. Number of tests started=6 (+0)
11:03:43.462 - Completed 3.0%. Number of tests started=6 (+0)
11:04:03.477 - Completed 3.0%. Number of tests started=6 (+0)
11:04:23.493 - Completed 3.0%. Number of tests started=6 (+0)
11:04:43.399 - Completed 3.0%. Number of tests started=6 (+0)
11:05:03.415 - Completed 3.0%. Number of tests started=6 (+0)
11:05:23.430 - Completed 3.0%. Number of tests started=6 (+0)
11:05:43.446 - Completed 3.0%. Number of tests started=6 (+0)
11:06:03.462 - Completed 3.0%. Number of tests started=6 (+0)
11:06:23.477 - Completed 3.0%. Number of tests started=6 (+0)
11:06:43.493 - Completed 3.0%. Number of tests started=6 (+0)
11:07:03.399 - Completed 3.0%. Number of tests started=6 (+0)
11:07:23.415 - Completed 3.0%. Number of tests started=6 (+0)
11:07:43.430 - Completed 3.0%. Number of tests started=6 (+0)
11:08:03.446 - Completed 3.0%. Number of tests started=6 (+0)
11:08:23.462 - Completed 3.0%. Number of tests started=6 (+0)
11:08:43.477 - Completed 3.0%. Number of tests started=6 (+0)
11:09:03.493 - Completed 3.0%. Number of tests started=6 (+0)
11:09:23.399 - Completed 3.0%. Number of tests started=6 (+0)
11:09:43.415 - Completed 3.0%. Number of tests started=6 (+0)
11:10:03.430 - Completed 3.0%. Number of tests started=6 (+0)
11:10:23.446 - Completed 3.0%. Number of tests started=6 (+0)
11:10:43.462 - Completed 3.0%. Number of tests started=6 (+0)
11:11:03.477 - Completed 3.0%. Number of tests started=6 (+0)
11:11:23.493 - Completed 3.0%. Number of tests started=6 (+0)
11:11:43.399 - Completed 3.0%. Number of tests started=6 (+0)
11:12:03.415 - Completed 3.0%. Number of tests started=6 (+0)
11:12:23.431 - Completed 3.0%. Number of tests started=6 (+0)
11:12:43.446 - Completed 3.0%. Number of tests started=6 (+0)
11:13:03.462 - Completed 3.0%. Number of tests started=6 (+0)
11:13:23.477 - Completed 3.0%. Number of tests started=6 (+0)
11:13:43.493 - Completed 3.0%. Number of tests started=6 (+0)
11:14:03.399 - Completed 3.0%. Number of tests started=6 (+0)
11:14:23.415 - Completed 3.0%. Number of tests started=6 (+0)
11:14:43.431 - Completed 3.0%. Number of tests started=6 (+0)
11:15:03.446 - Completed 3.0%. Number of tests started=6 (+0)
11:15:23.462 - Completed 3.0%. Number of tests started=6 (+0)
11:15:43.478 - Completed 3.0%. Number of tests started=6 (+0)
11:16:03.493 - Completed 3.0%. Number of tests started=6 (+0)
11:16:23.399 - Completed 3.0%. Number of tests started=6 (+0)
11:16:43.415 - Completed 3.0%. Number of tests started=6 (+0)
11:17:03.431 - Completed 3.0%. Number of tests started=6 (+0)
11:17:23.446 - Completed 3.0%. Number of tests started=6 (+0)
11:17:43.462 - Completed 3.0%. Number of tests started=6 (+0)
11:18:03.478 - Completed 3.0%. Number of tests started=6 (+0)
11:18:23.493 - Completed 3.0%. Number of tests started=6 (+0)
11:18:43.400 - Completed 3.0%. Number of tests started=6 (+0)
11:19:03.415 - Completed 3.0%. Number of tests started=6 (+0)
11:19:23.431 - Completed 3.0%. Number of tests started=6 (+0)
11:19:43.446 - Completed 3.0%. Number of tests started=6 (+0)
11:20:03.462 - Completed 3.0%. Number of tests started=6 (+0)
11:20:23.478 - Completed 3.0%. Number of tests started=6 (+0)
11:20:43.493 - Completed 3.0%. Number of tests started=6 (+0)
11:21:03.400 - Completed 3.0%. Number of tests started=6 (+0)
11:21:23.415 - Completed 3.0%. Number of tests started=6 (+0)
11:21:43.431 - Completed 3.0%. Number of tests started=6 (+0)
11:22:03.446 - Completed 3.0%. Number of tests started=6 (+0)
```",0,windows lambdaloadtest hang lambdaloadtest nonlinux special variation osrg jvm options xgcpolicy optthruput xdebug xrunjdwp transport dt socket address server y onthrow no pkg foo launch echo xjit enableosr enableosronguardfailure count disableasynccompilation no diagnostic files generated completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started 
completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests 
started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of 
tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started completed number of tests started ,0
252074,18987878954.0,IssuesEvent,2021-11-22 00:47:54,the-eric-kwok/HP-Pavilion-bc015tx-Hackintosh,https://api.github.com/repos/the-eric-kwok/HP-Pavilion-bc015tx-Hackintosh,closed,[Request] Fix speakers producing no sound in macOS after rebooting from Windows,documentation enhancement,"**Is your feature request related to a problem? Please describe.**
After rebooting from Windows into macOS, the speakers produce no sound.
**Describe alternatives you've considered**
Uninstalling the Realtek audio driver and switching to the Windows built-in HDA driver: https://www.jianshu.com/p/1584d5fbee09
**Additional context**
This problem appears to be caused by the audio driver on the Windows side.
I found a boot configuration for the HP cb073tx that fixes this by replacing the warm reboot with a full power-off (cold) reboot: https://github.com/zty199/HP_Pavilion_15-cb073tx_Hackintosh/commit/af6ed1c925348d013b578b6729c7c77da401009b
",1.0,"[Request]请求修复从 Windows 重启进入 macOS 后扬声器没有声音的问题 - **Is your feature request related to a problem? Please describe.**
从 Windows 重启进入 macOS 后扬声器没有声音
**Describe alternatives you've considered**
卸载 Realtek 声卡驱动,切换为 Windows 自带的 HDA 驱动:https://www.jianshu.com/p/1584d5fbee09
**Additional context**
这个问题好像是 Windows 下的声卡驱动导致的。
搜到一个 HP cb073tx 的引导文件,把热重启替换成断电重启修复的:https://github.com/zty199/HP_Pavilion_15-cb073tx_Hackintosh/commit/af6ed1c925348d013b578b6729c7c77da401009b
",0, 请求修复从 windows 重启进入 macos 后扬声器没有声音的问题 is your feature request related to a problem please describe 从 windows 重启进入 macos 后扬声器没有声音 describe alternatives you ve considered 卸载 realtek 声卡驱动,切换为 windows 自带的 hda 驱动: additional context 这个问题好像是 windows 下的声卡驱动导致的。 搜到一个 hp 的引导文件,把热重启替换成断电重启修复的: ,0
15139,3927169028.0,IssuesEvent,2016-04-23 11:28:21,MarlinFirmware/Marlin,https://api.github.com/repos/MarlinFirmware/Marlin,closed,Synchronized vs unsynchronized commands,Documentation Issue Inactive,"Hello!
Is there any documentation about what commands are synchronized/buffered and which ones are executed immediately?
I see a lot of confusion about that point. The [documentation for M400](http://www.marlinfirmware.org/index.php/M400) states:
This command should rarely be needed since non-movement commands should already wait,
but M400 can be useful as a workaround for badly-behaved commands.
But looking at the code, most of the non-movement commands such as `M104`, `M106`, `M42`, and `M280` don't call `st_synchronize()` and thus appear to execute immediately (well, as soon as the command buffer is processed, but without waiting for the motion queue to finish).
In this GitHub issue tracker I found several conflicting statements about whether `M106` is synchronized or not.
What's the situation? Can this be documented clearly? Thank you! :)",1.0,"",0,,0
311267,26779324108.0,IssuesEvent,2023-01-31 19:44:52,elastic/kibana,https://api.github.com/repos/elastic/kibana,closed,Failing test: Chrome UI Functional Tests.test/functional/apps/dashboard_elements/controls/options_list·ts - dashboard elements dashboard elements Controls Dashboard options list integration Interactions between options list and dashboard Selections made in control apply to dashboard Shows available options in options list,Team:Presentation failed-test,"A test failed on a tracked branch
```
Error: retry.try timeout: Error: expected [ 'No options found' ] to sort of equal [ 'hiss',
'ruff',
'bark',
'grrr',
'meow',
'growl',
'grr',
'bow ow ow' ]
at Assertion.assert (node_modules/@kbn/expect/expect.js:100:11)
at Assertion.eql (node_modules/@kbn/expect/expect.js:244:8)
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-2-1a3ea35e6a60a407/elastic/kibana-on-merge/kibana/test/functional/apps/dashboard_elements/controls/options_list.ts:180:86
at runMicrotasks ()
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at runAttempt (test/common/services/retry/retry_for_success.ts:29:15)
at retryForSuccess (test/common/services/retry/retry_for_success.ts:68:21)
at RetryService.try (test/common/services/retry/retry.ts:31:12)
at ensureAvailableOptionsEql (test/functional/apps/dashboard_elements/controls/options_list.ts:179:9)
at Context. (test/functional/apps/dashboard_elements/controls/options_list.ts:269:11)
at onFailure (test/common/services/retry/retry_for_success.ts:17:9)
at retryForSuccess (test/common/services/retry/retry_for_success.ts:59:13)
at RetryService.try (test/common/services/retry/retry.ts:31:12)
at ensureAvailableOptionsEql (test/functional/apps/dashboard_elements/controls/options_list.ts:179:9)
at Context. (test/functional/apps/dashboard_elements/controls/options_list.ts:269:11)
at Object.apply (node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/15884#e44788dd-b40c-4e52-adfd-fc3140d097e1)
",1.0,"Failing test: Chrome UI Functional Tests.test/functional/apps/dashboard_elements/controls/options_list·ts - dashboard elements dashboard elements Controls Dashboard options list integration Interactions between options list and dashboard Selections made in control apply to dashboard Shows available options in options list - A test failed on a tracked branch
```
Error: retry.try timeout: Error: expected [ 'No options found' ] to sort of equal [ 'hiss',
'ruff',
'bark',
'grrr',
'meow',
'growl',
'grr',
'bow ow ow' ]
at Assertion.assert (node_modules/@kbn/expect/expect.js:100:11)
at Assertion.eql (node_modules/@kbn/expect/expect.js:244:8)
at /var/lib/buildkite-agent/builds/kb-n2-4-spot-2-1a3ea35e6a60a407/elastic/kibana-on-merge/kibana/test/functional/apps/dashboard_elements/controls/options_list.ts:180:86
at runMicrotasks ()
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at runAttempt (test/common/services/retry/retry_for_success.ts:29:15)
at retryForSuccess (test/common/services/retry/retry_for_success.ts:68:21)
at RetryService.try (test/common/services/retry/retry.ts:31:12)
at ensureAvailableOptionsEql (test/functional/apps/dashboard_elements/controls/options_list.ts:179:9)
at Context. (test/functional/apps/dashboard_elements/controls/options_list.ts:269:11)
at onFailure (test/common/services/retry/retry_for_success.ts:17:9)
at retryForSuccess (test/common/services/retry/retry_for_success.ts:59:13)
at RetryService.try (test/common/services/retry/retry.ts:31:12)
at ensureAvailableOptionsEql (test/functional/apps/dashboard_elements/controls/options_list.ts:179:9)
at Context. (test/functional/apps/dashboard_elements/controls/options_list.ts:269:11)
at Object.apply (node_modules/@kbn/test/target_node/functional_test_runner/lib/mocha/wrap_function.js:87:16)
```
First failure: [CI Build - main](https://buildkite.com/elastic/kibana-on-merge/builds/15884#e44788dd-b40c-4e52-adfd-fc3140d097e1)
",0,failing test chrome ui functional tests test functional apps dashboard elements controls options list·ts dashboard elements dashboard elements controls dashboard options list integration interactions between options list and dashboard selections made in control apply to dashboard shows available options in options list a test failed on a tracked branch error retry try timeout error expected to sort of equal hiss ruff bark grrr meow growl grr bow ow ow at assertion assert node modules kbn expect expect js at assertion eql node modules kbn expect expect js at var lib buildkite agent builds kb spot elastic kibana on merge kibana test functional apps dashboard elements controls options list ts at runmicrotasks at processticksandrejections node internal process task queues at runattempt test common services retry retry for success ts at retryforsuccess test common services retry retry for success ts at retryservice try test common services retry retry ts at ensureavailableoptionseql test functional apps dashboard elements controls options list ts at context test functional apps dashboard elements controls options list ts at onfailure test common services retry retry for success ts at retryforsuccess test common services retry retry for success ts at retryservice try test common services retry retry ts at ensureavailableoptionseql test functional apps dashboard elements controls options list ts at context test functional apps dashboard elements controls options list ts at object apply node modules kbn test target node functional test runner lib mocha wrap function js first failure ,0
269550,23448647604.0,IssuesEvent,2022-08-15 22:44:29,pytorch/pytorch,https://api.github.com/repos/pytorch/pytorch,closed,DISABLED test_aot_autograd_exhaustive_special_erfcx_cpu_float32 (__main__.TestEagerFusionOpInfoCPU),module: flaky-tests skipped module: functorch,"Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_aot_autograd_exhaustive_special_erfcx_cpu_float32&suite=TestEagerFusionOpInfoCPU&file=/var/lib/jenkins/workspace/functorch/test/test_pythonkey.py) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/7840370218).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT BE ALARMED THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_aot_autograd_exhaustive_special_erfcx_cpu_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
cc @zou3519 @Chillee @samdow",1.0,"DISABLED test_aot_autograd_exhaustive_special_erfcx_cpu_float32 (__main__.TestEagerFusionOpInfoCPU) - Platforms: linux
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_aot_autograd_exhaustive_special_erfcx_cpu_float32&suite=TestEagerFusionOpInfoCPU&file=/var/lib/jenkins/workspace/functorch/test/test_pythonkey.py) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/7840370218).
Over the past 3 hours, it has been determined flaky in 2 workflow(s) with 2 failures and 2 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT BE ALARMED THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_aot_autograd_exhaustive_special_erfcx_cpu_float32`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
cc @zou3519 @Chillee @samdow",0,disabled test aot autograd exhaustive special erfcx cpu main testeagerfusionopinfocpu platforms linux this test was disabled because it is failing in ci see and the most recent trunk over the past hours it has been determined flaky in workflow s with failures and successes debugging instructions after clicking on the recent samples link do not be alarmed the ci is green we now shield flaky tests from developers so ci will thus be green but it will be harder to parse the logs to find relevant log snippets click on the workflow logs linked above click on the test step of the job so that it is expanded otherwise the grepping will not work grep for test aot autograd exhaustive special erfcx cpu there should be several instances run as flaky tests are rerun in ci from which you can study the logs cc chillee samdow,0
2393,25120014167.0,IssuesEvent,2022-11-09 07:12:37,fieldenms/tg,https://api.github.com/repos/fieldenms/tg,closed,PostgreSQL and SQL Server support for running TG tests,EQL Reliability,"### Description
Currently, H2 is the standard RDBMS for db-driven tests at the platform level. However, as we enhance EQL capabilities, the resultant SQL becomes more sophisticated, and there are now multiple situations where H2 is not capable of executing the generated SQL statements. Also, real-life TG-based systems use either PostgreSQL or SQL Server.
This is why it is required to support both PostgreSQL and SQL Server for running tests in `platform-dao`.
### Expected outcome
Ability to execute db-driven TG tests against PostgreSQL and SQL Server.
",True,"PostgreSQL and SQL Server support for running TG tests - ### Description
Currently, H2 is the standard RDBMS for db-driven tests at the platform level. However, as we enhance EQL capabilities, the resultant SQL becomes more sophisticated, and there are now multiple situations where H2 is not capable of executing the generated SQL statements. Also, real-life TG-based systems use either PostgreSQL or SQL Server.
This is why it is required to support both PostgreSQL and SQL Server for running tests in `platform-dao`.
### Expected outcome
Ability to execute db-driven TG tests against PostgreSQL and SQL Server.
",1,postgresql and sql server support for running tg tests description currently is the standard rdbms for db driven tests at the platform level however as we enhance eql capabilities the resultant sql becomes more sophisticated and there are multiple situation now that is not capable to execute the generated sql statements also real life tg based systems use either postgresql or sql server this is why it is required to support both postgresql and sql server for running tests in platform dao expected outcome ability to execute db driven tg tests against postgresql and sql server ,1
706,9978788331.0,IssuesEvent,2019-07-09 20:49:04,crossplaneio/crossplane,https://api.github.com/repos/crossplaneio/crossplane,closed,Stale Object Modification Error,bug reliability,"* Bug Report
It appears we are hitting a concurrent update issue on a given object:
```
""error"": ""failed to update status of CRD instance postgresql-b00a1c60-5d60-11e9-9440-9cb6d08bde99:
Operation cannot be fulfilled on cloudsqlinstances.database.gcp.crossplane.io \""postgresql-b00a1c60-5d60-11e9-9440-9cb6d08bde99\"":
the object has been modified; please apply your changes to the latest version and try again"",
""stacktrace"":
""github.com/crossplaneio/crossplane/vendor/github.com/go-logr/zapr.(*zapLogger).Error
/home/illya/go/src/github.com/crossplaneio/crossplane/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/crossplaneio/crossplane/pkg/controller/gcp/database.(*Reconciler).Reconcile
/home/illya/go/src/github.com/crossplaneio/crossplane/pkg/controller/gcp/database/cloudsql_instance.go:214\ngithub.com/crossplaneio/crossplane/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/home/illya/go/src/github.com/crossplaneio/crossplane/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:215\ngithub.com/crossplaneio/crossplane/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
/home/illya/go/src/github.com/crossplaneio/crossplane/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\ngithub.com/crossplaneio/crossplane/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
/home/illya/go/src/github.com/crossplaneio/crossplane/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/crossplaneio/crossplane/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil
/home/illya/go/src/github.com/crossplaneio/crossplane/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/crossplaneio/crossplane/vendor/k8s.io/apimachinery/pkg/util/wait.Until
/home/illya/go/src/github.com/crossplaneio/crossplane/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88""
```
I have stumbled over this issue a few times on different resources.
This could be an intermittent issue on my dev environment.
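For reference, the error text above is the standard optimistic-concurrency conflict, and the message itself names the usual fix: apply the change to the latest version and try again. The sketch below is a minimal toy model of that retry pattern; `Resource` and `update_status` are hypothetical stand-ins, not the actual Kubernetes client APIs used here:

```rust
// Toy model of optimistic locking: the server rejects updates built
// against a stale resource version, and the client retries by
// re-reading the latest version before each attempt.

#[derive(Clone, Debug, PartialEq)]
struct Resource {
    version: u64, // resourceVersion used for optimistic locking
    status: u64,  // stand-in for the status payload
}

// Hypothetical server: accepts the update only if versions match.
fn update_status(server: &mut Resource, update: &Resource) -> Result<(), ()> {
    if update.version != server.version {
        return Err(()); // the object has been modified; caller must retry
    }
    server.version += 1;
    server.status = update.status;
    Ok(())
}

// Apply the change to the latest version and try again on conflict.
fn update_with_retry(server: &mut Resource, status: u64, max_attempts: u32) -> bool {
    for _ in 0..max_attempts {
        let mut latest = server.clone(); // re-fetch the current object
        latest.status = status;
        if update_status(server, &latest).is_ok() {
            return true;
        }
    }
    false
}

fn main() {
    let mut server = Resource { version: 3, status: 0 };
    // A client that cached version 2 fails its direct update:
    let stale = Resource { version: 2, status: 7 };
    assert!(update_status(&mut server, &stale).is_err());
    // Re-reading the latest version before each attempt succeeds:
    assert!(update_with_retry(&mut server, 7, 3));
    assert_eq!(server.status, 7);
}
```

A fetch-modify-update loop like this (rather than updating a long-lived cached object) is what the controller-runtime error message is asking for.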
",True,"Stale Object Modification Error - * Bug Report
It appears we are hitting a concurrent update issue on a given object:
```
""error"": ""failed to update status of CRD instance postgresql-b00a1c60-5d60-11e9-9440-9cb6d08bde99:
Operation cannot be fulfilled on cloudsqlinstances.database.gcp.crossplane.io \""postgresql-b00a1c60-5d60-11e9-9440-9cb6d08bde99\"":
the object has been modified; please apply your changes to the latest version and try again"",
""stacktrace"":
""github.com/crossplaneio/crossplane/vendor/github.com/go-logr/zapr.(*zapLogger).Error
/home/illya/go/src/github.com/crossplaneio/crossplane/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/crossplaneio/crossplane/pkg/controller/gcp/database.(*Reconciler).Reconcile
/home/illya/go/src/github.com/crossplaneio/crossplane/pkg/controller/gcp/database/cloudsql_instance.go:214\ngithub.com/crossplaneio/crossplane/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/home/illya/go/src/github.com/crossplaneio/crossplane/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:215\ngithub.com/crossplaneio/crossplane/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1
/home/illya/go/src/github.com/crossplaneio/crossplane/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\ngithub.com/crossplaneio/crossplane/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
/home/illya/go/src/github.com/crossplaneio/crossplane/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/crossplaneio/crossplane/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil
/home/illya/go/src/github.com/crossplaneio/crossplane/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/crossplaneio/crossplane/vendor/k8s.io/apimachinery/pkg/util/wait.Until
/home/illya/go/src/github.com/crossplaneio/crossplane/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88""
```
I have stumbled over this issue a few times on different resources.
This could be an intermittent issue on my dev environment.
",1,stale object modification error bug report it appears we hitting concurrent update issue on a given object error failed to update status of crd instance postgresql operation cannot be fulfilled on cloudsqlinstances database gcp crossplane io postgresql the object has been modified please apply your changes to the latest version and try again stacktrace github com crossplaneio crossplane vendor github com go logr zapr zaplogger error home illya go src github com crossplaneio crossplane vendor github com go logr zapr zapr go ngithub com crossplaneio crossplane pkg controller gcp database reconciler reconcile home illya go src github com crossplaneio crossplane pkg controller gcp database cloudsql instance go ngithub com crossplaneio crossplane vendor sigs io controller runtime pkg internal controller controller processnextworkitem home illya go src github com crossplaneio crossplane vendor sigs io controller runtime pkg internal controller controller go ngithub com crossplaneio crossplane vendor sigs io controller runtime pkg internal controller controller start home illya go src github com crossplaneio crossplane vendor sigs io controller runtime pkg internal controller controller go ngithub com crossplaneio crossplane vendor io apimachinery pkg util wait jitteruntil home illya go src github com crossplaneio crossplane vendor io apimachinery pkg util wait wait go ngithub com crossplaneio crossplane vendor io apimachinery pkg util wait jitteruntil home illya go src github com crossplaneio crossplane vendor io apimachinery pkg util wait wait go ngithub com crossplaneio crossplane vendor io apimachinery pkg util wait until home illya go src github com crossplaneio crossplane vendor io apimachinery pkg util wait wait go i stumbled over this issue few times on different resources this could be an intermittent issue on my dev environment ,1
323076,23932864873.0,IssuesEvent,2022-09-10 20:17:59,neulab/explainaboard_web,https://api.github.com/repos/neulab/explainaboard_web,closed,realpath and wget commands not found,documentation,"I am following the instructions in the README for installation.
On Step 2 `npm run gen-api-code`, after installing node 14+ and npm,
the following error occurs:
```
> explainaboard_web@0.2.0 gen-api-code
> bash openapi/gen_api_layer.sh project
openapi/gen_api_layer.sh: line 6: realpath: command not found
openapi/gen_api_layer.sh: line 16: wget: command not found
Error: Unable to access jarfile openapi/swagger-codegen-cli-3.0.29.jar
rm: Dockerfile: No such file or directory
rm: .gitignore: No such file or directory
rm: .travis.yml: No such file or directory
rm: git_push.sh: No such file or directory
rm: tox.ini: No such file or directory
rm: test-requirements.txt: No such file or directory
rm: .dockerignore: No such file or directory
rm: setup.py: No such file or directory
Error: Unable to access jarfile openapi/swagger-codegen-cli-3.0.29.jar
openapi/gen_api_layer.sh: line 51: cd: frontend/src/clients/openapi: No such file or directory
```",1.0,"realpath and wget commands not found - I am following the instructions in the README for installation.
On Step 2 `npm run gen-api-code`, after installing node 14+ and npm,
the following error occurs:
```
> explainaboard_web@0.2.0 gen-api-code
> bash openapi/gen_api_layer.sh project
openapi/gen_api_layer.sh: line 6: realpath: command not found
openapi/gen_api_layer.sh: line 16: wget: command not found
Error: Unable to access jarfile openapi/swagger-codegen-cli-3.0.29.jar
rm: Dockerfile: No such file or directory
rm: .gitignore: No such file or directory
rm: .travis.yml: No such file or directory
rm: git_push.sh: No such file or directory
rm: tox.ini: No such file or directory
rm: test-requirements.txt: No such file or directory
rm: .dockerignore: No such file or directory
rm: setup.py: No such file or directory
Error: Unable to access jarfile openapi/swagger-codegen-cli-3.0.29.jar
openapi/gen_api_layer.sh: line 51: cd: frontend/src/clients/openapi: No such file or directory
```",0,realpath and wget commands not found i am following the instructions on readme for installation on step npm run gen api code after installing node and npm the following error occurs explainaboard web gen api code bash openapi gen api layer sh project openapi gen api layer sh line realpath command not found openapi gen api layer sh line wget command not found error unable to access jarfile openapi swagger codegen cli jar rm dockerfile no such file or directory rm gitignore no such file or directory rm travis yml no such file or directory rm git push sh no such file or directory rm tox ini no such file or directory rm test requirements txt no such file or directory rm dockerignore no such file or directory rm setup py no such file or directory error unable to access jarfile openapi swagger codegen cli jar openapi gen api layer sh line cd frontend src clients openapi no such file or directory ,0
155808,13633492873.0,IssuesEvent,2020-09-24 21:32:25,Programming-Engineering-Pmi-33/Music-Quiz,https://api.github.com/repos/Programming-Engineering-Pmi-33/Music-Quiz,closed,[UI] Create Wireframes,documentation,"**Motivation:**
To understand all aspects of the application's appearance, we need design templates (wireframes) for each page.
**Description:**
The following pages must be described:
- [x] Authorization (+ registration).
- [x] Test creation.
- [x] Test list.
- [x] Test settings.
- [x] Rating table (per specific test and overall).
- [x] Application ratings board.
**Acceptance criteria**
1. All pages contain the elements described in the [requirements](https://github.com/Programming-Engineering-Pmi-33/Music-Quiz/wiki/%D0%A4%D1%83%D0%BD%D0%BA%D1%86%D1%96%D0%BE%D0%BD%D0%B0%D0%BB%D1%8C%D0%BD%D1%96-%D1%82%D0%B0-%D0%BD%D0%B5%D1%84%D1%83%D0%BD%D0%BA%D1%86%D1%96%D0%BE%D0%BD%D0%B0%D0%BB%D1%8C%D0%BD%D1%96-%D0%B2%D0%B8%D0%BC%D0%BE%D0%B3%D0%B8-%D0%B4%D0%BE-%D0%BF%D1%80%D0%BE%D1%94%D0%BA%D1%82%D1%83) and the [usecase diagram](https://github.com/Programming-Engineering-Pmi-33/Music-Quiz/wiki/Usecase-%D0%B4%D1%96%D0%B0%D0%B3%D1%80%D0%B0%D0%BC%D0%B0).
1. The design templates are attached in the comments as screenshots or a link to a document.
**Additional information**
1. After review, move the attached materials to the [wiki](https://github.com/Programming-Engineering-Pmi-33/Music-Quiz/wiki/Wireframes).
1. If some pages are missing from the list in the description, attach them in a comment and add the corresponding templates.",1.0,"[UI] Create Wireframes - **Motivation:**
To understand all aspects of the application's appearance, we need design templates (wireframes) for each page.
**Description:**
The following pages must be described:
- [x] Authorization (+ registration).
- [x] Test creation.
- [x] Test list.
- [x] Test settings.
- [x] Rating table (per specific test and overall).
- [x] Application ratings board.
**Acceptance criteria**
1. All pages contain the elements described in the [requirements](https://github.com/Programming-Engineering-Pmi-33/Music-Quiz/wiki/%D0%A4%D1%83%D0%BD%D0%BA%D1%86%D1%96%D0%BE%D0%BD%D0%B0%D0%BB%D1%8C%D0%BD%D1%96-%D1%82%D0%B0-%D0%BD%D0%B5%D1%84%D1%83%D0%BD%D0%BA%D1%86%D1%96%D0%BE%D0%BD%D0%B0%D0%BB%D1%8C%D0%BD%D1%96-%D0%B2%D0%B8%D0%BC%D0%BE%D0%B3%D0%B8-%D0%B4%D0%BE-%D0%BF%D1%80%D0%BE%D1%94%D0%BA%D1%82%D1%83) and the [usecase diagram](https://github.com/Programming-Engineering-Pmi-33/Music-Quiz/wiki/Usecase-%D0%B4%D1%96%D0%B0%D0%B3%D1%80%D0%B0%D0%BC%D0%B0).
1. The design templates are attached in the comments as screenshots or a link to a document.
**Additional information**
1. After review, move the attached materials to the [wiki](https://github.com/Programming-Engineering-Pmi-33/Music-Quiz/wiki/Wireframes).
1. If some pages are missing from the list in the description, attach them in a comment and add the corresponding templates.",0, створити wireframes мотивація для розуміння всіх аспектів що стосуються зовнішнього вигляду застосунку нам необхідні шаблони дизайну wireframes для кожної сторінки опис наступні сторінки повинні бути описані авторизація реєстрація створення тесту список тестів налаштування тесту рейтингова таблиця по конкретному тесту та загальна дошка оцінок застосунку критерій прийняття всі сторінки містять елементи які описані у та шаблони дизайну є прикріплені у коментарях у формі скріншотів або посилання до документ додаткова інформація після рев ю прикріплені матеріали перемістити у якщо є деякі сторінки відсутні у списку в описі то їх прикріпити у коментарі та додати відповідні шаблони ,0
130687,12452932026.0,IssuesEvent,2020-05-27 13:06:59,codesquad-member-2020/airbnb-10,https://api.github.com/repos/codesquad-member-2020/airbnb-10,closed,[2020.05.27] Daily Scrum,documentation,"## Done yesterday
- Implemented the mock API for the reservation modal
- Defined the domain objects and DTOs needed for the mock stage
## To do today
- Implement the reservation mock API
- Implement the real API for accommodation lookup / filtering
",1.0,"[2020.05.27] Daily Scrum - ## Done yesterday
- Implemented the mock API for the reservation modal
- Defined the domain objects and DTOs needed for the mock stage
## To do today
- Implement the reservation mock API
- Implement the real API for accommodation lookup / filtering
",0, 데일리 스크럼 어제 한 일 예약하기 모달창 목업 api 구현 목업 단계에서 필요한 domain과 dto 정의 오늘 할 일 예약하기 목업 api 구현 숙소 조회 필터링 실제 api 구현 ,0
2687,27063419851.0,IssuesEvent,2023-02-13 21:45:56,NVIDIA/spark-rapids,https://api.github.com/repos/NVIDIA/spark-rapids,closed,[BUG] CUDA error when casting large column vector from long to string,bug reliability,"**Describe the bug**
I was working on a repro case for https://github.com/NVIDIA/spark-rapids/issues/6431 and ran into a CUDA error when casting longs to strings.
```
ai.rapids.cudf.CudaFatalException: for_each: failed to synchronize: cudaErrorIllegalAddress: an illegal memory access was encountered
at ai.rapids.cudf.ColumnView.castTo(Native Method)
at ai.rapids.cudf.ColumnView.castTo(ColumnView.java:1876)
at ai.rapids.cudf.ColumnVector.castTo(ColumnVector.java:790)
at com.nvidia.spark.rapids.jni.CastStringsTest.castLongToString(CastStringsTest.java:43)
```
**Steps/Code to reproduce bug**
Add this test in `spark-rapids-jni` project.
``` java
@Test
void castLongToString() {
int n = 1_000_000_000;
long array[] = new long[n];
for (int i = 0; i < n; i++) {
array[i] = i;
}
try (ColumnVector cv = ColumnVector.fromLongs(array);
ColumnVector cv2 = cv.castTo(DType.STRING)) {
// success
}
}
```
**Expected behavior**
Should not fail in this way. I would understand getting an OOM error instead.
**Environment details (please complete the following information)**
Desktop.
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 495.29.05 Driver Version: 495.29.05 CUDA Version: 11.5 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Quadro RTX 6000 On | 00000000:17:00.0 Off | Off |
| 44% 67C P2 110W / 260W | 970MiB / 24220MiB | 100% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
```
**Additional context**
None
",True,"[BUG] CUDA error when casting large column vector from long to string - **Describe the bug**
I was working on a repro case for https://github.com/NVIDIA/spark-rapids/issues/6431 and ran into a CUDA error when casting longs to strings.
```
ai.rapids.cudf.CudaFatalException: for_each: failed to synchronize: cudaErrorIllegalAddress: an illegal memory access was encountered
at ai.rapids.cudf.ColumnView.castTo(Native Method)
at ai.rapids.cudf.ColumnView.castTo(ColumnView.java:1876)
at ai.rapids.cudf.ColumnVector.castTo(ColumnVector.java:790)
at com.nvidia.spark.rapids.jni.CastStringsTest.castLongToString(CastStringsTest.java:43)
```
**Steps/Code to reproduce bug**
Add this test in `spark-rapids-jni` project.
``` java
@Test
void castLongToString() {
int n = 1_000_000_000;
long array[] = new long[n];
for (int i = 0; i < n; i++) {
array[i] = i;
}
try (ColumnVector cv = ColumnVector.fromLongs(array);
ColumnVector cv2 = cv.castTo(DType.STRING)) {
// success
}
}
```
**Expected behavior**
Should not fail in this way. I would understand getting an OOM error instead.
**Environment details (please complete the following information)**
Desktop.
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 495.29.05 Driver Version: 495.29.05 CUDA Version: 11.5 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Quadro RTX 6000 On | 00000000:17:00.0 Off | Off |
| 44% 67C P2 110W / 260W | 970MiB / 24220MiB | 100% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
```
**Additional context**
None
",1, cuda error when casting large column vector from long to string describe the bug i was working on a repro case for and ran into a cuda error when casting longs to strings ai rapids cudf cudafatalexception for each failed to synchronize cudaerrorillegaladdress an illegal memory access was encountered at ai rapids cudf columnview castto native method at ai rapids cudf columnview castto columnview java at ai rapids cudf columnvector castto columnvector java at com nvidia spark rapids jni caststringstest castlongtostring caststringstest java steps code to reproduce bug add this test in spark rapids jni project java test void castlongtostring int n long array new long for int i i n i array i try columnvector cv columnvector fromlongs array columnvector cv castto dtype string success expected behavior should not fail in this way i would understand getting an oom error instead environment details please complete the following information desktop nvidia smi driver version cuda version gpu name persistence m bus id disp a volatile uncorr ecc fan temp perf pwr usage cap memory usage gpu util compute m mig m quadro rtx on off off default n a additional context none ,1
193783,6888086470.0,IssuesEvent,2017-11-22 03:24:46,xcat2/xcat-core,https://api.github.com/repos/xcat2/xcat-core,closed,[OpenBMC] bmcdiscover --check does not seem to work,component: openbmc priority:normal sprint1,"
```
[root@briggs01 ~]# bmcdiscover -i 172.12.139.117 --check -u root -p 0penBmc
Warning: Wrong BMC password
```
If this is not supported by the BMC, we need to block this function and say it's not supported. The current message is misleading: it indicates that the password is incorrect.
@zet809 We should add this as a known issue for 2.13.8 ",1.0,"[OpenBMC] bmcdiscover --check does not seem to work -
```
[root@briggs01 ~]# bmcdiscover -i 172.12.139.117 --check -u root -p 0penBmc
Warning: Wrong BMC password
```
If this is not supported by the BMC, we need to block this function and say it's not supported. The current message is misleading: it indicates that the password is incorrect.
@zet809 We should add this as a known issue for 2.13.8 ",0, bmcdiscover check does not seem to work bmcdiscover i check u root p warning wrong bmc password if this is not supported from bmc we need to block this function and say it s not supported this is misleading and indicates that the password is incorrect we should add this as a known issue for ,0
12930,8041770259.0,IssuesEvent,2018-07-31 05:09:06,OctopusDeploy/Issues,https://api.github.com/repos/OctopusDeploy/Issues,closed,The project dashboard is slow due to the DeploymentSummary view,area/performance kind/bug,"CPU Perf tracking issue: #4681
The other half of this particular API performance #4762
When querying the `DeploymentSummary` view, the SQL server sometimes decides to calculate the latest and previous deployments on the whole deployments table instead of just those that are required according to the filter. This is a problem when there are many deployments.
This view is used by the project dashboard and accounts for 45% of the time and DB load.",True,"The project dashboard is slow due to the DeploymentSummary view - CPU Perf tracking issue: #4681
The other half of this particular API performance #4762
When querying the `DeploymentSummary` view, the SQL server sometimes decides to calculate the latest and previous deployments on the whole deployments table instead of just those that are required according to the filter. This is a problem when there are many deployments.
This view is used by the project dashboard and accounts for 45% of the time and DB load.",0,the project dashboard is slow due to the deploymentsummary view cpu perf tracking issue the other half of this particular api performance when querying the deploymentsummary view the sql server sometimes decides to calculate the latest and previous deployments on the whole deployments table instead of just those that are required according to the filter this is a problem when there are many deployments this view is used by the project dashboard and accounts for of the time and db load ,0
82154,7818792063.0,IssuesEvent,2018-06-13 13:23:28,cmu-phil/tetrad,https://api.github.com/repos/cmu-phil/tetrad,closed,Tabular Comparison: Null Names,in progress testing,"Names for target graphs and true graphs are null in comparison tab.

",1.0,"Tabular Comparison: Null Names - Names for target graphs and true graphs are null in comparison tab.

",0,tabular comparison null names names for target graphs and true graphs are null in comparison tab ,0
112476,9576251639.0,IssuesEvent,2019-05-07 08:39:57,nucypher/nucypher,https://api.github.com/repos/nucypher/nucypher,closed,Simplify test workflow in CircleCI,Enhancement Test 🔍,"The current approach of defining a different CI job for each test module, although semantically correct, seems very inefficient. I propose to unify all the CI jobs after the pip/pipenv phase, up to and including the CLI job, into a single ""tests"" job, using all the parallelism currently available (14x).
The workflow would be something like this:
```
pip/pipenv --> tests --> docs/demos/gas --> test_build
```
Bonus: this way, we prevent some tests from being silently skipped. This has happened at least twice (once with contract tests, once with learning tests), and it's very easy for this to go undetected. ",1.0,"Simplify test workflow in CircleCI - The current approach of defining a different CI job for each test module, although semantically correct, seems very inefficient. I propose to unify all the CI jobs after the pip/pipenv phase, up to and including the CLI job, into a single ""tests"" job, using all the parallelism currently available (14x).
The workflow would be something like this:
```
pip/pipenv --> tests --> docs/demos/gas --> test_build
```
Bonus: this way, we prevent some tests from being silently skipped. This has happened at least twice (once with contract tests, once with learning tests), and it's very easy for this to go undetected. ",0,simplify test workflow in circleci current approach of defining a different ci job for each test module although semantically correct seems very inefficient i propose to unify all the ci jobs after the pip pipenv phase until the cli job included in a single tests job using all the parallelism currently available the workflow would be something like this pip pipenv tests docs demos gas test build bonus this way we prevent that some tests are silently not executed this has happened at least twice one with contract tests another with learning tests and it s very easy to go undetected ,0
11957,7747781860.0,IssuesEvent,2018-05-30 05:36:22,pingcap/tikv,https://api.github.com/repos/pingcap/tikv,opened,sending messages in batch ,performance raft,"I wrote a simple test for gRPC - send 10 msgs with buffer hint and send one msg which contains 10 msgs in batch.
The proto is:
```
syntax = ""proto3"";
package raft;
message Peer {
uint64 id = 1;
uint64 store_id = 2;
bool is_learner = 3;
}
message RegionEpoch {
uint64 conf_ver = 1;
uint64 version = 2;
}
message Heartbeat {
uint64 to = 1;
uint64 term = 2;
uint64 log_term = 3;
uint64 index = 4;
uint64 commit = 5;
}
message Message {
uint64 region_id = 1;
Peer from_peer = 2;
Peer to_peer = 3;
RegionEpoch epoch = 4;
Heartbeat msg = 5;
}
message Messages {
repeated Message msgs = 1;
}
message Done {
}
service Raft {
rpc One(stream Message) returns (Done) {}
rpc Multi(stream Messages) returns (Done) {}
}
```
The server implementation is very simple - receive all messages and reply with one Done msg.
The client looks like this:
```rust
fn test_one(num: usize, client: &RaftClient) {
let t = Instant::now();
let (mut sink, receiver) = client.one().unwrap();
for _ in 0..num {
for _ in 0..9 {
sink = sink.send((new_msg(), WriteFlags::default().buffer_hint(true)))
.wait()
.unwrap();
}
sink = sink.send((new_msg(), WriteFlags::default()))
.wait()
.unwrap();
}
future::poll_fn(|| sink.close()).wait().unwrap();
receiver.wait().unwrap();
println!(""one time {:?}"", t.elapsed())
}
fn test_multi(num: usize, client: &RaftClient) {
let t = Instant::now();
let (mut sink, receiver) = client.multi().unwrap();
for _ in 0..num {
sink = sink.send((new_msgs(), WriteFlags::default()))
.wait()
.unwrap();
}
future::poll_fn(|| sink.close()).wait().unwrap();
receiver.wait().unwrap();
println!(""multi time {:?}"", t.elapsed())
}
```
Then I use `num = 100000` for the test and get this result:
```
multi time Duration { secs: 3, nanos: 135709348 }
one time Duration { secs: 18, nanos: 973024439 }
```
As you can see, batching reduces the total time dramatically - roughly six times faster here (about 3s vs 19s). So I think sending msgs in batch has a big benefit, especially for TiKV <-> TiKV.
But we also need to do a benchmark for it, and we must also consider backward compatibility. ",True,"sending messages in batch - I wrote a simple test for gRPC - send 10 msgs with buffer hint and send one msg which contains 10 msgs in batch.
The proto is:
```
syntax = ""proto3"";
package raft;
message Peer {
uint64 id = 1;
uint64 store_id = 2;
bool is_learner = 3;
}
message RegionEpoch {
uint64 conf_ver = 1;
uint64 version = 2;
}
message Heartbeat {
uint64 to = 1;
uint64 term = 2;
uint64 log_term = 3;
uint64 index = 4;
uint64 commit = 5;
}
message Message {
uint64 region_id = 1;
Peer from_peer = 2;
Peer to_peer = 3;
RegionEpoch epoch = 4;
Heartbeat msg = 5;
}
message Messages {
repeated Message msgs = 1;
}
message Done {
}
service Raft {
rpc One(stream Message) returns (Done) {}
rpc Multi(stream Messages) returns (Done) {}
}
```
The server implementation is very simple - receive all messages and reply with one Done msg.
The client looks like this:
```rust
fn test_one(num: usize, client: &RaftClient) {
let t = Instant::now();
let (mut sink, receiver) = client.one().unwrap();
for _ in 0..num {
for _ in 0..9 {
sink = sink.send((new_msg(), WriteFlags::default().buffer_hint(true)))
.wait()
.unwrap();
}
sink = sink.send((new_msg(), WriteFlags::default()))
.wait()
.unwrap();
}
future::poll_fn(|| sink.close()).wait().unwrap();
receiver.wait().unwrap();
println!(""one time {:?}"", t.elapsed())
}
fn test_multi(num: usize, client: &RaftClient) {
let t = Instant::now();
let (mut sink, receiver) = client.multi().unwrap();
for _ in 0..num {
sink = sink.send((new_msgs(), WriteFlags::default()))
.wait()
.unwrap();
}
future::poll_fn(|| sink.close()).wait().unwrap();
receiver.wait().unwrap();
println!(""multi time {:?}"", t.elapsed())
}
```
Then I use `num = 100000` for test and get the result:
```
multi time Duration { secs: 3, nanos: 135709348 }
one time Duration { secs: 18, nanos: 973024439 }
```
As you can see, batching reduces the total time dramatically - roughly six times faster here (about 3s vs 19s). So I think sending msgs in batch has a big benefit, especially for TiKV <-> TiKV.
But we also need to do a benchmark for it, and we must also consider backward compatibility. ",0,sending messages in batch i wrote a simple test for grpc send msgs with buffer hint and send one msg which contains msgs in batch the proto is syntax package raft message peer id store id bool is learner message regionepoch conf ver version message heartbeat to term log term index commit message message region id peer from peer peer to peer regionepoch epch heartbeat msg message messages repeated message msgs message done service raft rpc one stream message returns done rpc multi stream messages returns done the server implementation is very easy receive all messages and reply one done msg the client looks rust fn test one num usize client raftclient let t instant now let mut sink receiver client one unwrap for in num for in sink sink send new msg writeflags default buffer hint true wait unwrap sink sink send new msg writeflags default wait unwrap future poll fn sink close wait unwrap receiver wait unwrap println one time t elapsed fn test multi num usize client raftclient let t instant now let mut sink receiver client multi unwrap for in num sink sink send new msgs writeflags default wait unwrap future poll fn sink close wait unwrap receiver wait unwrap println multi time t elapsed then i use num for test and get the result multi time duration secs nanos one time duration secs nanos as you can see using batch can reduce the total time too much maybe times shorter so i think it has a big benefit to send msgs in batch especially for tikv tikv but we also need to do a benchmark for it and we must also consider backward compatibility ,0
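The speedup reported above comes from amortizing a fixed per-send overhead across many messages. A toy Python model (not gRPC - the costs and function names are made up for illustration) shows why batches of 10 land in the same ballpark:

```python
# Toy model: batching amortizes a fixed per-send overhead.
# All names and cost constants here are hypothetical, for illustration only.

def sends_needed(total_msgs: int, batch_size: int) -> int:
    """Number of sink.send() calls required to ship total_msgs messages."""
    return -(-total_msgs // batch_size)  # ceiling division

def total_cost(total_msgs: int, batch_size: int,
               per_send_overhead: float = 10.0, per_msg_cost: float = 1.0) -> float:
    """Per-message work is fixed; only the per-send overhead is amortized."""
    return (sends_needed(total_msgs, batch_size) * per_send_overhead
            + total_msgs * per_msg_cost)

one_at_a_time = total_cost(100_000, 1)   # like the One(stream Message) RPC
batched = total_cost(100_000, 10)        # like Multi(stream Messages)
print(one_at_a_time / batched)  # 5.5 - in the ballpark of the measured 3s vs 19s
```

The real ratio depends on how large the per-send overhead is relative to per-message work, which is exactly why the issue asks for a proper benchmark.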
1299,14705127835.0,IssuesEvent,2021-01-04 17:35:48,argoproj/argo,https://api.github.com/repos/argoproj/argo,closed,v2.12: `pod deleted` + `re-apply` error = errored workflow,bug epic/reliability regression,"## Summary
In v2.12, we can get a `pod deleted` error under high load. I believe this is caused by factors interplaying:
1. The workflow completes successfully.
2. A pod is then deleted during clean-up, we'll get a re-queue of the workflow.
3. On the next reconciliation the informer returns the same (and now out of date) workflow as the last reconciliation.
4. The pod has been deleted, so the reconciliation marks the pod as `Error: pod deleted`. The workflow is marked as errored.
5. Update fails due to resource version check.
6. Re-apply overwrites the previously successful workflow with an error workflow.
7. If pod GC strategy is on-success, then the TTL controller will error.
Causes:
* `reapplyUpdate` will happily overwrite a completed workflow or node.
* v2.12 added indexers. I _think_ that when any of these errors, the workflow update is lost.
* I _think_ that `DEFAULT_REQUEUE_TIME` should be longer, up to 10s.
Solution:
* Modify `reapplyUpdate` to check whether it is overwriting a successful workflow or any successful nodes, and error out if so. This will prevent any future cases of succeeded workflows being marked as errored.
* Modify the indexers so that they can never return errors. This will prevent the conflict error.
* I don't think that the grace period for recently created pods is needed after these changes. Mark it with a `TODO` to remove.
Relates to #4795, #4634, #4794
",True,"v2.12: `pod deleted` + `re-apply` error = errored workflow - ## Summary
In v2.12, we can get a `pod deleted` error under high load. I believe this is caused by factors interplaying:
1. The workflow completes successfully.
2. A pod is then deleted during clean-up, we'll get a re-queue of the workflow.
3. On the next reconciliation the informer returns the same (and now out of date) workflow as the last reconciliation.
4. The pod has been deleted, so the reconciliation marks the pod as `Error: pod deleted`. The workflow is marked as errored.
5. Update fails due to resource version check.
6. Re-apply overwrites the previously successful workflow with an error workflow.
7. If pod GC strategy is on-success, then the TTL controller will error.
Causes:
* `reapplyUpdate` will happily overwrite a completed workflow or node.
* v2.12 added indexers. I _think_ when any of these errors, the workflow update is lost
* I _think_ that `DEFAULT_REQUEUE_TIME` should be longer, up to 10s.
Solution:
* Modify `reapplyUpdate` to check whether it is overwriting a successful workflow or any successful nodes, and error out if so. This will prevent any future cases of succeeded workflows being marked as errored.
* Modify the indexers so that they can never return errors. This will prevent the conflict error.
* I don't think that the grace period for recently created pods is needed after these changes. Mark it with a `TODO` to remove.
Relates to #4795, #4634, #4794
",1, pod deleted re apply error errored workflow summary in we can get a pod deleted error under high load i believe this is caused by factors interplaying the workflow completes successfully a pod is then deleted during clean up we ll get a re queue of the workflow on the next reconciliation the informer returns the same and now out of date workflow as the last reconciliation the pod has been deleted so the reconciliation marks the pod as error pod deleted the workflow is marked as errored update fails due to resource version check re apply overwrites the previously successful workflow with an error workflow if pod gc strategy is on success then the ttl controller will error causes reapplyupdate will happily overwrite a completed workflow or node added indexers i think when any of these errors the workflow update is lost i think that default requeue time should be longer up to solution modify reapplyupdate to check to see it is overwriting a successful workflow or any successful nodes error out this will prevent any future cases of succeeded workflows being marked as error modify the indexers so that they can never return errors this will prevent conflict error i don t think that the grace period for recently created pods is needed after these changes mark it with at todo to remove relates to ,1
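The proposed `reapplyUpdate` guard can be sketched in a few lines. This is a toy Python model, not the Argo Go code - the dict-based workflow shape and the `TransientError` name are hypothetical:

```python
# Sketch of the proposed safety check: refuse to let a re-applied update
# overwrite a workflow that has already completed successfully.
# The dict-based "workflow" shape and names are hypothetical, for illustration.

class TransientError(Exception):
    """Raised so the controller re-queues instead of clobbering good state."""

def reapply_update(current: dict, update: dict) -> dict:
    # If the stored workflow already succeeded, an update marking it Error
    # is stale (e.g. built from an out-of-date informer cache) - reject it.
    if current.get("phase") == "Succeeded" and update.get("phase") == "Error":
        raise TransientError("refusing to overwrite a succeeded workflow")
    return update

current = {"phase": "Succeeded"}
stale = {"phase": "Error"}
try:
    reapply_update(current, stale)
except TransientError as e:
    print(e)  # refusing to overwrite a succeeded workflow
```

Erroring out rather than merging means a later reconciliation with fresh informer state can still proceed normally.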
549630,16096091509.0,IssuesEvent,2021-04-27 00:06:15,googleapis/synthtool,https://api.github.com/repos/googleapis/synthtool,closed,Snippet parser cannot handle duplicated region tags,priority: p2 type: bug,Duplicated region tags should append to the same tag (as an alternative to using the exclusion tags).,1.0,Snippet parser cannot handle duplicated region tags - Duplicated region tags should append to the same tag (as an alternative to using the exclusion tags).,0,snippet parser cannot handle duplicated region tags duplicated region tags should append to the same tag as an alternative to using the exclusion tags ,0
2651,26863878128.0,IssuesEvent,2023-02-03 21:11:22,Azure/azure-sdk-for-java,https://api.github.com/repos/Azure/azure-sdk-for-java,closed,Investigate Registering Strongly-Typed Header Constructors to azure-core,Client Azure.Core pillar-reliability,"Investigate adding a concept of registering constructors for strongly-typed header classes into an `azure-core` maintained constructor cache. The strongly-typed header class would register its constructor to `azure-core` when the class is loaded, in the static initializer. Having the strongly-typed header class register its constructor will remove the need for `azure-core` to use reflection to find the constructor and create the class, therefore removing a location where restrictive reflection access could result in an exception being thrown (for example when a SecurityManager is being used). Only strongly-typed headers that have picked up the changes for using `HttpHeaders` directly instead of using jackson-databind should be registered (this issue outlines that https://github.com/Azure/azure-sdk-for-java/issues/27961). The following is a potential example of how this would look:
```java
public final class StronglyTypedHeadersHandler {
private static final Map<Class<?>, Function<HttpHeaders, ?>> CACHE = new ConcurrentHashMap<>();
public static void register(Class<?> type, Function<HttpHeaders, ?> constructor) {
CACHE.put(type, constructor);
}
@SuppressWarnings(""unchecked"")
public static <T> T construct(Class<T> type, HttpHeaders headers) {
var constructor = CACHE.get(type);
if (constructor == null) {
throw new IllegalStateException(""No constructor found for class '"" + type + ""'."");
}
return (T) constructor.apply(headers);
}
}
public final class TestClass {
static {
StronglyTypedHeadersHandler.register(TestClass.class, TestClass::new);
}
TestClass(HttpHeaders rawHeaders) {
// Do stuff.
}
}
```
This concept could be extended to other locations where reflection usage could be reduced by registering common constructors or methods.",True,"Investigate Registering Strongly-Typed Header Constructors to azure-core - Investigate adding a concept of registering constructors for strongly-typed header classes into an `azure-core` maintained constructor cache. The strongly-typed header class would register its constructor to `azure-core` when the class is loaded, in the static initializer. Having the strongly-typed header class register its constructor will remove the need for `azure-core` to use reflection to find the constructor and create the class, therefore removing a location where restrictive reflection access could result in an exception being thrown (for example when a SecurityManager is being used). Only strongly-typed headers that have picked up the changes for using `HttpHeaders` directly instead of using jackson-databind should be registered (this issue outlines that https://github.com/Azure/azure-sdk-for-java/issues/27961). The following is a potential example of how this would look:
```java
public final class StronglyTypedHeadersHandler {
private static final Map<Class<?>, Function<HttpHeaders, ?>> CACHE = new ConcurrentHashMap<>();
public static void register(Class<?> type, Function<HttpHeaders, ?> constructor) {
CACHE.put(type, constructor);
}
@SuppressWarnings(""unchecked"")
public static <T> T construct(Class<T> type, HttpHeaders headers) {
var constructor = CACHE.get(type);
if (constructor == null) {
throw new IllegalStateException(""No constructor found for class '"" + type + ""'."");
}
return (T) constructor.apply(headers);
}
}
public final class TestClass {
static {
StronglyTypedHeadersHandler.register(TestClass.class, TestClass::new);
}
TestClass(HttpHeaders rawHeaders) {
// Do stuff.
}
}
```
This concept could be extended to other locations where reflection usage could be reduced by registering common constructors or methods.",1,investigate registering strongly typed header constructors to azure core investigate adding a concept of registering constructors for strongly typed header classes into an azure core maintained constructor cache the strongly typed header class would register its constructor to azure core when the class is loaded in the static initializer having the strongly typed header class register its constructor will remove the need for azure core to use reflection to find the constructor and create the class therefore removing a location where restrictive reflection access could result in an exception being thrown for example when a securitymanager is being used only strongly typed headers that have picked up the changes for using httpheaders directly instead of using jackson databind should be registered this issue outlines that the following is a potential example of how this would look java public final class stronglytypedheadershandler private static final map function cache new concurrenthashmap public static void register class type function constructor cache put type constructor suppresswarnings unchecked public static t construct class type httpheaders headers var constructor cache get type if constructor null throw new illegalstateexception no constructor found for class type return t constructor apply headers public final class testclass static stronglytypedheadershandler register testclass class testclass new testclass httpheaders rawheaders do stuff this concept could be extended to other locations where reflection usage could be reduced by registering common constructors or methods ,1
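The registration pattern above is language-agnostic. A rough Python analogue (all names here are made up for illustration) shows the shape: each type registers a factory keyed by its class, so no reflection is needed at construction time:

```python
# Rough Python analogue of the constructor-registry idea above.
# Class and function names are made up for illustration.

_REGISTRY = {}

def register(cls, factory):
    """Map a type to the callable that builds it from raw headers."""
    _REGISTRY[cls] = factory

def construct(cls, raw_headers):
    """Look up the registered factory; no reflection, just a dict lookup."""
    factory = _REGISTRY.get(cls)
    if factory is None:
        raise LookupError(f"no constructor registered for {cls.__name__}")
    return factory(raw_headers)

class TestHeaders:
    def __init__(self, raw_headers):
        self.raw = raw_headers

# Registration happens once, analogous to the Java static initializer.
register(TestHeaders, TestHeaders)

h = construct(TestHeaders, {"Content-Type": "application/json"})
print(h.raw["Content-Type"])  # application/json
```

As in the Java version, the failure mode for an unregistered type is an explicit error rather than a reflective fallback.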
2047,22948073759.0,IssuesEvent,2022-07-19 03:30:13,ppy/osu-framework,https://api.github.com/repos/ppy/osu-framework,closed,"Java.Lang.LinkageError: no non-static method ""Landroid/opengl/GLSurfaceView;.setDefaultFocusHighlightEnabled(Z)V""",platform:android good first issue type:reliability,"`setDefaultFocusHighlightEnabled` [is only available on API level 26 or higher.](https://developer.android.com/reference/android/view/View#setDefaultFocusHighlightEnabled(boolean)) The following set should therefore be guarded with an appropriate version check.
https://github.com/ppy/osu-framework/blob/582170eec6c01e345a5178a1e55f3f686a1a30cd/osu.Framework.Android/AndroidGameView.cs#L104
---
Sentry Issue: [OSU-35Q](https://sentry.ppy.sh/organizations/ppy/issues/3718/?referrer=github_integration)
```
Java.Lang.LinkageError: no non-static method ""Landroid/opengl/GLSurfaceView;.setDefaultFocusHighlightEnabled(Z)V""
?, in JniMethodInfo InstanceMethods.GetMethodID(JniObjectReference type, string name, string signature)
?, in JniMethodInfo JniType.GetInstanceMethod(string name, string signature)
?, in JniMethodInfo JniInstanceMethods.GetMethodInfo(string encodedMember)
?, in void JniInstanceMethods.InvokeVirtualVoidMethod(string encodedMember, IJavaPeerable self, JniArgumentValue* parameters)
?, in void AndroidGameView.init()
...
(5 additional frame(s) were not displayed)
```",True,"Java.Lang.LinkageError: no non-static method ""Landroid/opengl/GLSurfaceView;.setDefaultFocusHighlightEnabled(Z)V"" - `setDefaultFocusHighlightEnabled` [is only available on API level 26 or higher.](https://developer.android.com/reference/android/view/View#setDefaultFocusHighlightEnabled(boolean)) The following set should therefore be guarded with an appropriate version check.
https://github.com/ppy/osu-framework/blob/582170eec6c01e345a5178a1e55f3f686a1a30cd/osu.Framework.Android/AndroidGameView.cs#L104
---
Sentry Issue: [OSU-35Q](https://sentry.ppy.sh/organizations/ppy/issues/3718/?referrer=github_integration)
```
Java.Lang.LinkageError: no non-static method ""Landroid/opengl/GLSurfaceView;.setDefaultFocusHighlightEnabled(Z)V""
?, in JniMethodInfo InstanceMethods.GetMethodID(JniObjectReference type, string name, string signature)
?, in JniMethodInfo JniType.GetInstanceMethod(string name, string signature)
?, in JniMethodInfo JniInstanceMethods.GetMethodInfo(string encodedMember)
?, in void JniInstanceMethods.InvokeVirtualVoidMethod(string encodedMember, IJavaPeerable self, JniArgumentValue* parameters)
?, in void AndroidGameView.init()
...
(5 additional frame(s) were not displayed)
```",1,java lang linkageerror no non static method landroid opengl glsurfaceview setdefaultfocushighlightenabled z v setdefaultfocushighlightenabled the following set should therefore be guarded with an appropriate version check sentry issue java lang linkageerror no non static method landroid opengl glsurfaceview setdefaultfocushighlightenabled z v in jnimethodinfo instancemethods getmethodid jniobjectreference type string name string signature in jnimethodinfo jnitype getinstancemethod string name string signature in jnimethodinfo jniinstancemethods getmethodinfo string encodedmember in void jniinstancemethods invokevirtualvoidmethod string encodedmember ijavapeerable self jniargumentvalue parameters in void androidgameview init additional frame s were not displayed ,1
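The suggested fix is a runtime API-level guard around the call. A toy Python sketch (the `FakeView` class and method names are stand-ins, not the real Xamarin/Android API) illustrates the idea:

```python
# Sketch of the suggested fix: gate the call on the device API level.
# FakeView and its method are stand-ins for illustration only.
ANDROID_O = 26  # setDefaultFocusHighlightEnabled exists from API 26

class FakeView:
    def __init__(self):
        self.calls = []
    def set_default_focus_highlight_enabled(self, enabled):
        self.calls.append(enabled)

def init_view(view, sdk_int):
    # Guard: on API < 26 the method does not exist, so skip it entirely
    # instead of triggering the LinkageError seen in the stack trace.
    if sdk_int >= ANDROID_O:
        view.set_default_focus_highlight_enabled(False)

old, new = FakeView(), FakeView()
init_view(old, 25)   # pre-Oreo device: call is skipped
init_view(new, 28)   # modern device: call goes through
print(old.calls, new.calls)  # [] [False]
```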
152286,5843681442.0,IssuesEvent,2017-05-10 09:47:24,kubernetes/kubernetes,https://api.github.com/repos/kubernetes/kubernetes,closed,kubernetes thinks pod is running even after the node was deleted explicitly,kind/support priority/awaiting-more-evidence sig/scheduling team/control-plane (deprecated - do not use),"```
kubectl delete node 192.168.78.14
kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE NODE
default collectd-9pafi 1/1 Running 0 23h 192.168.78.14
default collectd-3dslw 1/1 Running 0 23h 192.168.78.15
default collectd-ja6p7 1/1 Running 0 23h 192.168.78.16
default graphite-zruml 1/1 Running 0 1d 192.168.78.15
default ha-service-loadbalancer-a7ssn 1/1 Running 0 1d 192.168.78.15
default ha-service-loadbalancer-k3hq2 1/1 Running 0 1d 192.168.78.16
kube-system kube-dns-v11-4qoi8 4/4 Running 0 1d 192.168.78.16
kube-system kube-registry-v0-69k0f 1/1 Running 0 1d 192.168.78.16
kube-system kubernetes-dashboard-v1.0.0-cwg7k 1/1 Running 0 1d 192.168.78.15
```
",1.0,"kubernetes thinks pod is running even after the node was deleted explicitly - ```
kubectl delete node 192.168.78.14
kubectl get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE NODE
default collectd-9pafi 1/1 Running 0 23h 192.168.78.14
default collectd-3dslw 1/1 Running 0 23h 192.168.78.15
default collectd-ja6p7 1/1 Running 0 23h 192.168.78.16
default graphite-zruml 1/1 Running 0 1d 192.168.78.15
default ha-service-loadbalancer-a7ssn 1/1 Running 0 1d 192.168.78.15
default ha-service-loadbalancer-k3hq2 1/1 Running 0 1d 192.168.78.16
kube-system kube-dns-v11-4qoi8 4/4 Running 0 1d 192.168.78.16
kube-system kube-registry-v0-69k0f 1/1 Running 0 1d 192.168.78.16
kube-system kubernetes-dashboard-v1.0.0-cwg7k 1/1 Running 0 1d 192.168.78.15
```
",0,kubernetes thinks pod is running even after the node was deleted explicitly kubectl delete node kubectl get pods o wide all namespaces namespace name ready status restarts age node default collectd running default collectd running default collectd running default graphite zruml running default ha service loadbalancer running default ha service loadbalancer running kube system kube dns running kube system kube registry running kube system kubernetes dashboard running ,0
399835,11761451225.0,IssuesEvent,2020-03-13 21:54:31,azerothcore/azerothcore-wotlk,https://api.github.com/repos/azerothcore/azerothcore-wotlk,opened,"Passive spell ""thorns"" can MISS",CORE Class - Druid Priority - High,"
##### SMALL DESCRIPTION:
Druid's passive spell ""thorns"" can miss https://wotlk.evowow.com/?search=thorns#abilities
##### EXPECTED BLIZZLIKE BEHAVIOUR:
It should never miss
##### CURRENT BEHAVIOUR:
There is a low chance that instead of dealing damage, it misses, as if casting a spell.
##### STEPS TO REPRODUCE THE PROBLEM:
1. .cast 9910
2. Fight a melee mob a few levels higher than you, and check the combat log for a miss to happen
##### BRANCH(ES):
master
##### AC HASH/COMMIT:
2eeab4c72b0383d4611a17bbb925a1f2c412082d
",1.0,"Passive spell ""thorns"" can MISS -
##### SMALL DESCRIPTION:
Druid's passive spell ""thorns"" can miss https://wotlk.evowow.com/?search=thorns#abilities
##### EXPECTED BLIZZLIKE BEHAVIOUR:
It should never miss
##### CURRENT BEHAVIOUR:
There is a low chance that instead of dealing damage, it misses, as if casting a spell.
##### STEPS TO REPRODUCE THE PROBLEM:
1. .cast 9910
2. Fight a melee mob a few levels higher than you, and check the combat log for a miss to happen
##### BRANCH(ES):
master
##### AC HASH/COMMIT:
2eeab4c72b0383d4611a17bbb925a1f2c412082d
",0,passive spell thorns can miss this template is for problem reports for feature suggestion etc feel free to edit it if this is a crash report upload the crashlog on for issues containing a fix please create a pull request following this tutorial small description druid s passive spell thorns can miss expected blizzlike behaviour it should never miss current behaviour there is a low chance that instead of dealing damage it misses as if casting a spell steps to reproduce the problem describe precisely how to reproduce the bug so we can fix it or confirm its existence which commands to use which npc to teleport to do we need to have debug flags on cmake do we need to look at the console while the bug happens other steps cast fight a melee mob with few levels higher than you and check the combat log for the miss to happen branch es master ac hash commit if you do not fill this out we will close your issue never write latest always put the actual value instead find the commit hash unique identifier by running git log on your own clone of azerothcore or by looking at here ,0
28945,11706037212.0,IssuesEvent,2020-03-07 19:31:43,vlaship/spark-streaming,https://api.github.com/repos/vlaship/spark-streaming,opened,CVE-2018-14720 (High) detected in jackson-databind-2.6.5.jar,security vulnerability,"## CVE-2018-14720 - High Severity Vulnerability
Vulnerable Library - jackson-databind-2.6.5.jar
General data-binding functionality for Jackson: works on core streaming API
Path to dependency file: /tmp/ws-scm/spark-streaming/build.gradle
Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.6.5/d50be1723a09befd903887099ff2014ea9020333/jackson-databind-2.6.5.jar,/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.6.5/d50be1723a09befd903887099ff2014ea9020333/jackson-databind-2.6.5.jar
FasterXML jackson-databind 2.x before 2.9.7 might allow attackers to conduct external XML entity (XXE) attacks by leveraging failure to block unspecified JDK classes from polymorphic deserialization.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2018-14720 (High) detected in jackson-databind-2.6.5.jar - ## CVE-2018-14720 - High Severity Vulnerability
Vulnerable Library - jackson-databind-2.6.5.jar
General data-binding functionality for Jackson: works on core streaming API
Path to dependency file: /tmp/ws-scm/spark-streaming/build.gradle
Path to vulnerable library: /root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.6.5/d50be1723a09befd903887099ff2014ea9020333/jackson-databind-2.6.5.jar,/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.6.5/d50be1723a09befd903887099ff2014ea9020333/jackson-databind-2.6.5.jar
FasterXML jackson-databind 2.x before 2.9.7 might allow attackers to conduct external XML entity (XXE) attacks by leveraging failure to block unspecified JDK classes from polymorphic deserialization.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in jackson databind jar cve high severity vulnerability vulnerable library jackson databind jar general data binding functionality for jackson works on core streaming api library home page a href path to dependency file tmp ws scm spark streaming build gradle path to vulnerable library root gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar root gradle caches modules files com fasterxml jackson core jackson databind jackson databind jar dependency hierarchy spark streaming jar root library spark core jar x jackson databind jar vulnerable library found in head commit a href vulnerability details fasterxml jackson databind x before might allow attackers to conduct external xml entity xxe attacks by leveraging failure to block unspecified jdk classes from polymorphic deserialization publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource ,0
133832,5215375160.0,IssuesEvent,2017-01-26 04:32:37,imrogues/angularjs,https://api.github.com/repos/imrogues/angularjs,opened,Directive Linking and Scopes,[priority] low [status] accepted [type] feature,"### Description
Implement both of the core processes of the directive system: _compilation_ and _linking_. Understand how directives and scopes interact and how the directive system creates new scopes.
---
### Issue checklist
- [ ] How linking is built into the functions returned by the compile functions.
- [ ] How the _public link function_, the _composite link functions_, the _node link functions_, and the _directive link functions_ are all return values of their respective compile functions, and how they are chained together during linking.
- [ ] That a directive’s compile function should return its link function.
- [ ] That you can omit a directive’s compile function and supply the link function directly.
- [ ] How child nodes get linked whether the parent nodes have directives or not.
- [ ] That pre-link functions are invoked before child node linking, and post-link functions after it.
- [ ] That a link function is always post-link function unless explicitly defined otherwise.
- [ ] How the linking process protects itself from DOM mutations that occur during linking.
- [ ] How the nodes for multi-element directives are resolved during linking.
- [ ] How directives can request new, inherited scopes.
- [ ] That inherited scopes are shared by all the directives in the same element and its children.
- [ ] How CSS classes and jQuery data are added to elements that have directives with inherited scopes.
- [ ] How directives can request new isolate scopes.
- [ ] That isolate scopes are not shared between directives in the same element or its children.
- [ ] That there can only be one isolate scope directive per element.
- [ ] That there cannot be inherited scope directives on an element when there is an isolate scope directive.
- [ ] How element attributes can be bound as observed values on an isolate scope.
- [ ] How _one_ and two–way data bindings can be attached on an isolate scope.
- [ ] How one–way data bindings watch the parent expression, and two–way data bindings watch both; the parent expression and the child scope attribute.
- [ ] That when both the parent and child change simultaneously in a two–way data binding, the parent takes precedence.
- [ ] How collections are supported in two–way data bindings.
- [ ] How invokable expressions can be attached on an isolate scope.
- [ ] How named arguments can be used with invokable expressions.
All issues in milestone: [6 Directives](https://github.com/imrogues/angularjs/milestone/6)
---
### Assignees
- [ ] Final assign @imrogues",1.0,"Directive Linking and Scopes - ### Description
Implement both of the core processes of the directive system: _compilation_ and _linking_. Understand how directives and scopes interact and how the directive system creates new scopes.
---
### Issue checklist
- [ ] How linking is built into the functions returned by the compile functions.
- [ ] How the _public link function_, the _composite link functions_, the _node link functions_, and the _directive link functions_ are all return values of their respective compile functions, and how they are chained together during linking.
- [ ] That a directive’s compile function should return its link function.
- [ ] That you can omit a directive’s compile function and supply the link function directly.
- [ ] How child nodes get linked whether the parent nodes have directives or not.
- [ ] That pre-link functions are invoked before child node linking, and post-link functions after it.
- [ ] That a link function is always post-link function unless explicitly defined otherwise.
- [ ] How the linking process protects itself from DOM mutations that occur during linking.
- [ ] How the nodes for multi-element directives are resolved during linking.
- [ ] How directives can request new, inherited scopes.
- [ ] That inherited scopes are shared by all the directives in the same element and its children.
- [ ] How CSS classes and jQuery data are added to elements that have directives with inherited scopes.
- [ ] How directives can request new isolate scopes.
- [ ] That isolate scopes are not shared between directives in the same element or its children.
- [ ] That there can only be one isolate scope directive per element.
- [ ] That there cannot be inherited scope directives on an element when there is an isolate scope directive.
- [ ] How element attributes can be bound as observed values on an isolate scope.
- [ ] How _one_ and two–way data bindings can be attached on an isolate scope.
- [ ] How one–way data bindings watch the parent expression, and two–way data bindings watch both: the parent expression and the child scope attribute.
- [ ] That when both the parent and child change simultaneously in a two–way data binding, the parent takes precedence.
- [ ] How collections are supported in two–way data bindings.
- [ ] How invokable expressions can be attached on an isolate scope.
- [ ] How named arguments can be used with invokable expressions.
All issues in milestone: [6 Directives](https://github.com/imrogues/angularjs/milestone/6)
---
### Assignees
- [ ] Final assign @imrogues",0,directive linking and scopes description implement both of the core processes of the directive system implemented compilation and linking understand how directives and scopes interact and how the directive system creates new scopes issue checklist how linking is built into the functions returned by the compile functions how the public link function the composite link functions the node link functions and the directive link functions are all return values of their respective compile functions and how they are chained together during linking that a directive’s compile function should return its link function that you can omit a directive’s compile function and supply the link function directly how child nodes get linked whether the parent nodes have directives or not that pre link functions are invoked before child node linking and post link functions after it that a link function is always post link function unless explicitly defined otherwise how the linking process protects itself from dom mutations that occur during linking how the nodes for multi element directives are resolved during linking how directives can request new inherited scopes that inherited scopes are shared by all the directives in the same element and its children how css classes and jquery data are added to elements that have directives with inherited scopes how directives can request new isolate scopes that isolate scopes are not shared between directives in the same element or its children that there can only be one isolate scope directive per element that there cannot be inherited scope directives on an element when there is an isolate scope directive how element attributes can be bound as observed values on an isolate scope how one and two–way data bindings can be attached on an isolate scope how one–way data bindings watch the parent expression and two–way data bindings watch both the parent expression and the child scope attribute that went both the parent 
and child change simultaneously in a two–way data binding the parent takes precedence how collections are supported in two–way data bindings how invokable expressions can be attached on an isolate scope how named arguments can be used with invokable expressions all issues in milestone assignees final assign imrogues,0
690913,23677229037.0,IssuesEvent,2022-08-28 09:11:57,fredo-ai/Fredo-Public,https://api.github.com/repos/fredo-ai/Fredo-Public,closed,A successful transcription should also save the text as a new text item + link to the audio file,priority-1,"Link to Miro: https://miro.com/app/board/o9J_lttkfEA=/?moveToWidget=3458764529560852927&cot=14
@JerryVDP ",1.0,"A successful transcription should also save the text as a new text item + link to the audio file - Link to Miro: https://miro.com/app/board/o9J_lttkfEA=/?moveToWidget=3458764529560852927&cot=14
@JerryVDP ",0,successful transcribe should also save the text as a new text item link to the audio file link to miro jerryvdp ,0
110485,13909495473.0,IssuesEvent,2020-10-20 14:58:48,ToxicBot-Discord/ToxicBot,https://api.github.com/repos/ToxicBot-Discord/ToxicBot,closed,Changing the responses given by the bot,beginner design good first issue hacktoberfest up-for-grabs,Currently the bot responds to the user using plain text. A more elegant approach would be the use of Discord's Embed. Your task will be to convert each of the responses provided by the bot to a Discord Embed.,1.0,Changing the responses given by the bot - Currently the bot responds to the user using plain text. A more elegant approach would be the use of Discord's Embed. Your task will be to convert each of the responses provided by the bot to a Discord Embed.,0,changing the responses given by the bot currently the bot responds to the user using normal texts a more elegant approach would be the usage of discord embed your task will be to convert each of the responses provided by the bot to a discord embed ,0
328058,9985518965.0,IssuesEvent,2019-07-10 16:45:25,CentOS-PaaS-SIG/linchpin,https://api.github.com/repos/CentOS-PaaS-SIG/linchpin,closed,RFE: Linchpin AppImage (Preview),appimage ci low priority packaging,"Linchpin distribution is currently done only with pip and containers. Both options are not simple and require knowledge and experience. AppImage is a single binary that contains all the required files to run a program; the only downsides are that it requires installing FUSE on the host system (available on most distributions) and the file size. However, it could be a good alternative in some cases, such as getting started.",1.0,"RFE: Linchpin AppImage (Preview) - Linchpin distribution is currently done only with pip and containers. Both options are not simple and require knowledge and experience. AppImage is a single binary that contains all the required files to run a program; the only downsides are that it requires installing FUSE on the host system (available on most distributions) and the file size. However, it could be a good alternative in some cases, such as getting started.",0,rfe linchpin appimage preview linchpin distribution is currently done only with pip and containers both options are not simple and require knowledge and experience appimage is a single binary that contains all the required files to run a program the only downside is it requires installing fuse on the system host available on most distrobutions and the file size however it could be a good alternative in some cases such as getting started ,0
12135,4369748780.0,IssuesEvent,2016-08-04 01:48:07,dotnet/coreclr,https://api.github.com/repos/dotnet/coreclr,closed,ARM64: Assertion failed 'fieldSeq != FieldSeqStore::NotAField()',ARM64 CodeGen,"Assertion in a corefx test, System.Security.Cryptography.Pkcs.Tests
```
Assert failure(PID 9484 [0x0000250c], Thread: 18696 [0x4908]): Assertion failed 'fieldSeq != FieldSeqStore::NotAField()' in 'EncodeHelpers:EncodeRecipientId(ref,ref,long,long,ref):struct' (IL size 169)
File: e:\github\coreclr\src\jit\valuenum.cpp Line: 2354
Image: E:\Github\corefx\bin\tests\Windows_NT.AnyCPU.Release\System.Security.Cryptography.Pkcs.Tests\netcoreapp1.0\CoreRun.exe
```",1.0,"ARM64: Assertion failed 'fieldSeq != FieldSeqStore::NotAField()' - Assertion in a corefx test, System.Security.Cryptography.Pkcs.Tests
```
Assert failure(PID 9484 [0x0000250c], Thread: 18696 [0x4908]): Assertion failed 'fieldSeq != FieldSeqStore::NotAField()' in 'EncodeHelpers:EncodeRecipientId(ref,ref,long,long,ref):struct' (IL size 169)
File: e:\github\coreclr\src\jit\valuenum.cpp Line: 2354
Image: E:\Github\corefx\bin\tests\Windows_NT.AnyCPU.Release\System.Security.Cryptography.Pkcs.Tests\netcoreapp1.0\CoreRun.exe
```",0, assertion failed fieldseq fieldseqstore notafield assertion in a corefx test system security cryptography pkcs tests assert failure pid thread assertion failed fieldseq fieldseqstore notafield in encodehelpers encoderecipientid ref ref long long ref struct il size file e github coreclr src jit valuenum cpp line image e github corefx bin tests windows nt anycpu release system security cryptography pkcs tests corerun exe ,0
604206,18679374823.0,IssuesEvent,2021-11-01 02:05:25,boostcampwm-2021/web14-salondesrefuses,https://api.github.com/repos/boostcampwm-2021/web14-salondesrefuses,closed,(High)[FE] Navigate to the creation page by clicking the exhibition creation button,🚀 Front Priority: High,"## 📃 Issue description
Users can create their own exhibition via the button at the top.
## ✅ Checklist
- [ ] Route to the creation page when the exhibition creation button is clicked
## 📌 References
",1.0,"(High)[FE] Navigate to the creation page by clicking the exhibition creation button - ## 📃 Issue description
Users can create their own exhibition via the button at the top.
## ✅ Checklist
- [ ] Route to the creation page when the exhibition creation button is clicked
## 📌 References
",0, high navigate to the creation page by clicking the exhibition creation button 📃 issue description users can create their own exhibition via the button at the top ✅ checklist route to the creation page when the exhibition creation button is clicked 📌 references ,0
2371,24963695196.0,IssuesEvent,2022-11-01 17:32:55,dotnet/roslyn,https://api.github.com/repos/dotnet/roslyn,closed,Opening a _very_ large file causes Visual Studio to crash,Bug help wanted Area-IDE Tenet-Reliability Developer Community,"_This issue has been moved from [a ticket on Developer Community](https://developercommunity.visualstudio.com/content/problem/421417/devenv-unknown-hard-error-ctorchararraystartlength.html)._
---
When working with big code files the IDE frequently crashes with Unknown Hard Error. See attached screenshot.
The stacktrace in the event log shows CtorCharArrayStartLength as the failure point. See attached full stacktrace for more info.
Application: devenv.exe
Framework Version: v4.0.30319
Description: The application requested process termination through System.Environment.FailFast(string message).
Message: System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
at System.String.CtorCharArrayStartLength(Char[] value, Int32 startIndex, Int32 length)
I created a repro to simulate the problem. Open the attached code file, notice that the call to the EndsWith method has a start parenthesis but is not closed. Add and remove the close parenthesis character a few times to trigger the crash.
---
### Original Comments
#### Visual Studio Feedback System on 6/24/2019, 00:44 AM:
This issue is currently being investigated. Our team will get back to you if either more information is needed, a workaround is available, or the issue is resolved.
#### Sam Harwell [MSFT] on 8/8/2019, 03:58 PM:
Thank you for providing feedback, and we’re sorry to hear it’s not behaving as you expect. Based on your description so far, it sounds like you are experiencing a problem which is historically hard to diagnose and resolve using the normal “steps to reproduce”. We created a set of instructions for providing additional information which will help us track down the true source of the problems.
#### bugreporter5367 on 8/8/2019, 05:16 PM:
@sharwell How is it hard to diagnose? I attached a code sample with repro steps to crash the IDE. VS 16.2 is even easier, just opening the file causes an immediate crash. What more would you like me to provide?
---
### Original Solutions
(no solutions)",True,"Opening a _very_ large file causes Visual Studio to crash - _This issue has been moved from [a ticket on Developer Community](https://developercommunity.visualstudio.com/content/problem/421417/devenv-unknown-hard-error-ctorchararraystartlength.html)._
---
When working with big code files the IDE frequently crashes with Unknown Hard Error. See attached screenshot.
The stacktrace in the event log shows CtorCharArrayStartLength as the failure point. See attached full stacktrace for more info.
Application: devenv.exe
Framework Version: v4.0.30319
Description: The application requested process termination through System.Environment.FailFast(string message).
Message: System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.
at System.String.CtorCharArrayStartLength(Char[] value, Int32 startIndex, Int32 length)
I created a repro to simulate the problem. Open the attached code file, notice that the call to the EndsWith method has a start parenthesis but is not closed. Add and remove the close parenthesis character a few times to trigger the crash.
---
### Original Comments
#### Visual Studio Feedback System on 6/24/2019, 00:44 AM:
This issue is currently being investigated. Our team will get back to you if either more information is needed, a workaround is available, or the issue is resolved.
#### Sam Harwell [MSFT] on 8/8/2019, 03:58 PM:
Thank you for providing feedback, and we’re sorry to hear it’s not behaving as you expect. Based on your description so far, it sounds like you are experiencing a problem which is historically hard to diagnose and resolve using the normal “steps to reproduce”. We created a set of instructions for providing additional information which will help us track down the true source of the problems.
#### bugreporter5367 on 8/8/2019, 05:16 PM:
@sharwell How is it hard to diagnose? I attached a code sample with repro steps to crash the IDE. VS 16.2 is even easier, just opening the file causes an immediate crash. What more would you like me to provide?
---
### Original Solutions
(no solutions)",1,opening a very large file causes visual studio to crash this issue has been moved from when working with big code files the ide frequently crashes with unknown hard error see attached screenshot the stacktrace in the event log shows ctorchararraystartlength as the failure point see attached full stacktrace for more info application devenv exe framework version description the application requested process termination through system environment failfast string message message system outofmemoryexception exception of type system outofmemoryexception was thrown at system string ctorchararraystartlength char value startindex length i created a repro to simulate the problem open the attached code file notice that the call to the endswith method has a start parenthesis but is not closed add and remove the close parenthesis character a few times to trigger the crash original comments visual studio feedback system on am this issue is currently being investigated our team will get back to you if either more information is needed a workaround is available or the issue is resolved sam harwell on pm thank you for providing feedback and we’re sorry to hear it’s not behaving as you expect based on your description so far it sounds like you are experiencing a problem which is historically hard to diagnose and resolve using the normal “steps to reproduce” we created a set of instructions for providing additional information which will help us track down the true source of the problems based on the information provided so far the most likely scenario to follow is for “crashes” please take a look at the following document to provide the feedback most relevant for the problems you would like to see fixed a target blank href on pm sharwell how is it hard to diagnose i attached a code sample with repro steps to crash the ide vs is even easier just opening the file causes an immediate crash what more would you like me to provide original solutions no solutions ,1
441049,12707051633.0,IssuesEvent,2020-06-23 08:16:47,ballerina-platform/ballerina-lang,https://api.github.com/repos/ballerina-platform/ballerina-lang,opened,Compiler plugins need to be run only when necessary,Area/Language Priority/Blocker Type/Bug,"**Description:**
There are some scenarios in which compiler plugins do not need to run when the source code is compiled.
",1.0,"Compiler plugins need to be run only when necessary - **Description:**
There are some scenarios in which compiler plugins do not need to run when the source code is compiled.
",0,compiler plugins needs to be run when it is necessary description there are some scenarios that compiler plugins is not needed to compile when the source code is compiled ,0
250787,27111540230.0,IssuesEvent,2023-02-15 15:38:05,EliyaC/JAVA-DEMO,https://api.github.com/repos/EliyaC/JAVA-DEMO,closed,CVE-2022-28367 (Medium) detected in antisamy-1.5.3.jar - autoclosed,security vulnerability,"## CVE-2022-28367 - Medium Severity Vulnerability
Vulnerable Library - antisamy-1.5.3.jar
The OWASP AntiSamy project is a collection of APIs for safely allowing users to supply their own HTML
and CSS without exposing the site to XSS vulnerabilities.
OWASP AntiSamy before 1.6.6 allows XSS via HTML tag smuggling on STYLE content with crafted input. The output serializer does not properly encode the supposed Cascading Style Sheets (CSS) content.
Direct dependency fix Resolution (org.owasp.esapi:esapi): 2.3.0.0
***
:rescue_worker_helmet: Automatic Remediation is available for this issue",True,"CVE-2022-28367 (Medium) detected in antisamy-1.5.3.jar - autoclosed - ## CVE-2022-28367 - Medium Severity Vulnerability
Vulnerable Library - antisamy-1.5.3.jar
The OWASP AntiSamy project is a collection of APIs for safely allowing users to supply their own HTML
and CSS without exposing the site to XSS vulnerabilities.
OWASP AntiSamy before 1.6.6 allows XSS via HTML tag smuggling on STYLE content with crafted input. The output serializer does not properly encode the supposed Cascading Style Sheets (CSS) content.
Direct dependency fix Resolution (org.owasp.esapi:esapi): 2.3.0.0
***
:rescue_worker_helmet: Automatic Remediation is available for this issue",0,cve medium detected in antisamy jar autoclosed cve medium severity vulnerability vulnerable library antisamy jar the owasp antisamy project is a collection of apis for safely allowing users to supply their own html and css without exposing the site to xss vulnerabilities library home page a href path to dependency file pom xml path to vulnerable library home wss scanner repository org owasp antisamy antisamy antisamy jar dependency hierarchy esapi jar root library x antisamy jar vulnerable library found in head commit a href found in base branch main vulnerability details owasp antisamy before allows xss via html tag smuggling on style content with crafted input the output serializer does not properly encode the supposed cascading style sheets css content publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org owasp antisamy antisamy direct dependency fix resolution org owasp esapi esapi rescue worker helmet automatic remediation is available for this issue,0
77960,15569906849.0,IssuesEvent,2021-03-17 01:16:11,jrrk/riscv-linux,https://api.github.com/repos/jrrk/riscv-linux,opened,"CVE-2020-1749 (High) detected in linux-amlogicv4.18, aspeedaspeed-4.19-devicetree-no-fsi",security vulnerability,"## CVE-2020-1749 - High Severity Vulnerability
Vulnerable Libraries - linux-amlogicv4.18, aspeedaspeed-4.19-devicetree-no-fsi
Vulnerability Details
A flaw was found in the Linux kernel's implementation of some networking protocols in IPsec, such as VXLAN and GENEVE tunnels over IPv6. When an encrypted tunnel is created between two hosts, the kernel isn't correctly routing tunneled data over the encrypted link; rather sending the data unencrypted. This would allow anyone in between the two endpoints to read the traffic unencrypted. The main threat from this vulnerability is to data confidentiality.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2020-1749 (High) detected in linux-amlogicv4.18, aspeedaspeed-4.19-devicetree-no-fsi - ## CVE-2020-1749 - High Severity Vulnerability
Vulnerable Libraries - linux-amlogicv4.18, aspeedaspeed-4.19-devicetree-no-fsi
Vulnerability Details
A flaw was found in the Linux kernel's implementation of some networking protocols in IPsec, such as VXLAN and GENEVE tunnels over IPv6. When an encrypted tunnel is created between two hosts, the kernel isn't correctly routing tunneled data over the encrypted link; rather sending the data unencrypted. This would allow anyone in between the two endpoints to read the traffic unencrypted. The main threat from this vulnerability is to data confidentiality.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in linux aspeedaspeed devicetree no fsi cve high severity vulnerability vulnerable libraries linux aspeedaspeed devicetree no fsi vulnerability details a flaw was found in the linux kernel s implementation of some networking protocols in ipsec such as vxlan and geneve tunnels over when an encrypted tunnel is created between two hosts the kernel isn t correctly routing tunneled data over the encrypted link rather sending the data unencrypted this would allow anyone in between the two endpoints to read the traffic unencrypted the main threat from this vulnerability is to data confidentiality publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource ,0
373970,11053557440.0,IssuesEvent,2019-12-10 11:38:33,bounswe/bounswe2019group4,https://api.github.com/repos/bounswe/bounswe2019group4,opened,Backend Additional Trading Equipment,Back-End Priority: Medium Type: Development,"Since we define trading equipment as ""Indices, stocks, ETFs, commodities, currencies, funds, bonds and cryptocurrencies"" in our requirements glossary, we need to provide more trading equipment types. Currently we only have currencies.
We fetch currency values from the [Alpha Vantage](https://www.alphavantage.co/documentation/) third-party API. This API also provides cryptocurrencies and stocks. We can use it.
We fetch currency values from the [Alpha Vantage](https://www.alphavantage.co/documentation/) third-party API. This API also provides cryptocurrencies and stocks. We can use it.",0,backend additional trading equipment since we define trading equipment as indices stocks etfs commodities currencies funds bonds and cryptocurrencies in our requirements glossary we need to provide more trading equipment types currently we only have currencies we fetch currency values from party api this api also provides cryptocurrencies and stocks we can use it ,0
1580,17242663702.0,IssuesEvent,2021-07-21 02:24:21,dotnet/runtime,https://api.github.com/repos/dotnet/runtime,closed,System.Collections.Concurrent.Tests crashing in CI,area-VM-coreclr needs more info tenet-reliability,"Build: https://dev.azure.com/dnceng/public/_build/results?buildId=905607&view=ms.vss-test-web.build-test-results-tab&runId=28901114&resultId=182589&paneView=attachments
Configuration: `net6.0-Linux-Release-x64-CoreCLR_release-RedHat.7.Amd64.Open`
how-to-debug-dump.md:
https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-heads-master-b6482f3963824bb38a/System.Collections.Concurrent.Tests/how-to-debug-dump.md?sv=2019-07-07&se=2020-12-22T10%3A40%3A07Z&sr=c&sp=rl&sig=l5N76%2FlDXHLoRkWIFox8OOiSkZPdUawXGM9N0cBe86A%3D
core.1000.22024:
https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-heads-master-b6482f3963824bb38a/System.Collections.Concurrent.Tests/core.1000.22024?sv=2019-07-07&se=2020-12-22T10%3A40%3A07Z&sr=c&sp=rl&sig=l5N76%2FlDXHLoRkWIFox8OOiSkZPdUawXGM9N0cBe86A%3D
console.8cc118e5.log:
https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-heads-master-b6482f3963824bb38a/System.Collections.Concurrent.Tests/console.8cc118e5.log?sv=2019-07-07&se=2020-12-22T10%3A40%3A07Z&sr=c&sp=rl&sig=l5N76%2FlDXHLoRkWIFox8OOiSkZPdUawXGM9N0cBe86A%3D
Runfo Tracking Issue: [system.collections.concurrent.tests crashes](https://runfo.azurewebsites.net/tracking/issue/145)
|Build|Definition|Kind|Run Name|Console|Core Dump|Test Results|Run Client|
|---|---|---|---|---|---|---|---|
|[1082899](https://dev.azure.com/dnceng/public/_build/results?buildId=1082899)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 51099](https://github.com/dotnet/runtime/pull/51099)|net6.0-Linux-Release-arm-CoreCLR_checked-(Alpine.312.Arm32.Open)Ubuntu.1804.ArmArch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:alpine-3.12-helix-arm32v7-20200908125213-5bece88|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-51099-merge-d6024ba89abf4f44b9/System.Collections.Concurrent.Tests/console.72f93a48.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-05-02T10%2525253A09%2525253A29Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DAfGzZ10r%2525252BbKc5ClEW3Dac%2525252FPX0urUKP7TZmW72za8sGk%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-51099-merge-d6024ba89abf4f44b9/System.Collections.Concurrent.Tests/core.1001.163?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-05-02T10%2525253A09%2525253A29Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DAfGzZ10r%2525252BbKc5ClEW3Dac%2525252FPX0urUKP7TZmW72za8sGk%2525253D)|||
|[1072066](https://dev.azure.com/dnceng/public/_build/results?buildId=1072066)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50364](https://github.com/dotnet/runtime/pull/50364)|net6.0-Linux-Release-arm-CoreCLR_checked-(Alpine.312.Arm32.Open)Ubuntu.1804.ArmArch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:alpine-3.12-helix-arm32v7-20200908125213-5bece88|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50364-merge-81ec4a7852984fd3bf/System.Collections.Concurrent.Tests/console.afcca185.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-24T16%2525253A27%2525253A52Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DfSIplwNGfpow%2525252BdkjBX2WSP0lU0SxLVroL7GY12wsNsE%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50364-merge-81ec4a7852984fd3bf/System.Collections.Concurrent.Tests/core.1001.163?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-24T16%2525253A27%2525253A52Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DfSIplwNGfpow%2525252BdkjBX2WSP0lU0SxLVroL7GY12wsNsE%2525253D)||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50364-merge-81ec4a7852984fd3bf/System.Collections.Concurrent.Tests/0b193330-6958-4fc8-8d73-f8703acfb49a.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-24T16%2525253A27%2525253A52Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DfSIplwNGfpow%2525252BdkjBX2WSP0lU0SxLVroL7GY12wsNsE%2525253D)|
|[1072066](https://dev.azure.com/dnceng/public/_build/results?buildId=1072066)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50364](https://github.com/dotnet/runtime/pull/50364)|net6.0-Linux-Release-arm-CoreCLR_checked-(Ubuntu.1804.Arm32.Open)Ubuntu.1804.Armarch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-18.04-helix-arm32v7-bfcd90a-20200121150440|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50364-merge-edfb95f980a148dc9e/System.Collections.Concurrent.Tests/console.5ec15241.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-24T16%2525253A29%2525253A11Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D0NC7E9Gax7ZW5nyPylXI%2525252BTsAr2ruO5IKOtOmy%2525252BGgp3A%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50364-merge-edfb95f980a148dc9e/System.Collections.Concurrent.Tests/core.1001.58?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-24T16%2525253A29%2525253A11Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D0NC7E9Gax7ZW5nyPylXI%2525252BTsAr2ruO5IKOtOmy%2525252BGgp3A%2525253D)|||
|[1071508](https://dev.azure.com/dnceng/public/_build/results?buildId=1071508)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50364](https://github.com/dotnet/runtime/pull/50364)|net6.0-Linux-Release-arm-CoreCLR_checked-(Alpine.312.Arm32.Open)Ubuntu.1804.ArmArch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:alpine-3.12-helix-arm32v7-20200908125213-5bece88|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50364-merge-7e065a43acbe42ad83/System.Collections.Concurrent.Tests/console.d18c208c.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-23T19%2525253A09%2525253A40Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DP7d7bOVv1AMEq4fUlcLIHqzCRqRuH07pY%2525252FdTi2rEQxQ%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50364-merge-7e065a43acbe42ad83/System.Collections.Concurrent.Tests/core.1001.163?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-23T19%2525253A09%2525253A40Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DP7d7bOVv1AMEq4fUlcLIHqzCRqRuH07pY%2525252FdTi2rEQxQ%2525253D)|||
|[1071508](https://dev.azure.com/dnceng/public/_build/results?buildId=1071508)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50364](https://github.com/dotnet/runtime/pull/50364)|net6.0-Linux-Release-arm-CoreCLR_checked-(Ubuntu.1804.Arm32.Open)Ubuntu.1804.Armarch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-18.04-helix-arm32v7-bfcd90a-20200121150440|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50364-merge-084729cc26c64be190/System.Collections.Concurrent.Tests/console.6c1d6d38.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-23T19%2525253A08%2525253A01Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DQbvjmP8ZLieZpcZyTS%2525252FW5grO8OMjRMGFfii6akwqDOk%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50364-merge-084729cc26c64be190/System.Collections.Concurrent.Tests/core.1001.59?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-23T19%2525253A08%2525253A01Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DQbvjmP8ZLieZpcZyTS%2525252FW5grO8OMjRMGFfii6akwqDOk%2525253D)|||
|[1067051](https://dev.azure.com/dnceng/public/_build/results?buildId=1067051)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50479](https://github.com/dotnet/runtime/pull/50479)|net6.0-OSX-Debug-x64-Mono_release-OSX.1014.Amd64.Open|[console.log](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-17fe7ae6119e488f8c/System.Collections.Concurrent.Tests/console.32e89931.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A27%2525253A27Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DNcC2ynKHdY1KHXlGAKD9ffarQ0qUcm1dHZ7PsSnK1jA%2525253D)|[core dump](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-17fe7ae6119e488f8c/System.Collections.Concurrent.Tests/core.20776?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A27%2525253A27Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DNcC2ynKHdY1KHXlGAKD9ffarQ0qUcm1dHZ7PsSnK1jA%2525253D)||[runclient.py](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-17fe7ae6119e488f8c/System.Collections.Concurrent.Tests/8ddf8720-4f54-4923-a81d-6ca37fa74d92.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A27%2525253A27Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DNcC2ynKHdY1KHXlGAKD9ffarQ0qUcm1dHZ7PsSnK1jA%2525253D)|
|[1067051](https://dev.azure.com/dnceng/public/_build/results?buildId=1067051)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50479](https://github.com/dotnet/runtime/pull/50479)|net6.0-OSX-Debug-x64-Mono_release-OSX.1015.Amd64.Open|[console.log](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-183d2cde3f494fc6b0/System.Collections.Concurrent.Tests/console.25bf98b5.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A27%2525253A29Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D2dGWmMxS%2525252B9XGhjftWbCIq7qjGBp%2525252Fh2N50jE0F2%2525252Bzu8w%2525253D)|[core dump](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-183d2cde3f494fc6b0/System.Collections.Concurrent.Tests/core.45256?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A27%2525253A29Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D2dGWmMxS%2525252B9XGhjftWbCIq7qjGBp%2525252Fh2N50jE0F2%2525252Bzu8w%2525253D)|||
|[1067051](https://dev.azure.com/dnceng/public/_build/results?buildId=1067051)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50479](https://github.com/dotnet/runtime/pull/50479)|net6.0-Linux-Debug-x64-mono_interpreter_release-Debian.9.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-d11930fec57940d58b/System.Collections.Concurrent.Tests/console.6a7903d5.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A30%2525253A04Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DS0nIVbPf43mT4FTz5UzCmOaWipDga%2525252B3cxP%2525252FjcOJhmHw%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-d11930fec57940d58b/System.Collections.Concurrent.Tests/core.1000.2046?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A30%2525253A04Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DS0nIVbPf43mT4FTz5UzCmOaWipDga%2525252B3cxP%2525252FjcOJhmHw%2525253D)|||
|[1067051](https://dev.azure.com/dnceng/public/_build/results?buildId=1067051)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50479](https://github.com/dotnet/runtime/pull/50479)|net6.0-Linux-Debug-x64-Mono_release-(Centos.8.Amd64.Open)Ubuntu.1604.Amd64.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:centos-8-helix-20201229003624-c1bf759|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-0e7455b5c41a4d64b2/System.Collections.Concurrent.Tests/console.7eb9db9e.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A03Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D6%2525252B84D7E2nJ1j72b413fYgbZDlbFuXlBIAuIFsIiVqP4%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-0e7455b5c41a4d64b2/System.Collections.Concurrent.Tests/core.1000.25?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A03Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D6%2525252B84D7E2nJ1j72b413fYgbZDlbFuXlBIAuIFsIiVqP4%2525253D)|||
|[1067051](https://dev.azure.com/dnceng/public/_build/results?buildId=1067051)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50479](https://github.com/dotnet/runtime/pull/50479)|net6.0-Linux-Debug-x64-Mono_release-RedHat.7.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-4561dc023a4342c787/System.Collections.Concurrent.Tests/console.1104b657.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A04Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D%2525252BxNJdP%2525252BxQJ6N65x%2525252FF3lg4pSLsiCRxaMSg24jhdQrLik%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-4561dc023a4342c787/System.Collections.Concurrent.Tests/core.1000.2057?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A04Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D%2525252BxNJdP%2525252BxQJ6N65x%2525252FF3lg4pSLsiCRxaMSg24jhdQrLik%2525253D)|||
|[1067051](https://dev.azure.com/dnceng/public/_build/results?buildId=1067051)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50479](https://github.com/dotnet/runtime/pull/50479)|net6.0-Linux-Debug-x64-Mono_release-(Debian.10.Amd64.Open)ubuntu.1604.amd64.open@mcr.microsoft.com/dotnet-buildtools/prereqs:debian-10-helix-amd64-bfcd90a-20200121150006|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-f98eed6cbc78418581/System.Collections.Concurrent.Tests/console.074090ee.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A05Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dyz8nQHEJ6WhStswrpO2CBdoDNxZU0ci7evJRHb%2525252BOQVs%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-f98eed6cbc78418581/System.Collections.Concurrent.Tests/core.1000.23?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A05Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dyz8nQHEJ6WhStswrpO2CBdoDNxZU0ci7evJRHb%2525252BOQVs%2525253D)|||
|[1067051](https://dev.azure.com/dnceng/public/_build/results?buildId=1067051)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50479](https://github.com/dotnet/runtime/pull/50479)|net6.0-Linux-Debug-x64-Mono_release-Ubuntu.1604.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-3f1b189bacc14db8bd/System.Collections.Concurrent.Tests/console.441c1bb2.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A06Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DRXpe5PKH0LLdjd3djMMidYRbkqF3OXgL5Lr6aYzC3vg%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-3f1b189bacc14db8bd/System.Collections.Concurrent.Tests/core.1000.12478?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A06Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DRXpe5PKH0LLdjd3djMMidYRbkqF3OXgL5Lr6aYzC3vg%2525253D)|||
|[1067051](https://dev.azure.com/dnceng/public/_build/results?buildId=1067051)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50479](https://github.com/dotnet/runtime/pull/50479)|net6.0-Linux-Debug-x64-Mono_release-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-915e3591e5304a0dbf/System.Collections.Concurrent.Tests/console.f1343d62.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A07Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DcqmHncj5YCi438sEdFS0ecaTjznwyKvxvefvRaIeHCU%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-915e3591e5304a0dbf/System.Collections.Concurrent.Tests/core.1000.19791?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A07Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DcqmHncj5YCi438sEdFS0ecaTjznwyKvxvefvRaIeHCU%2525253D)|||
|[1067051](https://dev.azure.com/dnceng/public/_build/results?buildId=1067051)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50479](https://github.com/dotnet/runtime/pull/50479)|net6.0-Linux-Debug-x64-Mono_release-SLES.15.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-6b70c9a451e3455591/System.Collections.Concurrent.Tests/console.523bf176.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A08Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D1gVKcRHgg5JwcERVlQgqzbtwrep2yiEdaPVSkV%2525252Bi0%2525252FI%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-6b70c9a451e3455591/System.Collections.Concurrent.Tests/core.1000.7848?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A08Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D1gVKcRHgg5JwcERVlQgqzbtwrep2yiEdaPVSkV%2525252Bi0%2525252FI%2525253D)|||
|[1067051](https://dev.azure.com/dnceng/public/_build/results?buildId=1067051)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50479](https://github.com/dotnet/runtime/pull/50479)|net6.0-Linux-Debug-x64-Mono_release-(Fedora.30.Amd64.Open)ubuntu.1604.amd64.open@mcr.microsoft.com/dotnet-buildtools/prereqs:fedora-30-helix-20200512010621-4f8cef7|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-6f5f06232de042a399/System.Collections.Concurrent.Tests/console.a9ac2303.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A09Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DZJDGfBSwn7Ym3kdxPMKSLf8cigksk85HcAWNG%2525252BgA8p0%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-6f5f06232de042a399/System.Collections.Concurrent.Tests/core.1000.23?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A09Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DZJDGfBSwn7Ym3kdxPMKSLf8cigksk85HcAWNG%2525252BgA8p0%2525253D)|||
|[1067051](https://dev.azure.com/dnceng/public/_build/results?buildId=1067051)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50479](https://github.com/dotnet/runtime/pull/50479)|net6.0-Linux-Debug-arm64-Mono_release-(Ubuntu.1804.ArmArch.Open)Ubuntu.1804.ArmArch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-16.04-helix-arm64v8-20210106155927-56c6673|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-652aaf60286040c5b6/System.Collections.Concurrent.Tests/console.b1900fa7.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A45Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DDn7%2525252BV%2525252BccK3XqhtC4ReOWH7juHWem8YeVzG6U2%2525252FTOcus%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-652aaf60286040c5b6/System.Collections.Concurrent.Tests/core.1001.100?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A45Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DDn7%2525252BV%2525252BccK3XqhtC4ReOWH7juHWem8YeVzG6U2%2525252FTOcus%2525253D)|||
|[1066426](https://dev.azure.com/dnceng/public/_build/results?buildId=1066426)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50364](https://github.com/dotnet/runtime/pull/50364)|net6.0-Linux-Release-arm-CoreCLR_checked-(Alpine.312.Arm32.Open)Ubuntu.1804.ArmArch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:alpine-3.12-helix-arm32v7-20200908125213-5bece88|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50364-merge-2fea74b42351452d96/System.Collections.Concurrent.Tests/console.294d3bc6.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-20T22%2525253A13%2525253A09Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DQq%2525252BYfDDhvijZy2jZrT4K22NK%2525252FrWibMRCX%2525252B%2525252BahY1xm3E%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50364-merge-2fea74b42351452d96/System.Collections.Concurrent.Tests/core.1001.165?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-20T22%2525253A13%2525253A09Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DQq%2525252BYfDDhvijZy2jZrT4K22NK%2525252FrWibMRCX%2525252B%2525252BahY1xm3E%2525253D)|||
|[1066426](https://dev.azure.com/dnceng/public/_build/results?buildId=1066426)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50364](https://github.com/dotnet/runtime/pull/50364)|net6.0-Linux-Release-arm-CoreCLR_checked-(Ubuntu.1804.Arm32.Open)Ubuntu.1804.Armarch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-18.04-helix-arm32v7-bfcd90a-20200121150440|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50364-merge-d825e7c25b474e0896/System.Collections.Concurrent.Tests/console.3fc2bb35.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-20T22%2525253A13%2525253A30Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D8o05ZHhDWxYgtA0WoN3F15KRea7Nil7b56el3S6IUoQ%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50364-merge-d825e7c25b474e0896/System.Collections.Concurrent.Tests/core.1001.58?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-20T22%2525253A13%2525253A30Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D8o05ZHhDWxYgtA0WoN3F15KRea7Nil7b56el3S6IUoQ%2525253D)|||
|[1059736](https://dev.azure.com/dnceng/public/_build/results?buildId=1059736)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50232](https://github.com/dotnet/runtime/pull/50232)|net6.0-OSX-Debug-x64-CoreCLR_checked-OSX.1013.Amd64.Open|[console.log](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-45a3ef1099a245fb90/System.Collections.Concurrent.Tests/console.e797ad01.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T12%2525253A54%2525253A13Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DkhiiU6Zodb%2525252FiVHGII6LmPNybQDqbnzFUsKgWCqKDvr4%2525253D)|[core dump](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-45a3ef1099a245fb90/System.Collections.Concurrent.Tests/core.71663?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T12%2525253A54%2525253A13Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DkhiiU6Zodb%2525252FiVHGII6LmPNybQDqbnzFUsKgWCqKDvr4%2525253D)||[runclient.py](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-45a3ef1099a245fb90/System.Collections.Concurrent.Tests/565863ce-56fe-4584-88f7-a67563dca23b.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T12%2525253A54%2525253A13Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DkhiiU6Zodb%2525252FiVHGII6LmPNybQDqbnzFUsKgWCqKDvr4%2525253D)|
|[1059736](https://dev.azure.com/dnceng/public/_build/results?buildId=1059736)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50232](https://github.com/dotnet/runtime/pull/50232)|net6.0-windows-Debug-x64-CoreCLR_checked-Windows.10.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-346029e93a2a428abf/System.Collections.Concurrent.Tests/console.728800fa.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A14%2525253A18Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D0wOX%2525252BTl49UOcMEn%2525252BhqhCQo0EtGWdHzzqopQleIsGd44%2525253D)|||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-346029e93a2a428abf/System.Collections.Concurrent.Tests/b9737fbd-080b-4f91-9a18-f5bd0be7488e.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A14%2525253A18Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D0wOX%2525252BTl49UOcMEn%2525252BhqhCQo0EtGWdHzzqopQleIsGd44%2525253D)|
|[1059736](https://dev.azure.com/dnceng/public/_build/results?buildId=1059736)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50232](https://github.com/dotnet/runtime/pull/50232)|net6.0-windows-Release-x86-CoreCLR_checked-Windows.10.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-2b13d8fee8004555a6/System.Collections.Concurrent.Tests/console.86326ae7.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A14%2525253A32Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DJHhbgl9rkvuoHwMQb5TFgMRGUXZ29xIGIDWPm41ETuM%2525253D)|||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-2b13d8fee8004555a6/System.Collections.Concurrent.Tests/884f772d-e1fa-44df-9c4f-3f7fcef92fd9.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A14%2525253A32Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DJHhbgl9rkvuoHwMQb5TFgMRGUXZ29xIGIDWPm41ETuM%2525253D)|
|[1059736](https://dev.azure.com/dnceng/public/_build/results?buildId=1059736)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50232](https://github.com/dotnet/runtime/pull/50232)|net6.0-Linux-Debug-x64-CoreCLR_checked-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-48c772cb4f944f8cb7/System.Collections.Concurrent.Tests/console.24acbaa9.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A18%2525253A31Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DcGRTrWHFTGBguimZ2gQFOy0YrxFNn%2525252BSMi11EZvywDo0%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-48c772cb4f944f8cb7/System.Collections.Concurrent.Tests/core.1000.24283?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A18%2525253A31Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DcGRTrWHFTGBguimZ2gQFOy0YrxFNn%2525252BSMi11EZvywDo0%2525253D)||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-48c772cb4f944f8cb7/System.Collections.Concurrent.Tests/3887d772-ff1b-4ec7-bc96-06ee9b6a6781.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A18%2525253A31Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DcGRTrWHFTGBguimZ2gQFOy0YrxFNn%2525252BSMi11EZvywDo0%2525253D)|
|[1059736](https://dev.azure.com/dnceng/public/_build/results?buildId=1059736)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50232](https://github.com/dotnet/runtime/pull/50232)|net6.0-Linux-Debug-x64-CoreCLR_checked-(Alpine.312.Amd64.Open)ubuntu.1604.amd64.open@mcr.microsoft.com/dotnet-buildtools/prereqs:alpine-3.12-helix-20200602002622-e06dc59|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-90fbf38a0d52471e8e/System.Collections.Concurrent.Tests/console.abd022d6.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A20%2525253A07Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DoG23dOIEq3wxiusAHZhzlFg1t5g5otVkPtZ5bZ6jsbU%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-90fbf38a0d52471e8e/System.Collections.Concurrent.Tests/core.1000.93?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A20%2525253A07Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DoG23dOIEq3wxiusAHZhzlFg1t5g5otVkPtZ5bZ6jsbU%2525253D)||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-90fbf38a0d52471e8e/System.Collections.Concurrent.Tests/155e4e65-06f4-4a40-b9c8-5222e4c004a4.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A20%2525253A07Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DoG23dOIEq3wxiusAHZhzlFg1t5g5otVkPtZ5bZ6jsbU%2525253D)|
|[1059736](https://dev.azure.com/dnceng/public/_build/results?buildId=1059736)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50232](https://github.com/dotnet/runtime/pull/50232)|net6.0-Linux-Release-arm-CoreCLR_checked-(Ubuntu.1804.Arm32.Open)Ubuntu.1804.Armarch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-18.04-helix-arm32v7-bfcd90a-20200121150440|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-aaa100ee744341ddb8/System.Collections.Concurrent.Tests/console.a05c9995.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A19%2525253A44Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DOe32JhKCEXYGdT%2525252BOaWrnjyukkIOeVtN9ALSWoQHiAwg%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-aaa100ee744341ddb8/System.Collections.Concurrent.Tests/core.1001.58?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A19%2525253A44Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DOe32JhKCEXYGdT%2525252BOaWrnjyukkIOeVtN9ALSWoQHiAwg%2525253D)||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-aaa100ee744341ddb8/System.Collections.Concurrent.Tests/12b3f7ab-c334-4d2c-9854-247e3c2aae9f.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A19%2525253A44Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DOe32JhKCEXYGdT%2525252BOaWrnjyukkIOeVtN9ALSWoQHiAwg%2525253D)|
|[1059736](https://dev.azure.com/dnceng/public/_build/results?buildId=1059736)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50232](https://github.com/dotnet/runtime/pull/50232)|net6.0-Linux-Release-arm-CoreCLR_checked-(Alpine.312.Arm32.Open)Ubuntu.1804.ArmArch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:alpine-3.12-helix-arm32v7-20200908125213-5bece88|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-26f36e9598524ef2ad/System.Collections.Concurrent.Tests/console.653041c4.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A17%2525253A55Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DUln8DGilwvxu4e5aCgFbba%2525252FRChYwFcjF6pJXgyiIxmo%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-26f36e9598524ef2ad/System.Collections.Concurrent.Tests/core.1001.163?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A17%2525253A55Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DUln8DGilwvxu4e5aCgFbba%2525252FRChYwFcjF6pJXgyiIxmo%2525253D)||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-26f36e9598524ef2ad/System.Collections.Concurrent.Tests/51e6b97a-78b2-40b2-ab78-11f450596718.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A17%2525253A55Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DUln8DGilwvxu4e5aCgFbba%2525252FRChYwFcjF6pJXgyiIxmo%2525253D)|
|[1059736](https://dev.azure.com/dnceng/public/_build/results?buildId=1059736)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50232](https://github.com/dotnet/runtime/pull/50232)|net6.0-Linux-Release-arm64-CoreCLR_checked-(Alpine.312.Arm64.Open)Ubuntu.1804.ArmArch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:alpine-3.12-helix-arm64v8-20200602002604-25f8a3e|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-da57be80caf54c15ba/System.Collections.Concurrent.Tests/console.80450663.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A19%2525253A59Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dt7EZuehnBAHRHhZRuBfR9aL9MeKcz9gi8mo7Y0i0Ce4%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-da57be80caf54c15ba/System.Collections.Concurrent.Tests/core.1001.92?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A19%2525253A59Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dt7EZuehnBAHRHhZRuBfR9aL9MeKcz9gi8mo7Y0i0Ce4%2525253D)||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-da57be80caf54c15ba/System.Collections.Concurrent.Tests/554a91c5-7ffb-4846-999d-ae58cf103a40.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A19%2525253A59Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dt7EZuehnBAHRHhZRuBfR9aL9MeKcz9gi8mo7Y0i0Ce4%2525253D)|
|[1050603](https://dev.azure.com/dnceng/public/_build/results?buildId=1050603)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49990](https://github.com/dotnet/runtime/pull/49990)|net6.0-windows-Debug-x64-CoreCLR_checked-Windows.10.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49990-merge-02cbebde102743849c/System.Collections.Concurrent.Tests/console.fcac267c.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-11T15%2525253A14%2525253A32Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D8%2525252BmbqoXRxfq8v2dbaWyxmU8QeC%2525252BKaYaFZZQSa9ZVzek%2525253D)||||
|[1050314](https://dev.azure.com/dnceng/public/_build/results?buildId=1050314)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 48601](https://github.com/dotnet/runtime/pull/48601)|net6.0-windows-Debug-x64-CoreCLR_checked-Windows.10.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48601-merge-fb17b485b75f405ab1/System.Collections.Concurrent.Tests/console.79fe19ea.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-11T10%2525253A48%2525253A01Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DSKzqkPxU7bUcunNzfT462wE2dI2gC%2525252FPZb8DavL2WVRo%2525253D)|||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48601-merge-fb17b485b75f405ab1/System.Collections.Concurrent.Tests/12f07b9a-80a8-4604-b7ab-6e2f8557c1ec.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-11T10%2525253A48%2525253A01Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DSKzqkPxU7bUcunNzfT462wE2dI2gC%2525252FPZb8DavL2WVRo%2525253D)|
|[1050243](https://dev.azure.com/dnceng/public/_build/results?buildId=1050243)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49635](https://github.com/dotnet/runtime/pull/49635)|net6.0-Browser-Release-wasm-Mono_Release-normal-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-1671eec60ca84d4aa9/System.Collections.Concurrent.Tests/console.7c959233.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-11T08%2525253A22%2525253A10Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DnL8yS8O13PmDdSZgQ99vTyJyilki27iBbcluFyOG8dk%2525253D)||||
|[1050243](https://dev.azure.com/dnceng/public/_build/results?buildId=1050243)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49635](https://github.com/dotnet/runtime/pull/49635)|net6.0-Browser-Release-wasm-Mono_Release-wasmtestonbrowser-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-413023f2336e44a3bd/System.Collections.Concurrent.Tests/console.fd119df1.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-11T08%2525253A22%2525253A10Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DXiXe6plx4fk5Q38IDpe8V5qfdfmZ8eA%2525252FSkW9rw7WHB4%2525253D)||||
|[1047146](https://dev.azure.com/dnceng/public/_build/results?buildId=1047146)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49635](https://github.com/dotnet/runtime/pull/49635)|net6.0-Browser-Release-wasm-Mono_Release-normal-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-e6462f4ace674a96ad/System.Collections.Concurrent.Tests/console.bf6c0eca.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-08T08%2525253A21%2525253A50Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D%2525252Bu0D87zQ26fBUoL2r1U5BbUCURdHW9UZ8M7zgvvqAGA%2525253D)|||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-e6462f4ace674a96ad/System.Collections.Concurrent.Tests/ff605c9e-8bbf-4d1a-97da-2e33930a8d78.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-08T08%2525253A21%2525253A50Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D%2525252Bu0D87zQ26fBUoL2r1U5BbUCURdHW9UZ8M7zgvvqAGA%2525253D)|
|[1047146](https://dev.azure.com/dnceng/public/_build/results?buildId=1047146)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49635](https://github.com/dotnet/runtime/pull/49635)|net6.0-Browser-Release-wasm-Mono_Release-wasmtestonbrowser-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-239d9422f70f43d79e/System.Collections.Concurrent.Tests/console.1cdae36e.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-08T08%2525253A21%2525253A50Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dhn%2525252B4Is1%2525252BQZDe%2525252FcVYFxGNZAcX1%2525252FgQfSP4ifrJVi9zjT4%2525253D)||||
|[1047146](https://dev.azure.com/dnceng/public/_build/results?buildId=1047146)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49635](https://github.com/dotnet/runtime/pull/49635)|net6.0-Browser-Release-wasm-Mono_Release-normal-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-de72bd7c485f45ac83/System.Collections.Concurrent.Tests/console.6f8b2525.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-08T13%2525253A43%2525253A54Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DG%2525252BQaO5Ct8AXLe5xNJJZpd8Mrp%2525252FU8WjLlFX5C%2525252FM43dAM%2525253D)||||
|[1047146](https://dev.azure.com/dnceng/public/_build/results?buildId=1047146)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49635](https://github.com/dotnet/runtime/pull/49635)|net6.0-Browser-Release-wasm-Mono_Release-wasmtestonbrowser-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-a559c9bbb4274f66b6/System.Collections.Concurrent.Tests/console.b1482bd7.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-08T13%2525253A43%2525253A54Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dl6tZjN3FQH0%2525252FiWCfe2%2525252FInY2y04U5p1ePI9VqkQYo7Bg%2525253D)||||
|[1046026](https://dev.azure.com/dnceng/public/_build/results?buildId=1046026)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49740](https://github.com/dotnet/runtime/pull/49740)|net6.0-Linux-Debug-x64-mono_interpreter_release-Debian.9.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-f281da3bddda4cba81/System.Collections.Concurrent.Tests/console.8bd80016.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A57%2525253A22Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Do1yNnl%2525252Bu%2525252FZUA1NbQhhbnRJwgXcJ2uisO0aYMxwg6yQE%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-f281da3bddda4cba81/System.Collections.Concurrent.Tests/core.1000.2570?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A57%2525253A22Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Do1yNnl%2525252Bu%2525252FZUA1NbQhhbnRJwgXcJ2uisO0aYMxwg6yQE%2525253D)|||
|[1046026](https://dev.azure.com/dnceng/public/_build/results?buildId=1046026)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49740](https://github.com/dotnet/runtime/pull/49740)|net6.0-Linux-Debug-x64-Mono_release-(Centos.8.Amd64.Open)Ubuntu.1604.Amd64.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:centos-8-helix-20201229003624-c1bf759|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-fb75fad0ca24470bbf/System.Collections.Concurrent.Tests/console.825537d2.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A56%2525253A25Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DysGhSLzWwXnlAbaQJK3Gufc%2525252BC8AyPjWpaPSQnDvHzLc%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-fb75fad0ca24470bbf/System.Collections.Concurrent.Tests/core.1000.24?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A56%2525253A25Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DysGhSLzWwXnlAbaQJK3Gufc%2525252BC8AyPjWpaPSQnDvHzLc%2525253D)|||
|[1046026](https://dev.azure.com/dnceng/public/_build/results?buildId=1046026)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49740](https://github.com/dotnet/runtime/pull/49740)|net6.0-Linux-Debug-x64-Mono_release-RedHat.7.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-be80e3e7fca543f687/System.Collections.Concurrent.Tests/console.053ed2b5.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A56%2525253A26Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D8iNuuUoICMCNvRjvljfQRFg53yc69ygKgsAkj6sf2uU%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-be80e3e7fca543f687/System.Collections.Concurrent.Tests/core.1000.28037?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A56%2525253A26Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D8iNuuUoICMCNvRjvljfQRFg53yc69ygKgsAkj6sf2uU%2525253D)||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-be80e3e7fca543f687/System.Collections.Concurrent.Tests/914c8fd5-fcb4-45cb-8ea0-e9ff8c266de9.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A56%2525253A26Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D8iNuuUoICMCNvRjvljfQRFg53yc69ygKgsAkj6sf2uU%2525253D)|
|[1046026](https://dev.azure.com/dnceng/public/_build/results?buildId=1046026)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49740](https://github.com/dotnet/runtime/pull/49740)|net6.0-Linux-Debug-x64-Mono_release-(Debian.10.Amd64.Open)ubuntu.1604.amd64.open@mcr.microsoft.com/dotnet-buildtools/prereqs:debian-10-helix-amd64-bfcd90a-20200121150006|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-4e3e35667cd8459f92/System.Collections.Concurrent.Tests/console.47a4d8d5.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A56%2525253A27Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dg3w4mACBzmwGpUROwMiumeTDdhsAqI4lRbiQWj%2525252B282M%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-4e3e35667cd8459f92/System.Collections.Concurrent.Tests/core.1000.24?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A56%2525253A27Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dg3w4mACBzmwGpUROwMiumeTDdhsAqI4lRbiQWj%2525252B282M%2525253D)|||
|[1046026](https://dev.azure.com/dnceng/public/_build/results?buildId=1046026)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49740](https://github.com/dotnet/runtime/pull/49740)|net6.0-Linux-Debug-x64-Mono_release-Ubuntu.1604.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-f59584358e094c80a9/System.Collections.Concurrent.Tests/console.74c422fc.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A56%2525253A28Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DpMNuM8rmTvj3iUOv1iYa1%2525252Fd616LM1PIpjiw9WnPWBxk%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-f59584358e094c80a9/System.Collections.Concurrent.Tests/core.1000.29818?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A56%2525253A28Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DpMNuM8rmTvj3iUOv1iYa1%2525252Fd616LM1PIpjiw9WnPWBxk%2525253D)|||
|[1046026](https://dev.azure.com/dnceng/public/_build/results?buildId=1046026)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49740](https://github.com/dotnet/runtime/pull/49740)|net6.0-Linux-Debug-x64-Mono_release-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-ed16de0ec9cf4e23ac/System.Collections.Concurrent.Tests/console.222b49ec.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A56%2525253A28Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DU60tx4bLxRsaArR9EGRUpA5Sc6olNKxVzhONcwosRJo%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-ed16de0ec9cf4e23ac/System.Collections.Concurrent.Tests/core.1000.28689?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A56%2525253A28Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DU60tx4bLxRsaArR9EGRUpA5Sc6olNKxVzhONcwosRJo%2525253D)|||
|[1046026](https://dev.azure.com/dnceng/public/_build/results?buildId=1046026)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49740](https://github.com/dotnet/runtime/pull/49740)|net6.0-Linux-Debug-x64-Mono_release-SLES.15.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-ee75153850d743fb93/System.Collections.Concurrent.Tests/console.b6a0fa43.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A56%2525253A29Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DIq1qb3FqADnzzRvZQPSy4TiivUGZ1R%2525252FdL6EID0wlGg4%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-ee75153850d743fb93/System.Collections.Concurrent.Tests/core.1000.30228?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A56%2525253A29Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DIq1qb3FqADnzzRvZQPSy4TiivUGZ1R%2525252FdL6EID0wlGg4%2525253D)|||
|[1046026](https://dev.azure.com/dnceng/public/_build/results?buildId=1046026)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49740](https://github.com/dotnet/runtime/pull/49740)|net6.0-Linux-Debug-x64-Mono_release-(Fedora.30.Amd64.Open)ubuntu.1604.amd64.open@mcr.microsoft.com/dotnet-buildtools/prereqs:fedora-30-helix-20200512010621-4f8cef7|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-32d847ae8652449a8e/System.Collections.Concurrent.Tests/console.6ba52942.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A56%2525253A30Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DEiYs9Oqq6XNJjJRAHG2HGNv%2525252BVtOcWLLSzHCjQ97iQHg%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-32d847ae8652449a8e/System.Collections.Concurrent.Tests/core.1000.24?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A56%2525253A30Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DEiYs9Oqq6XNJjJRAHG2HGNv%2525252BVtOcWLLSzHCjQ97iQHg%2525253D)|||
|[1046026](https://dev.azure.com/dnceng/public/_build/results?buildId=1046026)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49740](https://github.com/dotnet/runtime/pull/49740)|net6.0-Linux-Debug-arm64-Mono_release-(Ubuntu.1804.ArmArch.Open)Ubuntu.1804.ArmArch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-16.04-helix-arm64v8-20210106155927-56c6673|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-b59fe3db196841458a/System.Collections.Concurrent.Tests/console.8276161e.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A57%2525253A16Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dhx1p02%2525252BlhjWuT3jJqZMRg0MvIZWv6%2525252BSlkPwzVhYEzq4%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-b59fe3db196841458a/System.Collections.Concurrent.Tests/core.1001.99?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A57%2525253A16Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dhx1p02%2525252BlhjWuT3jJqZMRg0MvIZWv6%2525252BSlkPwzVhYEzq4%2525253D)|||
|[1046026](https://dev.azure.com/dnceng/public/_build/results?buildId=1046026)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49740](https://github.com/dotnet/runtime/pull/49740)|net6.0-OSX-Debug-x64-Mono_release-OSX.1014.Amd64.Open|[console.log](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-6666a53618ef4f8684/System.Collections.Concurrent.Tests/console.c8f64f93.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T23%2525253A43%2525253A58Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DNm7nq%2525252FzROt%2525252FEiXoWUIQAvUXLkfsLHbYELiAXGNCIYXg%2525253D)||||
|[1046026](https://dev.azure.com/dnceng/public/_build/results?buildId=1046026)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49740](https://github.com/dotnet/runtime/pull/49740)|net6.0-OSX-Debug-x64-Mono_release-OSX.1015.Amd64.Open|[console.log](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-b8b640178d6a4489b7/System.Collections.Concurrent.Tests/console.369e08fd.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T23%2525253A43%2525253A59Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DAiugKC1u0Na1Y6PYjWxx2AeE7Tng4cYWB3AvPxYqOos%2525253D)|[core dump](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-b8b640178d6a4489b7/System.Collections.Concurrent.Tests/core.56932?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T23%2525253A43%2525253A59Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DAiugKC1u0Na1Y6PYjWxx2AeE7Tng4cYWB3AvPxYqOos%2525253D)|||
|[1045420](https://dev.azure.com/dnceng/public/_build/results?buildId=1045420)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49635](https://github.com/dotnet/runtime/pull/49635)|net6.0-Browser-Release-wasm-Mono_Release-normal-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-e56bd622f0114af4a6/System.Collections.Concurrent.Tests/console.7a8685f3.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T13%2525253A57%2525253A39Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D4gF3jitfjc9ymLDnqRZ2JZKumBH8Vww6m8eVXVC%2525252FyEw%2525253D)|||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-e56bd622f0114af4a6/System.Collections.Concurrent.Tests/600bb2a2-d01d-4d3e-9ee5-a51faed64166.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T13%2525253A57%2525253A39Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D4gF3jitfjc9ymLDnqRZ2JZKumBH8Vww6m8eVXVC%2525252FyEw%2525253D)|
|[1045420](https://dev.azure.com/dnceng/public/_build/results?buildId=1045420)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49635](https://github.com/dotnet/runtime/pull/49635)|net6.0-Browser-Release-wasm-Mono_Release-wasmtestonbrowser-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-e621e15fe64e4d2394/System.Collections.Concurrent.Tests/console.d74dfacb.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T13%2525253A57%2525253A40Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DMVGcFuI9hZj%2525252FeKwknCz%2525252ByfrL6Pb%2525252BGtpkAL7iym%2525252Bp580%2525253D)||||
|[1042619](https://dev.azure.com/dnceng/public/_build/results?buildId=1042619)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|Rolling|net5.0-Linux-Release-x64-Mono_release-SLES.15.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-heads-release-50-048ab36c301b4f458b/System.Collections.Concurrent.Tests/console.9c1d4203.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-06T00%2525253A45%2525253A36Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DKbPhmGQI5ebLP8KBzrujplKrHEdMTR%2525252BRNoJd3zCShOc%2525253D)|||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-heads-release-50-048ab36c301b4f458b/System.Collections.Concurrent.Tests/a18ac2d5-3408-4a15-894b-5edc8d5322c8.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-06T00%2525253A45%2525253A36Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DKbPhmGQI5ebLP8KBzrujplKrHEdMTR%2525252BRNoJd3zCShOc%2525253D)|
|[1041040](https://dev.azure.com/dnceng/public/_build/results?buildId=1041040)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49635](https://github.com/dotnet/runtime/pull/49635)|net6.0-Browser-Release-wasm-Mono_Release-normal-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-3ce70eb6f67a4a6eb4/System.Collections.Concurrent.Tests/console.59770a16.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-05T09%2525253A33%2525253A04Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DThqQZfZuZfvw3huuCCzKLRc%2525252BSWfQYYxpeJXAKoZfNdA%2525253D)||||
|[1041040](https://dev.azure.com/dnceng/public/_build/results?buildId=1041040)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49635](https://github.com/dotnet/runtime/pull/49635)|net6.0-Browser-Release-wasm-Mono_Release-wasmtestonbrowser-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-6d316523caa94b43b8/System.Collections.Concurrent.Tests/console.029c7c93.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-05T09%2525253A33%2525253A04Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D4%2525252BNj%2525252BoXAPgf16TuOvGdASuBj5a9HffqGgynw5JWM1f4%2525253D)||||
|[1041040](https://dev.azure.com/dnceng/public/_build/results?buildId=1041040)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49635](https://github.com/dotnet/runtime/pull/49635)|net6.0-Browser-Release-wasm-Mono_Release-normal-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-9d5d21ea67274a4f9d/System.Collections.Concurrent.Tests/console.c099c826.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-05T12%2525253A10%2525253A30Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DDgNvSXBtobyoWwkrwTtSq2MMhdnegvQqFR0U8rSGmJo%2525253D)||||
|[1041040](https://dev.azure.com/dnceng/public/_build/results?buildId=1041040)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49635](https://github.com/dotnet/runtime/pull/49635)|net6.0-Browser-Release-wasm-Mono_Release-wasmtestonbrowser-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-01e1cfb39a9f42628a/System.Collections.Concurrent.Tests/console.59f100ba.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-05T12%2525253A10%2525253A30Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DYnpKz7iSZ4PvYOHC8KtguZNd0yTsZUwPeuz8KmAFcYA%2525253D)||||
|[1041040](https://dev.azure.com/dnceng/public/_build/results?buildId=1041040)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49635](https://github.com/dotnet/runtime/pull/49635)|net6.0-Browser-Release-wasm-Mono_Release-normal-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-9ef2020a5224485384/System.Collections.Concurrent.Tests/console.fb491c44.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-05T13%2525253A56%2525253A34Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DSOAYh3V8Ln3hklX%2525252FzojpLhcW3juK19PIepwdBSA%2525252BsMY%2525253D)||||
|[1041040](https://dev.azure.com/dnceng/public/_build/results?buildId=1041040)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49635](https://github.com/dotnet/runtime/pull/49635)|net6.0-Browser-Release-wasm-Mono_Release-wasmtestonbrowser-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-16e5cdde089747e29a/System.Collections.Concurrent.Tests/console.d0de0ebd.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-05T13%2525253A56%2525253A34Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DW84xhojaDHon3eon0MQa0FIuoqxjd8mroJGU7tKgeVg%2525253D)||||
|[1039858](https://dev.azure.com/dnceng/public/_build/results?buildId=1039858)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 43706](https://github.com/dotnet/runtime/pull/43706)|net6.0-Linux-Release-arm-CoreCLR_checked-(Alpine.312.Arm32.Open)Ubuntu.1804.ArmArch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:alpine-3.12-helix-arm32v7-20200908125213-5bece88|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-43706-merge-0991f9705d6942948b/System.Collections.Concurrent.Tests/console.d9bf0c7d.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-04T19%2525253A53%2525253A43Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DE65F1We3YtKAtBYU%2525252FGb%2525252FRSS891yKYv8XbLCbFjtYOf8%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-43706-merge-0991f9705d6942948b/System.Collections.Concurrent.Tests/core.1001.21?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-04T19%2525253A53%2525253A43Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DE65F1We3YtKAtBYU%2525252FGb%2525252FRSS891yKYv8XbLCbFjtYOf8%2525253D)|||
|[1039858](https://dev.azure.com/dnceng/public/_build/results?buildId=1039858)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 43706](https://github.com/dotnet/runtime/pull/43706)|net6.0-Linux-Release-arm-CoreCLR_checked-(Ubuntu.1804.Arm32.Open)Ubuntu.1804.Armarch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-18.04-helix-arm32v7-bfcd90a-20200121150440|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-43706-merge-4c99d9400f144427a4/System.Collections.Concurrent.Tests/console.c84cc306.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-04T19%2525253A55%2525253A11Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D5VzkrWNivcL7EY8H6%2525252B%2525252FhWyKtKLPkNQhFUwzaIxgYoOI%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-43706-merge-4c99d9400f144427a4/System.Collections.Concurrent.Tests/core.1001.23?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-04T19%2525253A55%2525253A11Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D5VzkrWNivcL7EY8H6%2525252B%2525252FhWyKtKLPkNQhFUwzaIxgYoOI%2525253D)|||
|[1038213](https://dev.azure.com/dnceng/public/_build/results?buildId=1038213)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49511](https://github.com/dotnet/runtime/pull/49511)|net6.0-Browser-Release-wasm-Mono_Release-normal-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49511-merge-ba72b9f43b514e1c93/System.Collections.Concurrent.Tests/console.9ea1688c.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-03T00%2525253A35%2525253A23Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DTsolav42H%2525252BiQ6soV%2525252F%2525252Fi1Zv%2525252BKlGu0A8jcQu21s4w2y3g%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49511-merge-ba72b9f43b514e1c93/System.Collections.Concurrent.Tests/core.1000.12255?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-03T00%2525253A35%2525253A23Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DTsolav42H%2525252BiQ6soV%2525252F%2525252Fi1Zv%2525252BKlGu0A8jcQu21s4w2y3g%2525253D)|[test results](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49511-merge-ba72b9f43b514e1c93/System.Collections.Concurrent.Tests/xharness-output/testResults.xml?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-03T00%2525253A35%2525253A23Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DTsolav42H%2525252BiQ6soV%2525252F%2525252Fi1Zv%2525252BKlGu0A8jcQu21s4w2y3g%2525253D)||
|[1038213](https://dev.azure.com/dnceng/public/_build/results?buildId=1038213)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49511](https://github.com/dotnet/runtime/pull/49511)|net6.0-Browser-Release-wasm-Mono_Release-normal-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49511-merge-b2df012787dd4d2285/System.Collections.Concurrent.Tests/console.913cd6bf.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-03T05%2525253A11%2525253A36Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DsU12Qr9TlO2puZnNNTJFZw%2525252FJH6LXIbqg8RUBf2OcKzw%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49511-merge-b2df012787dd4d2285/System.Collections.Concurrent.Tests/core.1000.1368?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-03T05%2525253A11%2525253A36Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DsU12Qr9TlO2puZnNNTJFZw%2525252FJH6LXIbqg8RUBf2OcKzw%2525253D)|[test results](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49511-merge-b2df012787dd4d2285/System.Collections.Concurrent.Tests/xharness-output/testResults.xml?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-03T05%2525253A11%2525253A36Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DsU12Qr9TlO2puZnNNTJFZw%2525252FJH6LXIbqg8RUBf2OcKzw%2525253D)||
|[1033540](https://dev.azure.com/dnceng/public/_build/results?buildId=1033540)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 48601](https://github.com/dotnet/runtime/pull/48601)|net6.0-windows-Debug-x64-CoreCLR_checked-Windows.10.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48601-merge-d947adca30b54b7989/System.Collections.Concurrent.Tests/console.d3483445.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-31T14%2525253A17%2525253A53Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DcxByVu6kJwED7XlHCB4xbTPSUXlINqZ21gh3%2525252BfURQLc%2525253D)|||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48601-merge-d947adca30b54b7989/System.Collections.Concurrent.Tests/77e2f011-2bbb-4404-bb51-dd885b00b6c7.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-31T14%2525253A17%2525253A53Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DcxByVu6kJwED7XlHCB4xbTPSUXlINqZ21gh3%2525252BfURQLc%2525253D)|
|[1027303](https://dev.azure.com/dnceng/public/_build/results?buildId=1027303)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49072](https://github.com/dotnet/runtime/pull/49072)|net6.0-Browser-Release-wasm-Mono_Release-normal-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49072-merge-111a38b2be9a4001b0/System.Collections.Concurrent.Tests/console.32b67e25.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-28T17%2525253A52%2525253A14Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dw%2525252BquKmFtcSb0YbfMB3NUYXhOIyerL6hXD%2525252BEuntVOiKc%2525253D)||||
|[1027303](https://dev.azure.com/dnceng/public/_build/results?buildId=1027303)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49072](https://github.com/dotnet/runtime/pull/49072)|net6.0-Browser-Release-wasm-Mono_Release-wasmtestonbrowser-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49072-merge-9417572304c14c40a6/System.Collections.Concurrent.Tests/console.d0ae357c.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-28T17%2525253A52%2525253A14Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DXMt%2525252F3bSIFV4DWDloN8fQJdE%2525252BhF28gEX5GQvALWtkT%2525252FA%2525253D)||||
|[1027001](https://dev.azure.com/dnceng/public/_build/results?buildId=1027001)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49072](https://github.com/dotnet/runtime/pull/49072)|net6.0-Browser-Release-wasm-Mono_Release-normal-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49072-merge-b68cbb53ef004c7ca2/System.Collections.Concurrent.Tests/console.1ad51e1c.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-28T12%2525253A35%2525253A31Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DC0iCjpffpk29c0CeM0U6eDZcSwftk5hzeq5RKE9LfCo%2525253D)|||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49072-merge-b68cbb53ef004c7ca2/System.Collections.Concurrent.Tests/adfcaaaa-58d0-40c2-b430-880a5d0241f3.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-28T12%2525253A35%2525253A31Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DC0iCjpffpk29c0CeM0U6eDZcSwftk5hzeq5RKE9LfCo%2525253D)|
|[1026754](https://dev.azure.com/dnceng/public/_build/results?buildId=1026754)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49256](https://github.com/dotnet/runtime/pull/49256)|net6.0-OSX-Debug-x64-CoreCLR_checked-OSX.1013.Amd64.Open|[console.log](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49256-merge-8961a1dfeb3f49b1af/System.Collections.Concurrent.Tests/console.fa0f61bc.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-28T01%2525253A28%2525253A59Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DZO6FIpPN7%2525252BzTLMQYTbd%2525252FF%2525252B%2525252F45CuXQ4OMqZaHYwrnD4s%2525253D)|||[runclient.py](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49256-merge-8961a1dfeb3f49b1af/System.Collections.Concurrent.Tests/9dc90140-fe04-44b3-b84f-12274bb1e2b6.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-28T01%2525253A28%2525253A59Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DZO6FIpPN7%2525252BzTLMQYTbd%2525252FF%2525252B%2525252F45CuXQ4OMqZaHYwrnD4s%2525253D)|
|[1026754](https://dev.azure.com/dnceng/public/_build/results?buildId=1026754)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49256](https://github.com/dotnet/runtime/pull/49256)|net6.0-Linux-Release-arm64-CoreCLR_checked-(Alpine.312.Arm64.Open)Ubuntu.1804.ArmArch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:alpine-3.12-helix-arm64v8-20200602002604-25f8a3e|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49256-merge-ea3591a1f1b24871b6/System.Collections.Concurrent.Tests/console.35ae230f.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-28T01%2525253A28%2525253A22Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DbIhPJ%2525252FZcF9JQMQtO10Zdd8MMcNvlgjg8CK6oklbEIhE%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49256-merge-ea3591a1f1b24871b6/System.Collections.Concurrent.Tests/core.1001.22?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-28T01%2525253A28%2525253A22Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DbIhPJ%2525252FZcF9JQMQtO10Zdd8MMcNvlgjg8CK6oklbEIhE%2525253D)|||
|[1026754](https://dev.azure.com/dnceng/public/_build/results?buildId=1026754)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49256](https://github.com/dotnet/runtime/pull/49256)|net6.0-Linux-Debug-x64-CoreCLR_checked-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49256-merge-6ddbbdbb37c0468baa/System.Collections.Concurrent.Tests/console.1c815644.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-28T01%2525253A35%2525253A16Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DSGj1PuHPJisMxRZIMHfxZeArhReO9jUZHrepGbjZg2M%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49256-merge-6ddbbdbb37c0468baa/System.Collections.Concurrent.Tests/core.1000.28187?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-28T01%2525253A35%2525253A16Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DSGj1PuHPJisMxRZIMHfxZeArhReO9jUZHrepGbjZg2M%2525253D)|||
|[1026754](https://dev.azure.com/dnceng/public/_build/results?buildId=1026754)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49256](https://github.com/dotnet/runtime/pull/49256)|net6.0-Linux-Debug-x64-CoreCLR_checked-(Alpine.312.Amd64.Open)ubuntu.1604.amd64.open@mcr.microsoft.com/dotnet-buildtools/prereqs:alpine-3.12-helix-20200602002622-e06dc59|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49256-merge-65b68c2c7fcd4d4a88/System.Collections.Concurrent.Tests/console.76b20502.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-28T01%2525253A43%2525253A27Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dzk%2525252FfJC4irMJDNIk8jHJStJXJGPb%2525252FGMSM3WlannkGQws%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49256-merge-65b68c2c7fcd4d4a88/System.Collections.Concurrent.Tests/core.1000.21?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-28T01%2525253A43%2525253A27Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dzk%2525252FfJC4irMJDNIk8jHJStJXJGPb%2525252FGMSM3WlannkGQws%2525253D)|||
|[1025834](https://dev.azure.com/dnceng/public/_build/results?buildId=1025834)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49260](https://github.com/dotnet/runtime/pull/49260)|net6.0-OSX-Debug-x64-Mono_release-OSX.1014.Amd64.Open|[console.log](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-783335307f894e588b/System.Collections.Concurrent.Tests/console.3354011b.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A16%2525253A46Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D%2525252FCIh7rjWoZxlTTmkSTs%2525252F0kWR93z4kuDZnVGvb%2525252FU6V2g%2525253D)|[core dump](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-783335307f894e588b/System.Collections.Concurrent.Tests/core.1369?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A16%2525253A46Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D%2525252FCIh7rjWoZxlTTmkSTs%2525252F0kWR93z4kuDZnVGvb%2525252FU6V2g%2525253D)|||
|[1025834](https://dev.azure.com/dnceng/public/_build/results?buildId=1025834)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49260](https://github.com/dotnet/runtime/pull/49260)|net6.0-OSX-Debug-x64-Mono_release-OSX.1015.Amd64.Open|[console.log](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-2efacd88ec5c41f692/System.Collections.Concurrent.Tests/console.2312a5ee.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A16%2525253A47Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DyBIkzjAszLtu%2525252FUkGzHu04OkxnfKnmvYlBFYt4VRdtJo%2525253D)|[core dump](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-2efacd88ec5c41f692/System.Collections.Concurrent.Tests/core.29623?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A16%2525253A47Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DyBIkzjAszLtu%2525252FUkGzHu04OkxnfKnmvYlBFYt4VRdtJo%2525253D)|||
|[1025834](https://dev.azure.com/dnceng/public/_build/results?buildId=1025834)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49260](https://github.com/dotnet/runtime/pull/49260)|net6.0-Linux-Debug-x64-Mono_release-(Centos.8.Amd64.Open)Ubuntu.1604.Amd64.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:centos-8-helix-20201229003624-c1bf759|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-f90ba428f1a54000a7/System.Collections.Concurrent.Tests/console.1b95d0b2.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A17%2525253A40Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dne%2525252BE1xNmdkrf7tQz8RH0BGJwt8kaq7I5Hk2QkJlR0ZI%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-f90ba428f1a54000a7/System.Collections.Concurrent.Tests/core.1000.25?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A17%2525253A40Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dne%2525252BE1xNmdkrf7tQz8RH0BGJwt8kaq7I5Hk2QkJlR0ZI%2525253D)|||
|[1025834](https://dev.azure.com/dnceng/public/_build/results?buildId=1025834)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49260](https://github.com/dotnet/runtime/pull/49260)|net6.0-Linux-Debug-x64-Mono_release-RedHat.7.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-f0834e388fb24420ab/System.Collections.Concurrent.Tests/console.a769368a.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A17%2525253A41Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DzYn7t%2525252FaCmLvQrRjxxiBYJ%2525252B5no34ugyeRdBNjFdDiu1o%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-f0834e388fb24420ab/System.Collections.Concurrent.Tests/core.1000.8190?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A17%2525253A41Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DzYn7t%2525252FaCmLvQrRjxxiBYJ%2525252B5no34ugyeRdBNjFdDiu1o%2525253D)|||
|[1025834](https://dev.azure.com/dnceng/public/_build/results?buildId=1025834)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49260](https://github.com/dotnet/runtime/pull/49260)|net6.0-Linux-Debug-x64-Mono_release-(Debian.10.Amd64.Open)ubuntu.1604.amd64.open@mcr.microsoft.com/dotnet-buildtools/prereqs:debian-10-helix-amd64-bfcd90a-20200121150006|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-b66e8d5b3c6448c694/System.Collections.Concurrent.Tests/console.d3cdca48.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A17%2525253A41Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DT8u3P6mQ3949TSZsPhPqkAMA8thrEFGr2N2xyd6C5iA%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-b66e8d5b3c6448c694/System.Collections.Concurrent.Tests/core.1000.23?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A17%2525253A41Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DT8u3P6mQ3949TSZsPhPqkAMA8thrEFGr2N2xyd6C5iA%2525253D)|||
|[1025834](https://dev.azure.com/dnceng/public/_build/results?buildId=1025834)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49260](https://github.com/dotnet/runtime/pull/49260)|net6.0-Linux-Debug-x64-Mono_release-Ubuntu.1604.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-cf209af355f146d2b2/System.Collections.Concurrent.Tests/console.a7280af7.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A17%2525253A42Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Db4VlFIX6FYGz6blKBhzBwKvXaOYAlvzt04Cd5BuGAKM%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-cf209af355f146d2b2/System.Collections.Concurrent.Tests/core.1000.15233?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A17%2525253A42Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Db4VlFIX6FYGz6blKBhzBwKvXaOYAlvzt04Cd5BuGAKM%2525253D)|||
|[1025834](https://dev.azure.com/dnceng/public/_build/results?buildId=1025834)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49260](https://github.com/dotnet/runtime/pull/49260)|net6.0-Linux-Debug-x64-Mono_release-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-ab7a010820784dd584/System.Collections.Concurrent.Tests/console.f82c9c3a.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A17%2525253A42Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DG8rAbqKBViTMqVLSwu0XhntEi8CzXu1Uspx5HY8Abtk%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-ab7a010820784dd584/System.Collections.Concurrent.Tests/core.1000.26582?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A17%2525253A42Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DG8rAbqKBViTMqVLSwu0XhntEi8CzXu1Uspx5HY8Abtk%2525253D)|||
|[1025834](https://dev.azure.com/dnceng/public/_build/results?buildId=1025834)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49260](https://github.com/dotnet/runtime/pull/49260)|net6.0-Linux-Debug-x64-Mono_release-SLES.15.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-18fd3dbb2c3d4571af/System.Collections.Concurrent.Tests/console.fd3441c9.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A17%2525253A43Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DqgImJ36aq7gEZrXjSNtlvl9ZMz52VnHRCJBcaq%2525252FZPQw%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-18fd3dbb2c3d4571af/System.Collections.Concurrent.Tests/core.1000.18102?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A17%2525253A43Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DqgImJ36aq7gEZrXjSNtlvl9ZMz52VnHRCJBcaq%2525252FZPQw%2525253D)|||
|[1025834](https://dev.azure.com/dnceng/public/_build/results?buildId=1025834)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49260](https://github.com/dotnet/runtime/pull/49260)|net6.0-Linux-Debug-x64-Mono_release-(Fedora.30.Amd64.Open)ubuntu.1604.amd64.open@mcr.microsoft.com/dotnet-buildtools/prereqs:fedora-30-helix-20200512010621-4f8cef7|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-b66ee9ee98f54bca8e/System.Collections.Concurrent.Tests/console.16bd6427.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A17%2525253A44Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DhVE8yUHziLi4dD%2525252BAd%2525252FWPUXeiQ4GHz8WS8LrZFGApSMA%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-b66ee9ee98f54bca8e/System.Collections.Concurrent.Tests/core.1000.23?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A17%2525253A44Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DhVE8yUHziLi4dD%2525252BAd%2525252FWPUXeiQ4GHz8WS8LrZFGApSMA%2525253D)|||
|[1025834](https://dev.azure.com/dnceng/public/_build/results?buildId=1025834)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49260](https://github.com/dotnet/runtime/pull/49260)|net6.0-windows-Debug-x64-Mono_release-Windows.81.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-36abc307b18a41e59b/System.Collections.Concurrent.Tests/console.83cd51ea.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A27%2525253A51Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DQjX5kOoIXGikk11Mh%2525252BeFnU%2525252B%2525252FY%2525252B%2525252BdUL5he1gjAutgNrc%2525253D)||||
|[1025834](https://dev.azure.com/dnceng/public/_build/results?buildId=1025834)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49260](https://github.com/dotnet/runtime/pull/49260)|net6.0-windows-Debug-x64-Mono_release-Windows.10.Amd64.Server19H1.ES.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-46225614695646ec86/System.Collections.Concurrent.Tests/console.b7c62f44.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A27%2525253A52Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DScmgGGt4K5x9YKspf1rOZgH%2525252BLjYEPUKVTPAWCtFM3mo%2525253D)||||
|[1025677](https://dev.azure.com/dnceng/public/_build/results?buildId=1025677)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49257](https://github.com/dotnet/runtime/pull/49257)|net6.0-windows-Debug-x64-CoreCLR_checked-Windows.10.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49257-merge-eeaea8e2f53645e2ad/System.Collections.Concurrent.Tests/console.af9f88df.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T05%2525253A34%2525253A38Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dr65OcrhT5iRBp1FRD5APYVcMSOUsnrIUu61N%2525252BQha5Rc%2525253D)||||
|[1025677](https://dev.azure.com/dnceng/public/_build/results?buildId=1025677)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49257](https://github.com/dotnet/runtime/pull/49257)|net6.0-windows-Release-x86-CoreCLR_checked-Windows.10.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49257-merge-ff3a300eede04686bb/System.Collections.Concurrent.Tests/console.eb99967f.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T05%2525253A35%2525253A04Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dtv2NMZXBmEJolNfxNqfeQf0vEMNMN2DJyCOLqly%2525252FiQU%2525253D)||||
|[1025677](https://dev.azure.com/dnceng/public/_build/results?buildId=1025677)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49257](https://github.com/dotnet/runtime/pull/49257)|net6.0-OSX-Debug-x64-CoreCLR_checked-OSX.1013.Amd64.Open|[console.log](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49257-merge-107fae19fb11429dac/System.Collections.Concurrent.Tests/console.a027cfd0.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T05%2525253A46%2525253A33Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DFITOpqx4XnI4qwnTMtd5yNWJOyEIXYgEeOH45LCG3Qk%2525253D)||||
|[1025677](https://dev.azure.com/dnceng/public/_build/results?buildId=1025677)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49257](https://github.com/dotnet/runtime/pull/49257)|net6.0-Linux-Release-arm-CoreCLR_checked-(Alpine.312.Arm32.Open)Ubuntu.1804.ArmArch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:alpine-3.12-helix-arm32v7-20200908125213-5bece88|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49257-merge-40d422b49d3f4ffe80/System.Collections.Concurrent.Tests/console.964fbe4d.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T05%2525253A50%2525253A23Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DIetvrvnM83xYeuOAdgDUPvTGBsw11kVAwL7sZ89V6iE%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49257-merge-40d422b49d3f4ffe80/System.Collections.Concurrent.Tests/core.1001.21?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T05%2525253A50%2525253A23Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DIetvrvnM83xYeuOAdgDUPvTGBsw11kVAwL7sZ89V6iE%2525253D)|||
|[1025677](https://dev.azure.com/dnceng/public/_build/results?buildId=1025677)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49257](https://github.com/dotnet/runtime/pull/49257)|net6.0-Linux-Release-arm-CoreCLR_checked-(Ubuntu.1804.Arm32.Open)Ubuntu.1804.Armarch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-18.04-helix-arm32v7-bfcd90a-20200121150440|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49257-merge-e914822b1b70411f80/System.Collections.Concurrent.Tests/console.2672c8ea.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T05%2525253A50%2525253A32Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DY7INdlTzqiJIXxFWxZFtGsiMN2IMzyuDDn0R%2525252BbOlnvU%2525253D)||||
|[1025677](https://dev.azure.com/dnceng/public/_build/results?buildId=1025677)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49257](https://github.com/dotnet/runtime/pull/49257)|net6.0-Linux-Release-arm64-CoreCLR_checked-(Alpine.312.Arm64.Open)Ubuntu.1804.ArmArch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:alpine-3.12-helix-arm64v8-20200602002604-25f8a3e|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49257-merge-007d240f322f4aa0a8/System.Collections.Concurrent.Tests/console.10ee8795.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T05%2525253A51%2525253A08Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D2AhZWG9bxZK%2525252B5U0w6PdVjCYZdvmv1h42ell%2525252FqRp7lNs%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49257-merge-007d240f322f4aa0a8/System.Collections.Concurrent.Tests/core.1001.21?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T05%2525253A51%2525253A08Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D2AhZWG9bxZK%2525252B5U0w6PdVjCYZdvmv1h42ell%2525252FqRp7lNs%2525253D)||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49257-merge-007d240f322f4aa0a8/System.Collections.Concurrent.Tests/1fdf7d71-d569-4a47-92d0-cd0967009c12.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T05%2525253A51%2525253A08Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D2AhZWG9bxZK%2525252B5U0w6PdVjCYZdvmv1h42ell%2525252FqRp7lNs%2525253D)|
|[1025677](https://dev.azure.com/dnceng/public/_build/results?buildId=1025677)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49257](https://github.com/dotnet/runtime/pull/49257)|net6.0-Linux-Debug-x64-CoreCLR_checked-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49257-merge-d666932561e644b992/System.Collections.Concurrent.Tests/console.df3c455c.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T06%2525253A02%2525253A37Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DByCS2TAWkzSWfaj%2525252FC0N9hB6OEljw6eNhG4tGMvYir%2525252FU%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49257-merge-d666932561e644b992/System.Collections.Concurrent.Tests/core.1000.3055?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T06%2525253A02%2525253A37Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DByCS2TAWkzSWfaj%2525252FC0N9hB6OEljw6eNhG4tGMvYir%2525252FU%2525253D)|||
|[1025677](https://dev.azure.com/dnceng/public/_build/results?buildId=1025677)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49257](https://github.com/dotnet/runtime/pull/49257)|net6.0-Linux-Debug-x64-CoreCLR_checked-(Alpine.312.Amd64.Open)ubuntu.1604.amd64.open@mcr.microsoft.com/dotnet-buildtools/prereqs:alpine-3.12-helix-20200602002622-e06dc59|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49257-merge-63d6bad1f5a54787ac/System.Collections.Concurrent.Tests/console.5b78d89c.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T06%2525253A09%2525253A59Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D1%2525252BEj928CR%2525252FasltQJT2KECfDyQWGqJLQWBgMmdFdfA5A%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49257-merge-63d6bad1f5a54787ac/System.Collections.Concurrent.Tests/core.1000.21?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T06%2525253A09%2525253A59Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D1%2525252BEj928CR%2525252FasltQJT2KECfDyQWGqJLQWBgMmdFdfA5A%2525253D)|||
|[1024591](https://dev.azure.com/dnceng/public/_build/results?buildId=1024591)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 48601](https://github.com/dotnet/runtime/pull/48601)|net6.0-windows-Debug-x64-CoreCLR_checked-Windows.10.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48601-merge-7bba3f65c3ed4c9fae/System.Collections.Concurrent.Tests/console.e739d43f.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-25T15%2525253A29%2525253A08Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DCM85nPPgKrvJ2lWpodhrwwXpcsXAHLPzyfY8VHYYE0o%2525253D)|||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48601-merge-7bba3f65c3ed4c9fae/System.Collections.Concurrent.Tests/b4d88ec6-9bbe-4735-8806-1afa90a49b4b.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-25T15%2525253A29%2525253A08Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DCM85nPPgKrvJ2lWpodhrwwXpcsXAHLPzyfY8VHYYE0o%2525253D)|
|[1024540](https://dev.azure.com/dnceng/public/_build/results?buildId=1024540)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 47269](https://github.com/dotnet/runtime/pull/47269)|net6.0-Linux-Release-arm-CoreCLR_checked-(Ubuntu.1804.Arm32.Open)Ubuntu.1804.Armarch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-18.04-helix-arm32v7-bfcd90a-20200121150440|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-47269-merge-3cd69f0f2f4a434181/System.Collections.Concurrent.Tests/console.0badc198.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-25T14%2525253A29%2525253A32Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D1p%2525252BykK7A18qOraMP6wl0vPLf2At4w5moxss0bb8ssNM%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-47269-merge-3cd69f0f2f4a434181/System.Collections.Concurrent.Tests/core.1001.23?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-25T14%2525253A29%2525253A32Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D1p%2525252BykK7A18qOraMP6wl0vPLf2At4w5moxss0bb8ssNM%2525253D)||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-47269-merge-3cd69f0f2f4a434181/System.Collections.Concurrent.Tests/aaa7d526-0b9c-496d-b6f1-cd5b11c85908.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-25T14%2525253A29%2525253A32Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D1p%2525252BykK7A18qOraMP6wl0vPLf2At4w5moxss0bb8ssNM%2525253D)|
|[1024540](https://dev.azure.com/dnceng/public/_build/results?buildId=1024540)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 47269](https://github.com/dotnet/runtime/pull/47269)|net6.0-Linux-Release-arm64-CoreCLR_checked-(Alpine.312.Arm64.Open)Ubuntu.1804.ArmArch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:alpine-3.12-helix-arm64v8-20200602002604-25f8a3e|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-47269-merge-e9dd2df8e4b44b21a5/System.Collections.Concurrent.Tests/console.ad116978.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-25T14%2525253A30%2525253A01Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dv5xRFiTUAa8suP67%2525252BAiysWVd6WEAsMapMx9cwJrZFjE%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-47269-merge-e9dd2df8e4b44b21a5/System.Collections.Concurrent.Tests/core.1001.21?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-25T14%2525253A30%2525253A01Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dv5xRFiTUAa8suP67%2525252BAiysWVd6WEAsMapMx9cwJrZFjE%2525253D)|||
|[1024540](https://dev.azure.com/dnceng/public/_build/results?buildId=1024540)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 47269](https://github.com/dotnet/runtime/pull/47269)|net6.0-Linux-Release-arm-CoreCLR_checked-(Alpine.312.Arm32.Open)Ubuntu.1804.ArmArch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:alpine-3.12-helix-arm32v7-20200908125213-5bece88|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-47269-merge-79165b9f41d2476990/System.Collections.Concurrent.Tests/console.691b5ced.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-25T14%2525253A33%2525253A34Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DJPU1wi35r8Tfd9l%2525252Byc2xy4jJ7VuSuoRocXI3nw3jH7k%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-47269-merge-79165b9f41d2476990/System.Collections.Concurrent.Tests/core.1001.21?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-25T14%2525253A33%2525253A34Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DJPU1wi35r8Tfd9l%2525252Byc2xy4jJ7VuSuoRocXI3nw3jH7k%2525253D)|||
|[1024540](https://dev.azure.com/dnceng/public/_build/results?buildId=1024540)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 47269](https://github.com/dotnet/runtime/pull/47269)|net6.0-windows-Release-x86-CoreCLR_checked-Windows.10.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-47269-merge-4e109542b2db40de8f/System.Collections.Concurrent.Tests/console.b8e259dc.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-25T14%2525253A40%2525253A53Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D3W3m90iIV0EK%2525252FQIczKoKYsfBXJTP0tVtNfnYUG2RPe8%2525253D)||||
|[1022669](https://dev.azure.com/dnceng/public/_build/results?buildId=1022669)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49072](https://github.com/dotnet/runtime/pull/49072)|net6.0-Browser-Release-wasm-Mono_Release-normal-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49072-merge-cf95207ed75b4a9e85/System.Collections.Concurrent.Tests/console.75d96281.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-24T12%2525253A54%2525253A22Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D2kkr4RgbjsSnh3yaD%2525252F%2525252FxZEoulQOSJPjfXfRun%2525252FGiuJ4%2525253D)||||
|[1022669](https://dev.azure.com/dnceng/public/_build/results?buildId=1022669)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49072](https://github.com/dotnet/runtime/pull/49072)|net6.0-Browser-Release-wasm-Mono_Release-wasmtestonbrowser-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49072-merge-4b41f0020c764d6ebe/System.Collections.Concurrent.Tests/console.8b6813cc.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-24T12%2525253A54%2525253A21Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DMGG2zWPkqQPihnoL5q6iw%2525252Fm9oKai7Kub7JnFI4GRYt4%2525253D)||||
|[1019817](https://dev.azure.com/dnceng/public/_build/results?buildId=1019817)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 47864](https://github.com/dotnet/runtime/pull/47864)|net6.0-OSX-Debug-arm64-Mono_release-OSX.1100.ARM64.Open|[console.log](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-47864-merge-39128b90aa9b4eeea3/System.Collections.Concurrent.Tests/console.eb752cfa.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-23T00%2525253A58%2525253A09Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DLZvXPWfSCvX4tm3mUyB5dg90y8%2525252FC4OaSzWG68iDnM%2525252Fc%2525253D)|[core dump](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-47864-merge-39128b90aa9b4eeea3/System.Collections.Concurrent.Tests/core.69513?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-23T00%2525253A58%2525253A09Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DLZvXPWfSCvX4tm3mUyB5dg90y8%2525252FC4OaSzWG68iDnM%2525252Fc%2525253D)||[runclient.py](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-47864-merge-39128b90aa9b4eeea3/System.Collections.Concurrent.Tests/cdb68aac-8bc6-453c-a71f-3450721b28b8.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-23T00%2525253A58%2525253A09Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DLZvXPWfSCvX4tm3mUyB5dg90y8%2525252FC4OaSzWG68iDnM%2525252Fc%2525253D)|
|[1017879](https://dev.azure.com/dnceng/public/_build/results?buildId=1017879)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 48923](https://github.com/dotnet/runtime/pull/48923)|net6.0-windows-Release-x86-CoreCLR_checked-Windows.10.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48923-merge-0560eff46ca14cdb9c/System.Collections.Concurrent.Tests/console.6f9c3193.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-21T23%2525253A53%2525253A42Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DK709QaSfYy8B8erbY4%2525252FZSUzYZ4C1jglPATaqSnemER8%2525253D)|||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48923-merge-0560eff46ca14cdb9c/System.Collections.Concurrent.Tests/92370ec8-1d1c-47fd-8af3-ff8cb01746ed.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-21T23%2525253A53%2525253A42Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DK709QaSfYy8B8erbY4%2525252FZSUzYZ4C1jglPATaqSnemER8%2525253D)|
|[1016780](https://dev.azure.com/dnceng/public/_build/results?buildId=1016780)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 48908](https://github.com/dotnet/runtime/pull/48908)|net6.0-OSX-Debug-x64-Mono_release-OSX.1014.Amd64.Open|[console.log](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48908-merge-2b45e0c6dbe84cba92/System.Collections.Concurrent.Tests/console.f3125e54.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-21T14%2525253A11%2525253A47Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D%2525252B4gPWy%2525252BOCQ2%2525252F2wtrNz%2525252FV2NXWlQWOXZ0GLmo8WidOFFA%2525253D)|||[runclient.py](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48908-merge-2b45e0c6dbe84cba92/System.Collections.Concurrent.Tests/105376b0-e069-4d9b-bf8c-4fe26f9cf311.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-21T14%2525253A11%2525253A47Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D%2525252B4gPWy%2525252BOCQ2%2525252F2wtrNz%2525252FV2NXWlQWOXZ0GLmo8WidOFFA%2525253D)|
|[1016780](https://dev.azure.com/dnceng/public/_build/results?buildId=1016780)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 48908](https://github.com/dotnet/runtime/pull/48908)|net6.0-OSX-Debug-x64-Mono_release-OSX.1015.Amd64.Open|[console.log](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48908-merge-e70db87982964cab86/System.Collections.Concurrent.Tests/console.8a9aa89f.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-21T14%2525253A11%2525253A49Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DGZuxcFqp99UMMO1JQCgUmAsyrVZ%2525252B%2525252FrQNkAf1X%2525252BvN%2525252BKc%2525253D)|[core dump](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48908-merge-e70db87982964cab86/System.Collections.Concurrent.Tests/core.10612?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-21T14%2525253A11%2525253A49Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DGZuxcFqp99UMMO1JQCgUmAsyrVZ%2525252B%2525252FrQNkAf1X%2525252BvN%2525252BKc%2525253D)|||
|[1016780](https://dev.azure.com/dnceng/public/_build/results?buildId=1016780)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 48908](https://github.com/dotnet/runtime/pull/48908)|net6.0-Linux-Debug-x64-Mono_release-(Centos.8.Amd64.Open)Ubuntu.1604.Amd64.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:centos-8-helix-20201229003624-c1bf759|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48908-merge-b6e59375849140d5bc/System.Collections.Concurrent.Tests/console.7952c3f5.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-21T14%2525253A18%2525253A20Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D5Xj0gMLvDgRVKzRu7ns6KxpeikmeE%2525252BlljVvCYwiqlf8%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48908-merge-b6e59375849140d5bc/System.Collections.Concurrent.Tests/core.1000.25?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-21T14%2525253A18%2525253A20Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D5Xj0gMLvDgRVKzRu7ns6KxpeikmeE%2525252BlljVvCYwiqlf8%2525253D)|||
|[1016780](https://dev.azure.com/dnceng/public/_build/results?buildId=1016780)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 48908](https://github.com/dotnet/runtime/pull/48908)|net6.0-Linux-Debug-x64-Mono_release-RedHat.7.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48908-merge-d19302894cf74726b1/System.Collections.Concurrent.Tests/console.61224e79.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-21T14%2525253A18%2525253A21Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DjuAIRX18pRXGPiLwi6rkBR%2525252BBu%2525252BSbUgUgmmlmbYOACK0%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48908-merge-d19302894cf74726b1/System.Collections.Concurrent.Tests/core.1000.14534?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-21T14%2525253A18%2525253A21Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DjuAIRX18pRXGPiLwi6rkBR%2525252BBu%2525252BSbUgUgmmlmbYOACK0%2525253D)|||
|[1016780](https://dev.azure.com/dnceng/public/_build/results?buildId=1016780)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 48908](https://github.com/dotnet/runtime/pull/48908)|net6.0-Linux-Debug-x64-Mono_release-(Debian.10.Amd64.Open)ubuntu.1604.amd64.open@mcr.microsoft.com/dotnet-buildtools/prereqs:debian-10-helix-amd64-bfcd90a-20200121150006|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48908-merge-f0db635fef984ae5a3/System.Collections.Concurrent.Tests/console.2cd938c0.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-21T14%2525253A18%2525253A22Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DkpGYTmiosMf25R8mtIOy%2525252FnzHDeCjbqdO7rSVFmP8IOc%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48908-merge-f0db635fef984ae5a3/System.Collections.Concurrent.Tests/core.1000.23?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-21T14%2525253A18%2525253A22Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DkpGYTmiosMf25R8mtIOy%2525252FnzHDeCjbqdO7rSVFmP8IOc%2525253D)|||
|[1016780](https://dev.azure.com/dnceng/public/_build/results?buildId=1016780)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 48908](https://github.com/dotnet/runtime/pull/48908)|net6.0-Linux-Debug-x64-Mono_release-Ubuntu.1604.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48908-merge-7b457c82769044eea2/System.Collections.Concurrent.Tests/console.949729aa.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-21T14%2525253A18%2525253A22Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D0p8BGMdppcAJgUqjUEKHXQtJ61%2525252BxvPUkGcTupF7Te6M%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48908-merge-7b457c82769044eea2/System.Collections.Concurrent.Tests/core.1000.12063?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-21T14%2525253A18%2525253A22Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D0p8BGMdppcAJgUqjUEKHXQtJ61%2525252BxvPUkGcTupF7Te6M%2525253D)|||
Displaying 100 of 143 results
Build Result Summary
|Day Hit Count|Week Hit Count|Month Hit Count|
|---|---|---|
|1|1|16|
",True,"System.Collections.Concurrent.Tests crashing in CI - Build: https://dev.azure.com/dnceng/public/_build/results?buildId=905607&view=ms.vss-test-web.build-test-results-tab&runId=28901114&resultId=182589&paneView=attachments
Configuration: `net6.0-Linux-Release-x64-CoreCLR_release-RedHat.7.Amd64.Open`
how-to-debug-dump.md:
https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-heads-master-b6482f3963824bb38a/System.Collections.Concurrent.Tests/how-to-debug-dump.md?sv=2019-07-07&se=2020-12-22T10%3A40%3A07Z&sr=c&sp=rl&sig=l5N76%2FlDXHLoRkWIFox8OOiSkZPdUawXGM9N0cBe86A%3D
core.1000.22024:
https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-heads-master-b6482f3963824bb38a/System.Collections.Concurrent.Tests/core.1000.22024?sv=2019-07-07&se=2020-12-22T10%3A40%3A07Z&sr=c&sp=rl&sig=l5N76%2FlDXHLoRkWIFox8OOiSkZPdUawXGM9N0cBe86A%3D
console.8cc118e5.log:
https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-heads-master-b6482f3963824bb38a/System.Collections.Concurrent.Tests/console.8cc118e5.log?sv=2019-07-07&se=2020-12-22T10%3A40%3A07Z&sr=c&sp=rl&sig=l5N76%2FlDXHLoRkWIFox8OOiSkZPdUawXGM9N0cBe86A%3D
Runfo Tracking Issue: [system.collections.concurrent.tests crashes](https://runfo.azurewebsites.net/tracking/issue/145)
|Build|Definition|Kind|Run Name|Console|Core Dump|Test Results|Run Client|
|---|---|---|---|---|---|---|---|
|[1082899](https://dev.azure.com/dnceng/public/_build/results?buildId=1082899)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 51099](https://github.com/dotnet/runtime/pull/51099)|net6.0-Linux-Release-arm-CoreCLR_checked-(Alpine.312.Arm32.Open)Ubuntu.1804.ArmArch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:alpine-3.12-helix-arm32v7-20200908125213-5bece88|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-51099-merge-d6024ba89abf4f44b9/System.Collections.Concurrent.Tests/console.72f93a48.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-05-02T10%2525253A09%2525253A29Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DAfGzZ10r%2525252BbKc5ClEW3Dac%2525252FPX0urUKP7TZmW72za8sGk%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-51099-merge-d6024ba89abf4f44b9/System.Collections.Concurrent.Tests/core.1001.163?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-05-02T10%2525253A09%2525253A29Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DAfGzZ10r%2525252BbKc5ClEW3Dac%2525252FPX0urUKP7TZmW72za8sGk%2525253D)|||
|[1072066](https://dev.azure.com/dnceng/public/_build/results?buildId=1072066)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50364](https://github.com/dotnet/runtime/pull/50364)|net6.0-Linux-Release-arm-CoreCLR_checked-(Alpine.312.Arm32.Open)Ubuntu.1804.ArmArch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:alpine-3.12-helix-arm32v7-20200908125213-5bece88|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50364-merge-81ec4a7852984fd3bf/System.Collections.Concurrent.Tests/console.afcca185.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-24T16%2525253A27%2525253A52Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DfSIplwNGfpow%2525252BdkjBX2WSP0lU0SxLVroL7GY12wsNsE%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50364-merge-81ec4a7852984fd3bf/System.Collections.Concurrent.Tests/core.1001.163?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-24T16%2525253A27%2525253A52Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DfSIplwNGfpow%2525252BdkjBX2WSP0lU0SxLVroL7GY12wsNsE%2525253D)||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50364-merge-81ec4a7852984fd3bf/System.Collections.Concurrent.Tests/0b193330-6958-4fc8-8d73-f8703acfb49a.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-24T16%2525253A27%2525253A52Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DfSIplwNGfpow%2525252BdkjBX2WSP0lU0SxLVroL7GY12wsNsE%2525253D)|
|[1072066](https://dev.azure.com/dnceng/public/_build/results?buildId=1072066)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50364](https://github.com/dotnet/runtime/pull/50364)|net6.0-Linux-Release-arm-CoreCLR_checked-(Ubuntu.1804.Arm32.Open)Ubuntu.1804.Armarch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-18.04-helix-arm32v7-bfcd90a-20200121150440|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50364-merge-edfb95f980a148dc9e/System.Collections.Concurrent.Tests/console.5ec15241.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-24T16%2525253A29%2525253A11Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D0NC7E9Gax7ZW5nyPylXI%2525252BTsAr2ruO5IKOtOmy%2525252BGgp3A%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50364-merge-edfb95f980a148dc9e/System.Collections.Concurrent.Tests/core.1001.58?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-24T16%2525253A29%2525253A11Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D0NC7E9Gax7ZW5nyPylXI%2525252BTsAr2ruO5IKOtOmy%2525252BGgp3A%2525253D)|||
|[1071508](https://dev.azure.com/dnceng/public/_build/results?buildId=1071508)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50364](https://github.com/dotnet/runtime/pull/50364)|net6.0-Linux-Release-arm-CoreCLR_checked-(Alpine.312.Arm32.Open)Ubuntu.1804.ArmArch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:alpine-3.12-helix-arm32v7-20200908125213-5bece88|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50364-merge-7e065a43acbe42ad83/System.Collections.Concurrent.Tests/console.d18c208c.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-23T19%2525253A09%2525253A40Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DP7d7bOVv1AMEq4fUlcLIHqzCRqRuH07pY%2525252FdTi2rEQxQ%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50364-merge-7e065a43acbe42ad83/System.Collections.Concurrent.Tests/core.1001.163?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-23T19%2525253A09%2525253A40Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DP7d7bOVv1AMEq4fUlcLIHqzCRqRuH07pY%2525252FdTi2rEQxQ%2525253D)|||
|[1071508](https://dev.azure.com/dnceng/public/_build/results?buildId=1071508)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50364](https://github.com/dotnet/runtime/pull/50364)|net6.0-Linux-Release-arm-CoreCLR_checked-(Ubuntu.1804.Arm32.Open)Ubuntu.1804.Armarch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-18.04-helix-arm32v7-bfcd90a-20200121150440|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50364-merge-084729cc26c64be190/System.Collections.Concurrent.Tests/console.6c1d6d38.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-23T19%2525253A08%2525253A01Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DQbvjmP8ZLieZpcZyTS%2525252FW5grO8OMjRMGFfii6akwqDOk%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50364-merge-084729cc26c64be190/System.Collections.Concurrent.Tests/core.1001.59?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-23T19%2525253A08%2525253A01Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DQbvjmP8ZLieZpcZyTS%2525252FW5grO8OMjRMGFfii6akwqDOk%2525253D)|||
|[1067051](https://dev.azure.com/dnceng/public/_build/results?buildId=1067051)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50479](https://github.com/dotnet/runtime/pull/50479)|net6.0-OSX-Debug-x64-Mono_release-OSX.1014.Amd64.Open|[console.log](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-17fe7ae6119e488f8c/System.Collections.Concurrent.Tests/console.32e89931.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A27%2525253A27Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DNcC2ynKHdY1KHXlGAKD9ffarQ0qUcm1dHZ7PsSnK1jA%2525253D)|[core dump](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-17fe7ae6119e488f8c/System.Collections.Concurrent.Tests/core.20776?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A27%2525253A27Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DNcC2ynKHdY1KHXlGAKD9ffarQ0qUcm1dHZ7PsSnK1jA%2525253D)||[runclient.py](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-17fe7ae6119e488f8c/System.Collections.Concurrent.Tests/8ddf8720-4f54-4923-a81d-6ca37fa74d92.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A27%2525253A27Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DNcC2ynKHdY1KHXlGAKD9ffarQ0qUcm1dHZ7PsSnK1jA%2525253D)|
|[1067051](https://dev.azure.com/dnceng/public/_build/results?buildId=1067051)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50479](https://github.com/dotnet/runtime/pull/50479)|net6.0-OSX-Debug-x64-Mono_release-OSX.1015.Amd64.Open|[console.log](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-183d2cde3f494fc6b0/System.Collections.Concurrent.Tests/console.25bf98b5.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A27%2525253A29Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D2dGWmMxS%2525252B9XGhjftWbCIq7qjGBp%2525252Fh2N50jE0F2%2525252Bzu8w%2525253D)|[core dump](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-183d2cde3f494fc6b0/System.Collections.Concurrent.Tests/core.45256?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A27%2525253A29Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D2dGWmMxS%2525252B9XGhjftWbCIq7qjGBp%2525252Fh2N50jE0F2%2525252Bzu8w%2525253D)|||
|[1067051](https://dev.azure.com/dnceng/public/_build/results?buildId=1067051)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50479](https://github.com/dotnet/runtime/pull/50479)|net6.0-Linux-Debug-x64-mono_interpreter_release-Debian.9.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-d11930fec57940d58b/System.Collections.Concurrent.Tests/console.6a7903d5.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A30%2525253A04Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DS0nIVbPf43mT4FTz5UzCmOaWipDga%2525252B3cxP%2525252FjcOJhmHw%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-d11930fec57940d58b/System.Collections.Concurrent.Tests/core.1000.2046?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A30%2525253A04Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DS0nIVbPf43mT4FTz5UzCmOaWipDga%2525252B3cxP%2525252FjcOJhmHw%2525253D)|||
|[1067051](https://dev.azure.com/dnceng/public/_build/results?buildId=1067051)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50479](https://github.com/dotnet/runtime/pull/50479)|net6.0-Linux-Debug-x64-Mono_release-(Centos.8.Amd64.Open)Ubuntu.1604.Amd64.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:centos-8-helix-20201229003624-c1bf759|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-0e7455b5c41a4d64b2/System.Collections.Concurrent.Tests/console.7eb9db9e.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A03Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D6%2525252B84D7E2nJ1j72b413fYgbZDlbFuXlBIAuIFsIiVqP4%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-0e7455b5c41a4d64b2/System.Collections.Concurrent.Tests/core.1000.25?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A03Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D6%2525252B84D7E2nJ1j72b413fYgbZDlbFuXlBIAuIFsIiVqP4%2525253D)|||
|[1067051](https://dev.azure.com/dnceng/public/_build/results?buildId=1067051)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50479](https://github.com/dotnet/runtime/pull/50479)|net6.0-Linux-Debug-x64-Mono_release-RedHat.7.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-4561dc023a4342c787/System.Collections.Concurrent.Tests/console.1104b657.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A04Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D%2525252BxNJdP%2525252BxQJ6N65x%2525252FF3lg4pSLsiCRxaMSg24jhdQrLik%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-4561dc023a4342c787/System.Collections.Concurrent.Tests/core.1000.2057?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A04Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D%2525252BxNJdP%2525252BxQJ6N65x%2525252FF3lg4pSLsiCRxaMSg24jhdQrLik%2525253D)|||
|[1067051](https://dev.azure.com/dnceng/public/_build/results?buildId=1067051)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50479](https://github.com/dotnet/runtime/pull/50479)|net6.0-Linux-Debug-x64-Mono_release-(Debian.10.Amd64.Open)ubuntu.1604.amd64.open@mcr.microsoft.com/dotnet-buildtools/prereqs:debian-10-helix-amd64-bfcd90a-20200121150006|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-f98eed6cbc78418581/System.Collections.Concurrent.Tests/console.074090ee.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A05Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dyz8nQHEJ6WhStswrpO2CBdoDNxZU0ci7evJRHb%2525252BOQVs%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-f98eed6cbc78418581/System.Collections.Concurrent.Tests/core.1000.23?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A05Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dyz8nQHEJ6WhStswrpO2CBdoDNxZU0ci7evJRHb%2525252BOQVs%2525253D)|||
|[1067051](https://dev.azure.com/dnceng/public/_build/results?buildId=1067051)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50479](https://github.com/dotnet/runtime/pull/50479)|net6.0-Linux-Debug-x64-Mono_release-Ubuntu.1604.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-3f1b189bacc14db8bd/System.Collections.Concurrent.Tests/console.441c1bb2.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A06Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DRXpe5PKH0LLdjd3djMMidYRbkqF3OXgL5Lr6aYzC3vg%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-3f1b189bacc14db8bd/System.Collections.Concurrent.Tests/core.1000.12478?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A06Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DRXpe5PKH0LLdjd3djMMidYRbkqF3OXgL5Lr6aYzC3vg%2525253D)|||
|[1067051](https://dev.azure.com/dnceng/public/_build/results?buildId=1067051)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50479](https://github.com/dotnet/runtime/pull/50479)|net6.0-Linux-Debug-x64-Mono_release-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-915e3591e5304a0dbf/System.Collections.Concurrent.Tests/console.f1343d62.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A07Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DcqmHncj5YCi438sEdFS0ecaTjznwyKvxvefvRaIeHCU%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-915e3591e5304a0dbf/System.Collections.Concurrent.Tests/core.1000.19791?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A07Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DcqmHncj5YCi438sEdFS0ecaTjznwyKvxvefvRaIeHCU%2525253D)|||
|[1067051](https://dev.azure.com/dnceng/public/_build/results?buildId=1067051)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50479](https://github.com/dotnet/runtime/pull/50479)|net6.0-Linux-Debug-x64-Mono_release-SLES.15.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-6b70c9a451e3455591/System.Collections.Concurrent.Tests/console.523bf176.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A08Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D1gVKcRHgg5JwcERVlQgqzbtwrep2yiEdaPVSkV%2525252Bi0%2525252FI%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-6b70c9a451e3455591/System.Collections.Concurrent.Tests/core.1000.7848?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A08Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D1gVKcRHgg5JwcERVlQgqzbtwrep2yiEdaPVSkV%2525252Bi0%2525252FI%2525253D)|||
|[1067051](https://dev.azure.com/dnceng/public/_build/results?buildId=1067051)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50479](https://github.com/dotnet/runtime/pull/50479)|net6.0-Linux-Debug-x64-Mono_release-(Fedora.30.Amd64.Open)ubuntu.1604.amd64.open@mcr.microsoft.com/dotnet-buildtools/prereqs:fedora-30-helix-20200512010621-4f8cef7|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-6f5f06232de042a399/System.Collections.Concurrent.Tests/console.a9ac2303.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A09Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DZJDGfBSwn7Ym3kdxPMKSLf8cigksk85HcAWNG%2525252BgA8p0%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-6f5f06232de042a399/System.Collections.Concurrent.Tests/core.1000.23?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A09Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DZJDGfBSwn7Ym3kdxPMKSLf8cigksk85HcAWNG%2525252BgA8p0%2525253D)|||
|[1067051](https://dev.azure.com/dnceng/public/_build/results?buildId=1067051)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50479](https://github.com/dotnet/runtime/pull/50479)|net6.0-Linux-Debug-arm64-Mono_release-(Ubuntu.1804.ArmArch.Open)Ubuntu.1804.ArmArch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-16.04-helix-arm64v8-20210106155927-56c6673|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-652aaf60286040c5b6/System.Collections.Concurrent.Tests/console.b1900fa7.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A45Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DDn7%2525252BV%2525252BccK3XqhtC4ReOWH7juHWem8YeVzG6U2%2525252FTOcus%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50479-merge-652aaf60286040c5b6/System.Collections.Concurrent.Tests/core.1001.100?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-21T03%2525253A31%2525253A45Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DDn7%2525252BV%2525252BccK3XqhtC4ReOWH7juHWem8YeVzG6U2%2525252FTOcus%2525253D)|||
|[1066426](https://dev.azure.com/dnceng/public/_build/results?buildId=1066426)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50364](https://github.com/dotnet/runtime/pull/50364)|net6.0-Linux-Release-arm-CoreCLR_checked-(Alpine.312.Arm32.Open)Ubuntu.1804.ArmArch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:alpine-3.12-helix-arm32v7-20200908125213-5bece88|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50364-merge-2fea74b42351452d96/System.Collections.Concurrent.Tests/console.294d3bc6.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-20T22%2525253A13%2525253A09Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DQq%2525252BYfDDhvijZy2jZrT4K22NK%2525252FrWibMRCX%2525252B%2525252BahY1xm3E%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50364-merge-2fea74b42351452d96/System.Collections.Concurrent.Tests/core.1001.165?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-20T22%2525253A13%2525253A09Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DQq%2525252BYfDDhvijZy2jZrT4K22NK%2525252FrWibMRCX%2525252B%2525252BahY1xm3E%2525253D)|||
|[1066426](https://dev.azure.com/dnceng/public/_build/results?buildId=1066426)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50364](https://github.com/dotnet/runtime/pull/50364)|net6.0-Linux-Release-arm-CoreCLR_checked-(Ubuntu.1804.Arm32.Open)Ubuntu.1804.Armarch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-18.04-helix-arm32v7-bfcd90a-20200121150440|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50364-merge-d825e7c25b474e0896/System.Collections.Concurrent.Tests/console.3fc2bb35.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-20T22%2525253A13%2525253A30Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D8o05ZHhDWxYgtA0WoN3F15KRea7Nil7b56el3S6IUoQ%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50364-merge-d825e7c25b474e0896/System.Collections.Concurrent.Tests/core.1001.58?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-20T22%2525253A13%2525253A30Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D8o05ZHhDWxYgtA0WoN3F15KRea7Nil7b56el3S6IUoQ%2525253D)|||
|[1059736](https://dev.azure.com/dnceng/public/_build/results?buildId=1059736)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50232](https://github.com/dotnet/runtime/pull/50232)|net6.0-OSX-Debug-x64-CoreCLR_checked-OSX.1013.Amd64.Open|[console.log](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-45a3ef1099a245fb90/System.Collections.Concurrent.Tests/console.e797ad01.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T12%2525253A54%2525253A13Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DkhiiU6Zodb%2525252FiVHGII6LmPNybQDqbnzFUsKgWCqKDvr4%2525253D)|[core dump](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-45a3ef1099a245fb90/System.Collections.Concurrent.Tests/core.71663?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T12%2525253A54%2525253A13Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DkhiiU6Zodb%2525252FiVHGII6LmPNybQDqbnzFUsKgWCqKDvr4%2525253D)||[runclient.py](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-45a3ef1099a245fb90/System.Collections.Concurrent.Tests/565863ce-56fe-4584-88f7-a67563dca23b.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T12%2525253A54%2525253A13Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DkhiiU6Zodb%2525252FiVHGII6LmPNybQDqbnzFUsKgWCqKDvr4%2525253D)|
|[1059736](https://dev.azure.com/dnceng/public/_build/results?buildId=1059736)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50232](https://github.com/dotnet/runtime/pull/50232)|net6.0-windows-Debug-x64-CoreCLR_checked-Windows.10.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-346029e93a2a428abf/System.Collections.Concurrent.Tests/console.728800fa.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A14%2525253A18Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D0wOX%2525252BTl49UOcMEn%2525252BhqhCQo0EtGWdHzzqopQleIsGd44%2525253D)|||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-346029e93a2a428abf/System.Collections.Concurrent.Tests/b9737fbd-080b-4f91-9a18-f5bd0be7488e.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A14%2525253A18Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D0wOX%2525252BTl49UOcMEn%2525252BhqhCQo0EtGWdHzzqopQleIsGd44%2525253D)|
|[1059736](https://dev.azure.com/dnceng/public/_build/results?buildId=1059736)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50232](https://github.com/dotnet/runtime/pull/50232)|net6.0-windows-Release-x86-CoreCLR_checked-Windows.10.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-2b13d8fee8004555a6/System.Collections.Concurrent.Tests/console.86326ae7.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A14%2525253A32Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DJHhbgl9rkvuoHwMQb5TFgMRGUXZ29xIGIDWPm41ETuM%2525253D)|||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-2b13d8fee8004555a6/System.Collections.Concurrent.Tests/884f772d-e1fa-44df-9c4f-3f7fcef92fd9.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A14%2525253A32Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DJHhbgl9rkvuoHwMQb5TFgMRGUXZ29xIGIDWPm41ETuM%2525253D)|
|[1059736](https://dev.azure.com/dnceng/public/_build/results?buildId=1059736)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50232](https://github.com/dotnet/runtime/pull/50232)|net6.0-Linux-Debug-x64-CoreCLR_checked-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-48c772cb4f944f8cb7/System.Collections.Concurrent.Tests/console.24acbaa9.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A18%2525253A31Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DcGRTrWHFTGBguimZ2gQFOy0YrxFNn%2525252BSMi11EZvywDo0%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-48c772cb4f944f8cb7/System.Collections.Concurrent.Tests/core.1000.24283?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A18%2525253A31Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DcGRTrWHFTGBguimZ2gQFOy0YrxFNn%2525252BSMi11EZvywDo0%2525253D)||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-48c772cb4f944f8cb7/System.Collections.Concurrent.Tests/3887d772-ff1b-4ec7-bc96-06ee9b6a6781.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A18%2525253A31Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DcGRTrWHFTGBguimZ2gQFOy0YrxFNn%2525252BSMi11EZvywDo0%2525253D)|
|[1059736](https://dev.azure.com/dnceng/public/_build/results?buildId=1059736)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50232](https://github.com/dotnet/runtime/pull/50232)|net6.0-Linux-Debug-x64-CoreCLR_checked-(Alpine.312.Amd64.Open)ubuntu.1604.amd64.open@mcr.microsoft.com/dotnet-buildtools/prereqs:alpine-3.12-helix-20200602002622-e06dc59|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-90fbf38a0d52471e8e/System.Collections.Concurrent.Tests/console.abd022d6.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A20%2525253A07Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DoG23dOIEq3wxiusAHZhzlFg1t5g5otVkPtZ5bZ6jsbU%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-90fbf38a0d52471e8e/System.Collections.Concurrent.Tests/core.1000.93?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A20%2525253A07Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DoG23dOIEq3wxiusAHZhzlFg1t5g5otVkPtZ5bZ6jsbU%2525253D)||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-90fbf38a0d52471e8e/System.Collections.Concurrent.Tests/155e4e65-06f4-4a40-b9c8-5222e4c004a4.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A20%2525253A07Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DoG23dOIEq3wxiusAHZhzlFg1t5g5otVkPtZ5bZ6jsbU%2525253D)|
|[1059736](https://dev.azure.com/dnceng/public/_build/results?buildId=1059736)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50232](https://github.com/dotnet/runtime/pull/50232)|net6.0-Linux-Release-arm-CoreCLR_checked-(Ubuntu.1804.Arm32.Open)Ubuntu.1804.Armarch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-18.04-helix-arm32v7-bfcd90a-20200121150440|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-aaa100ee744341ddb8/System.Collections.Concurrent.Tests/console.a05c9995.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A19%2525253A44Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DOe32JhKCEXYGdT%2525252BOaWrnjyukkIOeVtN9ALSWoQHiAwg%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-aaa100ee744341ddb8/System.Collections.Concurrent.Tests/core.1001.58?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A19%2525253A44Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DOe32JhKCEXYGdT%2525252BOaWrnjyukkIOeVtN9ALSWoQHiAwg%2525253D)||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-aaa100ee744341ddb8/System.Collections.Concurrent.Tests/12b3f7ab-c334-4d2c-9854-247e3c2aae9f.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A19%2525253A44Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DOe32JhKCEXYGdT%2525252BOaWrnjyukkIOeVtN9ALSWoQHiAwg%2525253D)|
|[1059736](https://dev.azure.com/dnceng/public/_build/results?buildId=1059736)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50232](https://github.com/dotnet/runtime/pull/50232)|net6.0-Linux-Release-arm-CoreCLR_checked-(Alpine.312.Arm32.Open)Ubuntu.1804.ArmArch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:alpine-3.12-helix-arm32v7-20200908125213-5bece88|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-26f36e9598524ef2ad/System.Collections.Concurrent.Tests/console.653041c4.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A17%2525253A55Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DUln8DGilwvxu4e5aCgFbba%2525252FRChYwFcjF6pJXgyiIxmo%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-26f36e9598524ef2ad/System.Collections.Concurrent.Tests/core.1001.163?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A17%2525253A55Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DUln8DGilwvxu4e5aCgFbba%2525252FRChYwFcjF6pJXgyiIxmo%2525253D)||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-26f36e9598524ef2ad/System.Collections.Concurrent.Tests/51e6b97a-78b2-40b2-ab78-11f450596718.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A17%2525253A55Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DUln8DGilwvxu4e5aCgFbba%2525252FRChYwFcjF6pJXgyiIxmo%2525253D)|
|[1059736](https://dev.azure.com/dnceng/public/_build/results?buildId=1059736)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 50232](https://github.com/dotnet/runtime/pull/50232)|net6.0-Linux-Release-arm64-CoreCLR_checked-(Alpine.312.Arm64.Open)Ubuntu.1804.ArmArch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:alpine-3.12-helix-arm64v8-20200602002604-25f8a3e|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-da57be80caf54c15ba/System.Collections.Concurrent.Tests/console.80450663.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A19%2525253A59Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dt7EZuehnBAHRHhZRuBfR9aL9MeKcz9gi8mo7Y0i0Ce4%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-da57be80caf54c15ba/System.Collections.Concurrent.Tests/core.1001.92?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A19%2525253A59Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dt7EZuehnBAHRHhZRuBfR9aL9MeKcz9gi8mo7Y0i0Ce4%2525253D)||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-50232-merge-da57be80caf54c15ba/System.Collections.Concurrent.Tests/554a91c5-7ffb-4846-999d-ae58cf103a40.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-16T14%2525253A19%2525253A59Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dt7EZuehnBAHRHhZRuBfR9aL9MeKcz9gi8mo7Y0i0Ce4%2525253D)|
|[1050603](https://dev.azure.com/dnceng/public/_build/results?buildId=1050603)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49990](https://github.com/dotnet/runtime/pull/49990)|net6.0-windows-Debug-x64-CoreCLR_checked-Windows.10.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49990-merge-02cbebde102743849c/System.Collections.Concurrent.Tests/console.fcac267c.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-11T15%2525253A14%2525253A32Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D8%2525252BmbqoXRxfq8v2dbaWyxmU8QeC%2525252BKaYaFZZQSa9ZVzek%2525253D)||||
|[1050314](https://dev.azure.com/dnceng/public/_build/results?buildId=1050314)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 48601](https://github.com/dotnet/runtime/pull/48601)|net6.0-windows-Debug-x64-CoreCLR_checked-Windows.10.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48601-merge-fb17b485b75f405ab1/System.Collections.Concurrent.Tests/console.79fe19ea.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-11T10%2525253A48%2525253A01Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DSKzqkPxU7bUcunNzfT462wE2dI2gC%2525252FPZb8DavL2WVRo%2525253D)|||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48601-merge-fb17b485b75f405ab1/System.Collections.Concurrent.Tests/12f07b9a-80a8-4604-b7ab-6e2f8557c1ec.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-11T10%2525253A48%2525253A01Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DSKzqkPxU7bUcunNzfT462wE2dI2gC%2525252FPZb8DavL2WVRo%2525253D)|
|[1050243](https://dev.azure.com/dnceng/public/_build/results?buildId=1050243)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49635](https://github.com/dotnet/runtime/pull/49635)|net6.0-Browser-Release-wasm-Mono_Release-normal-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-1671eec60ca84d4aa9/System.Collections.Concurrent.Tests/console.7c959233.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-11T08%2525253A22%2525253A10Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DnL8yS8O13PmDdSZgQ99vTyJyilki27iBbcluFyOG8dk%2525253D)||||
|[1050243](https://dev.azure.com/dnceng/public/_build/results?buildId=1050243)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49635](https://github.com/dotnet/runtime/pull/49635)|net6.0-Browser-Release-wasm-Mono_Release-wasmtestonbrowser-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-413023f2336e44a3bd/System.Collections.Concurrent.Tests/console.fd119df1.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-11T08%2525253A22%2525253A10Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DXiXe6plx4fk5Q38IDpe8V5qfdfmZ8eA%2525252FSkW9rw7WHB4%2525253D)||||
|[1047146](https://dev.azure.com/dnceng/public/_build/results?buildId=1047146)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49635](https://github.com/dotnet/runtime/pull/49635)|net6.0-Browser-Release-wasm-Mono_Release-normal-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-e6462f4ace674a96ad/System.Collections.Concurrent.Tests/console.bf6c0eca.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-08T08%2525253A21%2525253A50Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D%2525252Bu0D87zQ26fBUoL2r1U5BbUCURdHW9UZ8M7zgvvqAGA%2525253D)|||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-e6462f4ace674a96ad/System.Collections.Concurrent.Tests/ff605c9e-8bbf-4d1a-97da-2e33930a8d78.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-08T08%2525253A21%2525253A50Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D%2525252Bu0D87zQ26fBUoL2r1U5BbUCURdHW9UZ8M7zgvvqAGA%2525253D)|
|[1047146](https://dev.azure.com/dnceng/public/_build/results?buildId=1047146)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49635](https://github.com/dotnet/runtime/pull/49635)|net6.0-Browser-Release-wasm-Mono_Release-wasmtestonbrowser-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-239d9422f70f43d79e/System.Collections.Concurrent.Tests/console.1cdae36e.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-08T08%2525253A21%2525253A50Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dhn%2525252B4Is1%2525252BQZDe%2525252FcVYFxGNZAcX1%2525252FgQfSP4ifrJVi9zjT4%2525253D)||||
|[1047146](https://dev.azure.com/dnceng/public/_build/results?buildId=1047146)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49635](https://github.com/dotnet/runtime/pull/49635)|net6.0-Browser-Release-wasm-Mono_Release-normal-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-de72bd7c485f45ac83/System.Collections.Concurrent.Tests/console.6f8b2525.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-08T13%2525253A43%2525253A54Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DG%2525252BQaO5Ct8AXLe5xNJJZpd8Mrp%2525252FU8WjLlFX5C%2525252FM43dAM%2525253D)||||
|[1047146](https://dev.azure.com/dnceng/public/_build/results?buildId=1047146)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49635](https://github.com/dotnet/runtime/pull/49635)|net6.0-Browser-Release-wasm-Mono_Release-wasmtestonbrowser-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-a559c9bbb4274f66b6/System.Collections.Concurrent.Tests/console.b1482bd7.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-08T13%2525253A43%2525253A54Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dl6tZjN3FQH0%2525252FiWCfe2%2525252FInY2y04U5p1ePI9VqkQYo7Bg%2525253D)||||
|[1046026](https://dev.azure.com/dnceng/public/_build/results?buildId=1046026)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49740](https://github.com/dotnet/runtime/pull/49740)|net6.0-Linux-Debug-x64-mono_interpreter_release-Debian.9.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-f281da3bddda4cba81/System.Collections.Concurrent.Tests/console.8bd80016.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A57%2525253A22Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Do1yNnl%2525252Bu%2525252FZUA1NbQhhbnRJwgXcJ2uisO0aYMxwg6yQE%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-f281da3bddda4cba81/System.Collections.Concurrent.Tests/core.1000.2570?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A57%2525253A22Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Do1yNnl%2525252Bu%2525252FZUA1NbQhhbnRJwgXcJ2uisO0aYMxwg6yQE%2525253D)|||
|[1046026](https://dev.azure.com/dnceng/public/_build/results?buildId=1046026)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49740](https://github.com/dotnet/runtime/pull/49740)|net6.0-Linux-Debug-x64-Mono_release-(Centos.8.Amd64.Open)Ubuntu.1604.Amd64.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:centos-8-helix-20201229003624-c1bf759|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-fb75fad0ca24470bbf/System.Collections.Concurrent.Tests/console.825537d2.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A56%2525253A25Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DysGhSLzWwXnlAbaQJK3Gufc%2525252BC8AyPjWpaPSQnDvHzLc%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-fb75fad0ca24470bbf/System.Collections.Concurrent.Tests/core.1000.24?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A56%2525253A25Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DysGhSLzWwXnlAbaQJK3Gufc%2525252BC8AyPjWpaPSQnDvHzLc%2525253D)|||
|[1046026](https://dev.azure.com/dnceng/public/_build/results?buildId=1046026)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49740](https://github.com/dotnet/runtime/pull/49740)|net6.0-Linux-Debug-x64-Mono_release-RedHat.7.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-be80e3e7fca543f687/System.Collections.Concurrent.Tests/console.053ed2b5.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A56%2525253A26Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D8iNuuUoICMCNvRjvljfQRFg53yc69ygKgsAkj6sf2uU%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-be80e3e7fca543f687/System.Collections.Concurrent.Tests/core.1000.28037?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A56%2525253A26Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D8iNuuUoICMCNvRjvljfQRFg53yc69ygKgsAkj6sf2uU%2525253D)||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-be80e3e7fca543f687/System.Collections.Concurrent.Tests/914c8fd5-fcb4-45cb-8ea0-e9ff8c266de9.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A56%2525253A26Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D8iNuuUoICMCNvRjvljfQRFg53yc69ygKgsAkj6sf2uU%2525253D)|
|[1046026](https://dev.azure.com/dnceng/public/_build/results?buildId=1046026)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49740](https://github.com/dotnet/runtime/pull/49740)|net6.0-Linux-Debug-x64-Mono_release-(Debian.10.Amd64.Open)ubuntu.1604.amd64.open@mcr.microsoft.com/dotnet-buildtools/prereqs:debian-10-helix-amd64-bfcd90a-20200121150006|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-4e3e35667cd8459f92/System.Collections.Concurrent.Tests/console.47a4d8d5.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A56%2525253A27Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dg3w4mACBzmwGpUROwMiumeTDdhsAqI4lRbiQWj%2525252B282M%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-4e3e35667cd8459f92/System.Collections.Concurrent.Tests/core.1000.24?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A56%2525253A27Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dg3w4mACBzmwGpUROwMiumeTDdhsAqI4lRbiQWj%2525252B282M%2525253D)|||
|[1046026](https://dev.azure.com/dnceng/public/_build/results?buildId=1046026)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49740](https://github.com/dotnet/runtime/pull/49740)|net6.0-Linux-Debug-x64-Mono_release-Ubuntu.1604.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-f59584358e094c80a9/System.Collections.Concurrent.Tests/console.74c422fc.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A56%2525253A28Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DpMNuM8rmTvj3iUOv1iYa1%2525252Fd616LM1PIpjiw9WnPWBxk%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-f59584358e094c80a9/System.Collections.Concurrent.Tests/core.1000.29818?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A56%2525253A28Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DpMNuM8rmTvj3iUOv1iYa1%2525252Fd616LM1PIpjiw9WnPWBxk%2525253D)|||
|[1046026](https://dev.azure.com/dnceng/public/_build/results?buildId=1046026)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49740](https://github.com/dotnet/runtime/pull/49740)|net6.0-Linux-Debug-x64-Mono_release-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-ed16de0ec9cf4e23ac/System.Collections.Concurrent.Tests/console.222b49ec.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A56%2525253A28Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DU60tx4bLxRsaArR9EGRUpA5Sc6olNKxVzhONcwosRJo%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-ed16de0ec9cf4e23ac/System.Collections.Concurrent.Tests/core.1000.28689?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A56%2525253A28Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DU60tx4bLxRsaArR9EGRUpA5Sc6olNKxVzhONcwosRJo%2525253D)|||
|[1046026](https://dev.azure.com/dnceng/public/_build/results?buildId=1046026)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49740](https://github.com/dotnet/runtime/pull/49740)|net6.0-Linux-Debug-x64-Mono_release-SLES.15.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-ee75153850d743fb93/System.Collections.Concurrent.Tests/console.b6a0fa43.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A56%2525253A29Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DIq1qb3FqADnzzRvZQPSy4TiivUGZ1R%2525252FdL6EID0wlGg4%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-ee75153850d743fb93/System.Collections.Concurrent.Tests/core.1000.30228?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A56%2525253A29Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DIq1qb3FqADnzzRvZQPSy4TiivUGZ1R%2525252FdL6EID0wlGg4%2525253D)|||
|[1046026](https://dev.azure.com/dnceng/public/_build/results?buildId=1046026)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49740](https://github.com/dotnet/runtime/pull/49740)|net6.0-Linux-Debug-x64-Mono_release-(Fedora.30.Amd64.Open)ubuntu.1604.amd64.open@mcr.microsoft.com/dotnet-buildtools/prereqs:fedora-30-helix-20200512010621-4f8cef7|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-32d847ae8652449a8e/System.Collections.Concurrent.Tests/console.6ba52942.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A56%2525253A30Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DEiYs9Oqq6XNJjJRAHG2HGNv%2525252BVtOcWLLSzHCjQ97iQHg%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-32d847ae8652449a8e/System.Collections.Concurrent.Tests/core.1000.24?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A56%2525253A30Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DEiYs9Oqq6XNJjJRAHG2HGNv%2525252BVtOcWLLSzHCjQ97iQHg%2525253D)|||
|[1046026](https://dev.azure.com/dnceng/public/_build/results?buildId=1046026)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49740](https://github.com/dotnet/runtime/pull/49740)|net6.0-Linux-Debug-arm64-Mono_release-(Ubuntu.1804.ArmArch.Open)Ubuntu.1804.ArmArch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-16.04-helix-arm64v8-20210106155927-56c6673|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-b59fe3db196841458a/System.Collections.Concurrent.Tests/console.8276161e.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A57%2525253A16Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dhx1p02%2525252BlhjWuT3jJqZMRg0MvIZWv6%2525252BSlkPwzVhYEzq4%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-b59fe3db196841458a/System.Collections.Concurrent.Tests/core.1001.99?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T22%2525253A57%2525253A16Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dhx1p02%2525252BlhjWuT3jJqZMRg0MvIZWv6%2525252BSlkPwzVhYEzq4%2525253D)|||
|[1046026](https://dev.azure.com/dnceng/public/_build/results?buildId=1046026)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49740](https://github.com/dotnet/runtime/pull/49740)|net6.0-OSX-Debug-x64-Mono_release-OSX.1014.Amd64.Open|[console.log](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-6666a53618ef4f8684/System.Collections.Concurrent.Tests/console.c8f64f93.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T23%2525253A43%2525253A58Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DNm7nq%2525252FzROt%2525252FEiXoWUIQAvUXLkfsLHbYELiAXGNCIYXg%2525253D)||||
|[1046026](https://dev.azure.com/dnceng/public/_build/results?buildId=1046026)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49740](https://github.com/dotnet/runtime/pull/49740)|net6.0-OSX-Debug-x64-Mono_release-OSX.1015.Amd64.Open|[console.log](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-b8b640178d6a4489b7/System.Collections.Concurrent.Tests/console.369e08fd.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T23%2525253A43%2525253A59Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DAiugKC1u0Na1Y6PYjWxx2AeE7Tng4cYWB3AvPxYqOos%2525253D)|[core dump](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49740-merge-b8b640178d6a4489b7/System.Collections.Concurrent.Tests/core.56932?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T23%2525253A43%2525253A59Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DAiugKC1u0Na1Y6PYjWxx2AeE7Tng4cYWB3AvPxYqOos%2525253D)|||
|[1045420](https://dev.azure.com/dnceng/public/_build/results?buildId=1045420)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49635](https://github.com/dotnet/runtime/pull/49635)|net6.0-Browser-Release-wasm-Mono_Release-normal-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-e56bd622f0114af4a6/System.Collections.Concurrent.Tests/console.7a8685f3.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T13%2525253A57%2525253A39Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D4gF3jitfjc9ymLDnqRZ2JZKumBH8Vww6m8eVXVC%2525252FyEw%2525253D)|||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-e56bd622f0114af4a6/System.Collections.Concurrent.Tests/600bb2a2-d01d-4d3e-9ee5-a51faed64166.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T13%2525253A57%2525253A39Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D4gF3jitfjc9ymLDnqRZ2JZKumBH8Vww6m8eVXVC%2525252FyEw%2525253D)|
|[1045420](https://dev.azure.com/dnceng/public/_build/results?buildId=1045420)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49635](https://github.com/dotnet/runtime/pull/49635)|net6.0-Browser-Release-wasm-Mono_Release-wasmtestonbrowser-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-e621e15fe64e4d2394/System.Collections.Concurrent.Tests/console.d74dfacb.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-07T13%2525253A57%2525253A40Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DMVGcFuI9hZj%2525252FeKwknCz%2525252ByfrL6Pb%2525252BGtpkAL7iym%2525252Bp580%2525253D)||||
|[1042619](https://dev.azure.com/dnceng/public/_build/results?buildId=1042619)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|Rolling|net5.0-Linux-Release-x64-Mono_release-SLES.15.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-heads-release-50-048ab36c301b4f458b/System.Collections.Concurrent.Tests/console.9c1d4203.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-06T00%2525253A45%2525253A36Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DKbPhmGQI5ebLP8KBzrujplKrHEdMTR%2525252BRNoJd3zCShOc%2525253D)|||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-heads-release-50-048ab36c301b4f458b/System.Collections.Concurrent.Tests/a18ac2d5-3408-4a15-894b-5edc8d5322c8.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-06T00%2525253A45%2525253A36Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DKbPhmGQI5ebLP8KBzrujplKrHEdMTR%2525252BRNoJd3zCShOc%2525253D)|
|[1041040](https://dev.azure.com/dnceng/public/_build/results?buildId=1041040)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49635](https://github.com/dotnet/runtime/pull/49635)|net6.0-Browser-Release-wasm-Mono_Release-normal-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-3ce70eb6f67a4a6eb4/System.Collections.Concurrent.Tests/console.59770a16.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-05T09%2525253A33%2525253A04Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DThqQZfZuZfvw3huuCCzKLRc%2525252BSWfQYYxpeJXAKoZfNdA%2525253D)||||
|[1041040](https://dev.azure.com/dnceng/public/_build/results?buildId=1041040)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49635](https://github.com/dotnet/runtime/pull/49635)|net6.0-Browser-Release-wasm-Mono_Release-wasmtestonbrowser-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-6d316523caa94b43b8/System.Collections.Concurrent.Tests/console.029c7c93.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-05T09%2525253A33%2525253A04Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D4%2525252BNj%2525252BoXAPgf16TuOvGdASuBj5a9HffqGgynw5JWM1f4%2525253D)||||
|[1041040](https://dev.azure.com/dnceng/public/_build/results?buildId=1041040)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49635](https://github.com/dotnet/runtime/pull/49635)|net6.0-Browser-Release-wasm-Mono_Release-normal-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-9d5d21ea67274a4f9d/System.Collections.Concurrent.Tests/console.c099c826.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-05T12%2525253A10%2525253A30Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DDgNvSXBtobyoWwkrwTtSq2MMhdnegvQqFR0U8rSGmJo%2525253D)||||
|[1041040](https://dev.azure.com/dnceng/public/_build/results?buildId=1041040)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49635](https://github.com/dotnet/runtime/pull/49635)|net6.0-Browser-Release-wasm-Mono_Release-wasmtestonbrowser-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-01e1cfb39a9f42628a/System.Collections.Concurrent.Tests/console.59f100ba.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-05T12%2525253A10%2525253A30Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DYnpKz7iSZ4PvYOHC8KtguZNd0yTsZUwPeuz8KmAFcYA%2525253D)||||
|[1041040](https://dev.azure.com/dnceng/public/_build/results?buildId=1041040)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49635](https://github.com/dotnet/runtime/pull/49635)|net6.0-Browser-Release-wasm-Mono_Release-normal-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-9ef2020a5224485384/System.Collections.Concurrent.Tests/console.fb491c44.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-05T13%2525253A56%2525253A34Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DSOAYh3V8Ln3hklX%2525252FzojpLhcW3juK19PIepwdBSA%2525252BsMY%2525253D)||||
|[1041040](https://dev.azure.com/dnceng/public/_build/results?buildId=1041040)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49635](https://github.com/dotnet/runtime/pull/49635)|net6.0-Browser-Release-wasm-Mono_Release-wasmtestonbrowser-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49635-merge-16e5cdde089747e29a/System.Collections.Concurrent.Tests/console.d0de0ebd.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-05T13%2525253A56%2525253A34Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DW84xhojaDHon3eon0MQa0FIuoqxjd8mroJGU7tKgeVg%2525253D)||||
|[1039858](https://dev.azure.com/dnceng/public/_build/results?buildId=1039858)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 43706](https://github.com/dotnet/runtime/pull/43706)|net6.0-Linux-Release-arm-CoreCLR_checked-(Alpine.312.Arm32.Open)Ubuntu.1804.ArmArch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:alpine-3.12-helix-arm32v7-20200908125213-5bece88|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-43706-merge-0991f9705d6942948b/System.Collections.Concurrent.Tests/console.d9bf0c7d.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-04T19%2525253A53%2525253A43Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DE65F1We3YtKAtBYU%2525252FGb%2525252FRSS891yKYv8XbLCbFjtYOf8%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-43706-merge-0991f9705d6942948b/System.Collections.Concurrent.Tests/core.1001.21?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-04T19%2525253A53%2525253A43Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DE65F1We3YtKAtBYU%2525252FGb%2525252FRSS891yKYv8XbLCbFjtYOf8%2525253D)|||
|[1039858](https://dev.azure.com/dnceng/public/_build/results?buildId=1039858)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 43706](https://github.com/dotnet/runtime/pull/43706)|net6.0-Linux-Release-arm-CoreCLR_checked-(Ubuntu.1804.Arm32.Open)Ubuntu.1804.Armarch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-18.04-helix-arm32v7-bfcd90a-20200121150440|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-43706-merge-4c99d9400f144427a4/System.Collections.Concurrent.Tests/console.c84cc306.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-04T19%2525253A55%2525253A11Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D5VzkrWNivcL7EY8H6%2525252B%2525252FhWyKtKLPkNQhFUwzaIxgYoOI%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-43706-merge-4c99d9400f144427a4/System.Collections.Concurrent.Tests/core.1001.23?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-04T19%2525253A55%2525253A11Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D5VzkrWNivcL7EY8H6%2525252B%2525252FhWyKtKLPkNQhFUwzaIxgYoOI%2525253D)|||
|[1038213](https://dev.azure.com/dnceng/public/_build/results?buildId=1038213)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49511](https://github.com/dotnet/runtime/pull/49511)|net6.0-Browser-Release-wasm-Mono_Release-normal-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49511-merge-ba72b9f43b514e1c93/System.Collections.Concurrent.Tests/console.9ea1688c.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-03T00%2525253A35%2525253A23Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DTsolav42H%2525252BiQ6soV%2525252F%2525252Fi1Zv%2525252BKlGu0A8jcQu21s4w2y3g%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49511-merge-ba72b9f43b514e1c93/System.Collections.Concurrent.Tests/core.1000.12255?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-03T00%2525253A35%2525253A23Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DTsolav42H%2525252BiQ6soV%2525252F%2525252Fi1Zv%2525252BKlGu0A8jcQu21s4w2y3g%2525253D)|[test results](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49511-merge-ba72b9f43b514e1c93/System.Collections.Concurrent.Tests/xharness-output/testResults.xml?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-03T00%2525253A35%2525253A23Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DTsolav42H%2525252BiQ6soV%2525252F%2525252Fi1Zv%2525252BKlGu0A8jcQu21s4w2y3g%2525253D)||
|[1038213](https://dev.azure.com/dnceng/public/_build/results?buildId=1038213)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49511](https://github.com/dotnet/runtime/pull/49511)|net6.0-Browser-Release-wasm-Mono_Release-normal-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49511-merge-b2df012787dd4d2285/System.Collections.Concurrent.Tests/console.913cd6bf.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-03T05%2525253A11%2525253A36Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DsU12Qr9TlO2puZnNNTJFZw%2525252FJH6LXIbqg8RUBf2OcKzw%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49511-merge-b2df012787dd4d2285/System.Collections.Concurrent.Tests/core.1000.1368?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-03T05%2525253A11%2525253A36Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DsU12Qr9TlO2puZnNNTJFZw%2525252FJH6LXIbqg8RUBf2OcKzw%2525253D)|[test results](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49511-merge-b2df012787dd4d2285/System.Collections.Concurrent.Tests/xharness-output/testResults.xml?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-04-03T05%2525253A11%2525253A36Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DsU12Qr9TlO2puZnNNTJFZw%2525252FJH6LXIbqg8RUBf2OcKzw%2525253D)||
|[1033540](https://dev.azure.com/dnceng/public/_build/results?buildId=1033540)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 48601](https://github.com/dotnet/runtime/pull/48601)|net6.0-windows-Debug-x64-CoreCLR_checked-Windows.10.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48601-merge-d947adca30b54b7989/System.Collections.Concurrent.Tests/console.d3483445.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-31T14%2525253A17%2525253A53Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DcxByVu6kJwED7XlHCB4xbTPSUXlINqZ21gh3%2525252BfURQLc%2525253D)|||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48601-merge-d947adca30b54b7989/System.Collections.Concurrent.Tests/77e2f011-2bbb-4404-bb51-dd885b00b6c7.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-31T14%2525253A17%2525253A53Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DcxByVu6kJwED7XlHCB4xbTPSUXlINqZ21gh3%2525252BfURQLc%2525253D)|
|[1027303](https://dev.azure.com/dnceng/public/_build/results?buildId=1027303)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49072](https://github.com/dotnet/runtime/pull/49072)|net6.0-Browser-Release-wasm-Mono_Release-normal-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49072-merge-111a38b2be9a4001b0/System.Collections.Concurrent.Tests/console.32b67e25.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-28T17%2525253A52%2525253A14Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dw%2525252BquKmFtcSb0YbfMB3NUYXhOIyerL6hXD%2525252BEuntVOiKc%2525253D)||||
|[1027303](https://dev.azure.com/dnceng/public/_build/results?buildId=1027303)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49072](https://github.com/dotnet/runtime/pull/49072)|net6.0-Browser-Release-wasm-Mono_Release-wasmtestonbrowser-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49072-merge-9417572304c14c40a6/System.Collections.Concurrent.Tests/console.d0ae357c.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-28T17%2525253A52%2525253A14Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DXMt%2525252F3bSIFV4DWDloN8fQJdE%2525252BhF28gEX5GQvALWtkT%2525252FA%2525253D)||||
|[1027001](https://dev.azure.com/dnceng/public/_build/results?buildId=1027001)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49072](https://github.com/dotnet/runtime/pull/49072)|net6.0-Browser-Release-wasm-Mono_Release-normal-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49072-merge-b68cbb53ef004c7ca2/System.Collections.Concurrent.Tests/console.1ad51e1c.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-28T12%2525253A35%2525253A31Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DC0iCjpffpk29c0CeM0U6eDZcSwftk5hzeq5RKE9LfCo%2525253D)|||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49072-merge-b68cbb53ef004c7ca2/System.Collections.Concurrent.Tests/adfcaaaa-58d0-40c2-b430-880a5d0241f3.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-28T12%2525253A35%2525253A31Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DC0iCjpffpk29c0CeM0U6eDZcSwftk5hzeq5RKE9LfCo%2525253D)|
|[1026754](https://dev.azure.com/dnceng/public/_build/results?buildId=1026754)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49256](https://github.com/dotnet/runtime/pull/49256)|net6.0-OSX-Debug-x64-CoreCLR_checked-OSX.1013.Amd64.Open|[console.log](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49256-merge-8961a1dfeb3f49b1af/System.Collections.Concurrent.Tests/console.fa0f61bc.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-28T01%2525253A28%2525253A59Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DZO6FIpPN7%2525252BzTLMQYTbd%2525252FF%2525252B%2525252F45CuXQ4OMqZaHYwrnD4s%2525253D)|||[runclient.py](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49256-merge-8961a1dfeb3f49b1af/System.Collections.Concurrent.Tests/9dc90140-fe04-44b3-b84f-12274bb1e2b6.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-28T01%2525253A28%2525253A59Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DZO6FIpPN7%2525252BzTLMQYTbd%2525252FF%2525252B%2525252F45CuXQ4OMqZaHYwrnD4s%2525253D)|
|[1026754](https://dev.azure.com/dnceng/public/_build/results?buildId=1026754)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49256](https://github.com/dotnet/runtime/pull/49256)|net6.0-Linux-Release-arm64-CoreCLR_checked-(Alpine.312.Arm64.Open)Ubuntu.1804.ArmArch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:alpine-3.12-helix-arm64v8-20200602002604-25f8a3e|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49256-merge-ea3591a1f1b24871b6/System.Collections.Concurrent.Tests/console.35ae230f.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-28T01%2525253A28%2525253A22Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DbIhPJ%2525252FZcF9JQMQtO10Zdd8MMcNvlgjg8CK6oklbEIhE%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49256-merge-ea3591a1f1b24871b6/System.Collections.Concurrent.Tests/core.1001.22?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-28T01%2525253A28%2525253A22Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DbIhPJ%2525252FZcF9JQMQtO10Zdd8MMcNvlgjg8CK6oklbEIhE%2525253D)|||
|[1026754](https://dev.azure.com/dnceng/public/_build/results?buildId=1026754)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49256](https://github.com/dotnet/runtime/pull/49256)|net6.0-Linux-Debug-x64-CoreCLR_checked-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49256-merge-6ddbbdbb37c0468baa/System.Collections.Concurrent.Tests/console.1c815644.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-28T01%2525253A35%2525253A16Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DSGj1PuHPJisMxRZIMHfxZeArhReO9jUZHrepGbjZg2M%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49256-merge-6ddbbdbb37c0468baa/System.Collections.Concurrent.Tests/core.1000.28187?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-28T01%2525253A35%2525253A16Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DSGj1PuHPJisMxRZIMHfxZeArhReO9jUZHrepGbjZg2M%2525253D)|||
|[1026754](https://dev.azure.com/dnceng/public/_build/results?buildId=1026754)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49256](https://github.com/dotnet/runtime/pull/49256)|net6.0-Linux-Debug-x64-CoreCLR_checked-(Alpine.312.Amd64.Open)ubuntu.1604.amd64.open@mcr.microsoft.com/dotnet-buildtools/prereqs:alpine-3.12-helix-20200602002622-e06dc59|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49256-merge-65b68c2c7fcd4d4a88/System.Collections.Concurrent.Tests/console.76b20502.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-28T01%2525253A43%2525253A27Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dzk%2525252FfJC4irMJDNIk8jHJStJXJGPb%2525252FGMSM3WlannkGQws%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49256-merge-65b68c2c7fcd4d4a88/System.Collections.Concurrent.Tests/core.1000.21?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-28T01%2525253A43%2525253A27Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dzk%2525252FfJC4irMJDNIk8jHJStJXJGPb%2525252FGMSM3WlannkGQws%2525253D)|||
|[1025834](https://dev.azure.com/dnceng/public/_build/results?buildId=1025834)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49260](https://github.com/dotnet/runtime/pull/49260)|net6.0-OSX-Debug-x64-Mono_release-OSX.1014.Amd64.Open|[console.log](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-783335307f894e588b/System.Collections.Concurrent.Tests/console.3354011b.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A16%2525253A46Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D%2525252FCIh7rjWoZxlTTmkSTs%2525252F0kWR93z4kuDZnVGvb%2525252FU6V2g%2525253D)|[core dump](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-783335307f894e588b/System.Collections.Concurrent.Tests/core.1369?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A16%2525253A46Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D%2525252FCIh7rjWoZxlTTmkSTs%2525252F0kWR93z4kuDZnVGvb%2525252FU6V2g%2525253D)|||
|[1025834](https://dev.azure.com/dnceng/public/_build/results?buildId=1025834)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49260](https://github.com/dotnet/runtime/pull/49260)|net6.0-OSX-Debug-x64-Mono_release-OSX.1015.Amd64.Open|[console.log](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-2efacd88ec5c41f692/System.Collections.Concurrent.Tests/console.2312a5ee.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A16%2525253A47Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DyBIkzjAszLtu%2525252FUkGzHu04OkxnfKnmvYlBFYt4VRdtJo%2525253D)|[core dump](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-2efacd88ec5c41f692/System.Collections.Concurrent.Tests/core.29623?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A16%2525253A47Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DyBIkzjAszLtu%2525252FUkGzHu04OkxnfKnmvYlBFYt4VRdtJo%2525253D)|||
|[1025834](https://dev.azure.com/dnceng/public/_build/results?buildId=1025834)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49260](https://github.com/dotnet/runtime/pull/49260)|net6.0-Linux-Debug-x64-Mono_release-(Centos.8.Amd64.Open)Ubuntu.1604.Amd64.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:centos-8-helix-20201229003624-c1bf759|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-f90ba428f1a54000a7/System.Collections.Concurrent.Tests/console.1b95d0b2.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A17%2525253A40Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dne%2525252BE1xNmdkrf7tQz8RH0BGJwt8kaq7I5Hk2QkJlR0ZI%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-f90ba428f1a54000a7/System.Collections.Concurrent.Tests/core.1000.25?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A17%2525253A40Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dne%2525252BE1xNmdkrf7tQz8RH0BGJwt8kaq7I5Hk2QkJlR0ZI%2525253D)|||
|[1025834](https://dev.azure.com/dnceng/public/_build/results?buildId=1025834)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49260](https://github.com/dotnet/runtime/pull/49260)|net6.0-Linux-Debug-x64-Mono_release-RedHat.7.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-f0834e388fb24420ab/System.Collections.Concurrent.Tests/console.a769368a.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A17%2525253A41Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DzYn7t%2525252FaCmLvQrRjxxiBYJ%2525252B5no34ugyeRdBNjFdDiu1o%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-f0834e388fb24420ab/System.Collections.Concurrent.Tests/core.1000.8190?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A17%2525253A41Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DzYn7t%2525252FaCmLvQrRjxxiBYJ%2525252B5no34ugyeRdBNjFdDiu1o%2525253D)|||
|[1025834](https://dev.azure.com/dnceng/public/_build/results?buildId=1025834)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49260](https://github.com/dotnet/runtime/pull/49260)|net6.0-Linux-Debug-x64-Mono_release-(Debian.10.Amd64.Open)ubuntu.1604.amd64.open@mcr.microsoft.com/dotnet-buildtools/prereqs:debian-10-helix-amd64-bfcd90a-20200121150006|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-b66e8d5b3c6448c694/System.Collections.Concurrent.Tests/console.d3cdca48.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A17%2525253A41Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DT8u3P6mQ3949TSZsPhPqkAMA8thrEFGr2N2xyd6C5iA%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-b66e8d5b3c6448c694/System.Collections.Concurrent.Tests/core.1000.23?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A17%2525253A41Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DT8u3P6mQ3949TSZsPhPqkAMA8thrEFGr2N2xyd6C5iA%2525253D)|||
|[1025834](https://dev.azure.com/dnceng/public/_build/results?buildId=1025834)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49260](https://github.com/dotnet/runtime/pull/49260)|net6.0-Linux-Debug-x64-Mono_release-Ubuntu.1604.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-cf209af355f146d2b2/System.Collections.Concurrent.Tests/console.a7280af7.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A17%2525253A42Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Db4VlFIX6FYGz6blKBhzBwKvXaOYAlvzt04Cd5BuGAKM%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-cf209af355f146d2b2/System.Collections.Concurrent.Tests/core.1000.15233?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A17%2525253A42Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Db4VlFIX6FYGz6blKBhzBwKvXaOYAlvzt04Cd5BuGAKM%2525253D)|||
|[1025834](https://dev.azure.com/dnceng/public/_build/results?buildId=1025834)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49260](https://github.com/dotnet/runtime/pull/49260)|net6.0-Linux-Debug-x64-Mono_release-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-ab7a010820784dd584/System.Collections.Concurrent.Tests/console.f82c9c3a.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A17%2525253A42Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DG8rAbqKBViTMqVLSwu0XhntEi8CzXu1Uspx5HY8Abtk%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-ab7a010820784dd584/System.Collections.Concurrent.Tests/core.1000.26582?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A17%2525253A42Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DG8rAbqKBViTMqVLSwu0XhntEi8CzXu1Uspx5HY8Abtk%2525253D)|||
|[1025834](https://dev.azure.com/dnceng/public/_build/results?buildId=1025834)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49260](https://github.com/dotnet/runtime/pull/49260)|net6.0-Linux-Debug-x64-Mono_release-SLES.15.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-18fd3dbb2c3d4571af/System.Collections.Concurrent.Tests/console.fd3441c9.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A17%2525253A43Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DqgImJ36aq7gEZrXjSNtlvl9ZMz52VnHRCJBcaq%2525252FZPQw%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-18fd3dbb2c3d4571af/System.Collections.Concurrent.Tests/core.1000.18102?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A17%2525253A43Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DqgImJ36aq7gEZrXjSNtlvl9ZMz52VnHRCJBcaq%2525252FZPQw%2525253D)|||
|[1025834](https://dev.azure.com/dnceng/public/_build/results?buildId=1025834)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49260](https://github.com/dotnet/runtime/pull/49260)|net6.0-Linux-Debug-x64-Mono_release-(Fedora.30.Amd64.Open)ubuntu.1604.amd64.open@mcr.microsoft.com/dotnet-buildtools/prereqs:fedora-30-helix-20200512010621-4f8cef7|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-b66ee9ee98f54bca8e/System.Collections.Concurrent.Tests/console.16bd6427.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A17%2525253A44Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DhVE8yUHziLi4dD%2525252BAd%2525252FWPUXeiQ4GHz8WS8LrZFGApSMA%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-b66ee9ee98f54bca8e/System.Collections.Concurrent.Tests/core.1000.23?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A17%2525253A44Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DhVE8yUHziLi4dD%2525252BAd%2525252FWPUXeiQ4GHz8WS8LrZFGApSMA%2525253D)|||
|[1025834](https://dev.azure.com/dnceng/public/_build/results?buildId=1025834)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49260](https://github.com/dotnet/runtime/pull/49260)|net6.0-windows-Debug-x64-Mono_release-Windows.81.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-36abc307b18a41e59b/System.Collections.Concurrent.Tests/console.83cd51ea.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A27%2525253A51Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DQjX5kOoIXGikk11Mh%2525252BeFnU%2525252B%2525252FY%2525252B%2525252BdUL5he1gjAutgNrc%2525253D)||||
|[1025834](https://dev.azure.com/dnceng/public/_build/results?buildId=1025834)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49260](https://github.com/dotnet/runtime/pull/49260)|net6.0-windows-Debug-x64-Mono_release-Windows.10.Amd64.Server19H1.ES.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49260-merge-46225614695646ec86/System.Collections.Concurrent.Tests/console.b7c62f44.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T12%2525253A27%2525253A52Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DScmgGGt4K5x9YKspf1rOZgH%2525252BLjYEPUKVTPAWCtFM3mo%2525253D)||||
|[1025677](https://dev.azure.com/dnceng/public/_build/results?buildId=1025677)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49257](https://github.com/dotnet/runtime/pull/49257)|net6.0-windows-Debug-x64-CoreCLR_checked-Windows.10.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49257-merge-eeaea8e2f53645e2ad/System.Collections.Concurrent.Tests/console.af9f88df.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T05%2525253A34%2525253A38Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dr65OcrhT5iRBp1FRD5APYVcMSOUsnrIUu61N%2525252BQha5Rc%2525253D)||||
|[1025677](https://dev.azure.com/dnceng/public/_build/results?buildId=1025677)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49257](https://github.com/dotnet/runtime/pull/49257)|net6.0-windows-Release-x86-CoreCLR_checked-Windows.10.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49257-merge-ff3a300eede04686bb/System.Collections.Concurrent.Tests/console.eb99967f.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T05%2525253A35%2525253A04Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dtv2NMZXBmEJolNfxNqfeQf0vEMNMN2DJyCOLqly%2525252FiQU%2525253D)||||
|[1025677](https://dev.azure.com/dnceng/public/_build/results?buildId=1025677)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49257](https://github.com/dotnet/runtime/pull/49257)|net6.0-OSX-Debug-x64-CoreCLR_checked-OSX.1013.Amd64.Open|[console.log](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49257-merge-107fae19fb11429dac/System.Collections.Concurrent.Tests/console.a027cfd0.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T05%2525253A46%2525253A33Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DFITOpqx4XnI4qwnTMtd5yNWJOyEIXYgEeOH45LCG3Qk%2525253D)||||
|[1025677](https://dev.azure.com/dnceng/public/_build/results?buildId=1025677)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49257](https://github.com/dotnet/runtime/pull/49257)|net6.0-Linux-Release-arm-CoreCLR_checked-(Alpine.312.Arm32.Open)Ubuntu.1804.ArmArch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:alpine-3.12-helix-arm32v7-20200908125213-5bece88|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49257-merge-40d422b49d3f4ffe80/System.Collections.Concurrent.Tests/console.964fbe4d.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T05%2525253A50%2525253A23Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DIetvrvnM83xYeuOAdgDUPvTGBsw11kVAwL7sZ89V6iE%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49257-merge-40d422b49d3f4ffe80/System.Collections.Concurrent.Tests/core.1001.21?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T05%2525253A50%2525253A23Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DIetvrvnM83xYeuOAdgDUPvTGBsw11kVAwL7sZ89V6iE%2525253D)|||
|[1025677](https://dev.azure.com/dnceng/public/_build/results?buildId=1025677)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49257](https://github.com/dotnet/runtime/pull/49257)|net6.0-Linux-Release-arm-CoreCLR_checked-(Ubuntu.1804.Arm32.Open)Ubuntu.1804.Armarch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-18.04-helix-arm32v7-bfcd90a-20200121150440|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49257-merge-e914822b1b70411f80/System.Collections.Concurrent.Tests/console.2672c8ea.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T05%2525253A50%2525253A32Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DY7INdlTzqiJIXxFWxZFtGsiMN2IMzyuDDn0R%2525252BbOlnvU%2525253D)||||
|[1025677](https://dev.azure.com/dnceng/public/_build/results?buildId=1025677)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49257](https://github.com/dotnet/runtime/pull/49257)|net6.0-Linux-Release-arm64-CoreCLR_checked-(Alpine.312.Arm64.Open)Ubuntu.1804.ArmArch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:alpine-3.12-helix-arm64v8-20200602002604-25f8a3e|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49257-merge-007d240f322f4aa0a8/System.Collections.Concurrent.Tests/console.10ee8795.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T05%2525253A51%2525253A08Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D2AhZWG9bxZK%2525252B5U0w6PdVjCYZdvmv1h42ell%2525252FqRp7lNs%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49257-merge-007d240f322f4aa0a8/System.Collections.Concurrent.Tests/core.1001.21?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T05%2525253A51%2525253A08Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D2AhZWG9bxZK%2525252B5U0w6PdVjCYZdvmv1h42ell%2525252FqRp7lNs%2525253D)||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49257-merge-007d240f322f4aa0a8/System.Collections.Concurrent.Tests/1fdf7d71-d569-4a47-92d0-cd0967009c12.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T05%2525253A51%2525253A08Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D2AhZWG9bxZK%2525252B5U0w6PdVjCYZdvmv1h42ell%2525252FqRp7lNs%2525253D)|
|[1025677](https://dev.azure.com/dnceng/public/_build/results?buildId=1025677)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49257](https://github.com/dotnet/runtime/pull/49257)|net6.0-Linux-Debug-x64-CoreCLR_checked-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49257-merge-d666932561e644b992/System.Collections.Concurrent.Tests/console.df3c455c.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T06%2525253A02%2525253A37Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DByCS2TAWkzSWfaj%2525252FC0N9hB6OEljw6eNhG4tGMvYir%2525252FU%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49257-merge-d666932561e644b992/System.Collections.Concurrent.Tests/core.1000.3055?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T06%2525253A02%2525253A37Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DByCS2TAWkzSWfaj%2525252FC0N9hB6OEljw6eNhG4tGMvYir%2525252FU%2525253D)|||
|[1025677](https://dev.azure.com/dnceng/public/_build/results?buildId=1025677)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49257](https://github.com/dotnet/runtime/pull/49257)|net6.0-Linux-Debug-x64-CoreCLR_checked-(Alpine.312.Amd64.Open)ubuntu.1604.amd64.open@mcr.microsoft.com/dotnet-buildtools/prereqs:alpine-3.12-helix-20200602002622-e06dc59|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49257-merge-63d6bad1f5a54787ac/System.Collections.Concurrent.Tests/console.5b78d89c.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T06%2525253A09%2525253A59Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D1%2525252BEj928CR%2525252FasltQJT2KECfDyQWGqJLQWBgMmdFdfA5A%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49257-merge-63d6bad1f5a54787ac/System.Collections.Concurrent.Tests/core.1000.21?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-26T06%2525253A09%2525253A59Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D1%2525252BEj928CR%2525252FasltQJT2KECfDyQWGqJLQWBgMmdFdfA5A%2525253D)|||
|[1024591](https://dev.azure.com/dnceng/public/_build/results?buildId=1024591)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 48601](https://github.com/dotnet/runtime/pull/48601)|net6.0-windows-Debug-x64-CoreCLR_checked-Windows.10.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48601-merge-7bba3f65c3ed4c9fae/System.Collections.Concurrent.Tests/console.e739d43f.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-25T15%2525253A29%2525253A08Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DCM85nPPgKrvJ2lWpodhrwwXpcsXAHLPzyfY8VHYYE0o%2525253D)|||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48601-merge-7bba3f65c3ed4c9fae/System.Collections.Concurrent.Tests/b4d88ec6-9bbe-4735-8806-1afa90a49b4b.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-25T15%2525253A29%2525253A08Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DCM85nPPgKrvJ2lWpodhrwwXpcsXAHLPzyfY8VHYYE0o%2525253D)|
|[1024540](https://dev.azure.com/dnceng/public/_build/results?buildId=1024540)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 47269](https://github.com/dotnet/runtime/pull/47269)|net6.0-Linux-Release-arm-CoreCLR_checked-(Ubuntu.1804.Arm32.Open)Ubuntu.1804.Armarch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:ubuntu-18.04-helix-arm32v7-bfcd90a-20200121150440|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-47269-merge-3cd69f0f2f4a434181/System.Collections.Concurrent.Tests/console.0badc198.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-25T14%2525253A29%2525253A32Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D1p%2525252BykK7A18qOraMP6wl0vPLf2At4w5moxss0bb8ssNM%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-47269-merge-3cd69f0f2f4a434181/System.Collections.Concurrent.Tests/core.1001.23?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-25T14%2525253A29%2525253A32Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D1p%2525252BykK7A18qOraMP6wl0vPLf2At4w5moxss0bb8ssNM%2525253D)||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-47269-merge-3cd69f0f2f4a434181/System.Collections.Concurrent.Tests/aaa7d526-0b9c-496d-b6f1-cd5b11c85908.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-25T14%2525253A29%2525253A32Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D1p%2525252BykK7A18qOraMP6wl0vPLf2At4w5moxss0bb8ssNM%2525253D)|
|[1024540](https://dev.azure.com/dnceng/public/_build/results?buildId=1024540)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 47269](https://github.com/dotnet/runtime/pull/47269)|net6.0-Linux-Release-arm64-CoreCLR_checked-(Alpine.312.Arm64.Open)Ubuntu.1804.ArmArch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:alpine-3.12-helix-arm64v8-20200602002604-25f8a3e|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-47269-merge-e9dd2df8e4b44b21a5/System.Collections.Concurrent.Tests/console.ad116978.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-25T14%2525253A30%2525253A01Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dv5xRFiTUAa8suP67%2525252BAiysWVd6WEAsMapMx9cwJrZFjE%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-47269-merge-e9dd2df8e4b44b21a5/System.Collections.Concurrent.Tests/core.1001.21?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-25T14%2525253A30%2525253A01Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253Dv5xRFiTUAa8suP67%2525252BAiysWVd6WEAsMapMx9cwJrZFjE%2525253D)|||
|[1024540](https://dev.azure.com/dnceng/public/_build/results?buildId=1024540)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 47269](https://github.com/dotnet/runtime/pull/47269)|net6.0-Linux-Release-arm-CoreCLR_checked-(Alpine.312.Arm32.Open)Ubuntu.1804.ArmArch.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:alpine-3.12-helix-arm32v7-20200908125213-5bece88|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-47269-merge-79165b9f41d2476990/System.Collections.Concurrent.Tests/console.691b5ced.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-25T14%2525253A33%2525253A34Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DJPU1wi35r8Tfd9l%2525252Byc2xy4jJ7VuSuoRocXI3nw3jH7k%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-47269-merge-79165b9f41d2476990/System.Collections.Concurrent.Tests/core.1001.21?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-25T14%2525253A33%2525253A34Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DJPU1wi35r8Tfd9l%2525252Byc2xy4jJ7VuSuoRocXI3nw3jH7k%2525253D)|||
|[1024540](https://dev.azure.com/dnceng/public/_build/results?buildId=1024540)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 47269](https://github.com/dotnet/runtime/pull/47269)|net6.0-windows-Release-x86-CoreCLR_checked-Windows.10.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-47269-merge-4e109542b2db40de8f/System.Collections.Concurrent.Tests/console.b8e259dc.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-25T14%2525253A40%2525253A53Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D3W3m90iIV0EK%2525252FQIczKoKYsfBXJTP0tVtNfnYUG2RPe8%2525253D)||||
|[1022669](https://dev.azure.com/dnceng/public/_build/results?buildId=1022669)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49072](https://github.com/dotnet/runtime/pull/49072)|net6.0-Browser-Release-wasm-Mono_Release-normal-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49072-merge-cf95207ed75b4a9e85/System.Collections.Concurrent.Tests/console.75d96281.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-24T12%2525253A54%2525253A22Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D2kkr4RgbjsSnh3yaD%2525252F%2525252FxZEoulQOSJPjfXfRun%2525252FGiuJ4%2525253D)||||
|[1022669](https://dev.azure.com/dnceng/public/_build/results?buildId=1022669)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 49072](https://github.com/dotnet/runtime/pull/49072)|net6.0-Browser-Release-wasm-Mono_Release-wasmtestonbrowser-Ubuntu.1804.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-49072-merge-4b41f0020c764d6ebe/System.Collections.Concurrent.Tests/console.8b6813cc.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-24T12%2525253A54%2525253A21Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DMGG2zWPkqQPihnoL5q6iw%2525252Fm9oKai7Kub7JnFI4GRYt4%2525253D)||||
|[1019817](https://dev.azure.com/dnceng/public/_build/results?buildId=1019817)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 47864](https://github.com/dotnet/runtime/pull/47864)|net6.0-OSX-Debug-arm64-Mono_release-OSX.1100.ARM64.Open|[console.log](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-47864-merge-39128b90aa9b4eeea3/System.Collections.Concurrent.Tests/console.eb752cfa.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-23T00%2525253A58%2525253A09Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DLZvXPWfSCvX4tm3mUyB5dg90y8%2525252FC4OaSzWG68iDnM%2525252Fc%2525253D)|[core dump](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-47864-merge-39128b90aa9b4eeea3/System.Collections.Concurrent.Tests/core.69513?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-23T00%2525253A58%2525253A09Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DLZvXPWfSCvX4tm3mUyB5dg90y8%2525252FC4OaSzWG68iDnM%2525252Fc%2525253D)||[runclient.py](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-47864-merge-39128b90aa9b4eeea3/System.Collections.Concurrent.Tests/cdb68aac-8bc6-453c-a71f-3450721b28b8.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-23T00%2525253A58%2525253A09Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DLZvXPWfSCvX4tm3mUyB5dg90y8%2525252FC4OaSzWG68iDnM%2525252Fc%2525253D)|
|[1017879](https://dev.azure.com/dnceng/public/_build/results?buildId=1017879)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 48923](https://github.com/dotnet/runtime/pull/48923)|net6.0-windows-Release-x86-CoreCLR_checked-Windows.10.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48923-merge-0560eff46ca14cdb9c/System.Collections.Concurrent.Tests/console.6f9c3193.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-21T23%2525253A53%2525253A42Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DK709QaSfYy8B8erbY4%2525252FZSUzYZ4C1jglPATaqSnemER8%2525253D)|||[runclient.py](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48923-merge-0560eff46ca14cdb9c/System.Collections.Concurrent.Tests/92370ec8-1d1c-47fd-8af3-ff8cb01746ed.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-21T23%2525253A53%2525253A42Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DK709QaSfYy8B8erbY4%2525252FZSUzYZ4C1jglPATaqSnemER8%2525253D)|
|[1016780](https://dev.azure.com/dnceng/public/_build/results?buildId=1016780)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 48908](https://github.com/dotnet/runtime/pull/48908)|net6.0-OSX-Debug-x64-Mono_release-OSX.1014.Amd64.Open|[console.log](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48908-merge-2b45e0c6dbe84cba92/System.Collections.Concurrent.Tests/console.f3125e54.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-21T14%2525253A11%2525253A47Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D%2525252B4gPWy%2525252BOCQ2%2525252F2wtrNz%2525252FV2NXWlQWOXZ0GLmo8WidOFFA%2525253D)|||[runclient.py](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48908-merge-2b45e0c6dbe84cba92/System.Collections.Concurrent.Tests/105376b0-e069-4d9b-bf8c-4fe26f9cf311.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-21T14%2525253A11%2525253A47Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D%2525252B4gPWy%2525252BOCQ2%2525252F2wtrNz%2525252FV2NXWlQWOXZ0GLmo8WidOFFA%2525253D)|
|[1016780](https://dev.azure.com/dnceng/public/_build/results?buildId=1016780)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 48908](https://github.com/dotnet/runtime/pull/48908)|net6.0-OSX-Debug-x64-Mono_release-OSX.1015.Amd64.Open|[console.log](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48908-merge-e70db87982964cab86/System.Collections.Concurrent.Tests/console.8a9aa89f.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-21T14%2525253A11%2525253A49Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DGZuxcFqp99UMMO1JQCgUmAsyrVZ%2525252B%2525252FrQNkAf1X%2525252BvN%2525252BKc%2525253D)|[core dump](https://helixre107v0xdeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48908-merge-e70db87982964cab86/System.Collections.Concurrent.Tests/core.10612?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-21T14%2525253A11%2525253A49Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DGZuxcFqp99UMMO1JQCgUmAsyrVZ%2525252B%2525252FrQNkAf1X%2525252BvN%2525252BKc%2525253D)|||
|[1016780](https://dev.azure.com/dnceng/public/_build/results?buildId=1016780)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 48908](https://github.com/dotnet/runtime/pull/48908)|net6.0-Linux-Debug-x64-Mono_release-(Centos.8.Amd64.Open)Ubuntu.1604.Amd64.Open@mcr.microsoft.com/dotnet-buildtools/prereqs:centos-8-helix-20201229003624-c1bf759|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48908-merge-b6e59375849140d5bc/System.Collections.Concurrent.Tests/console.7952c3f5.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-21T14%2525253A18%2525253A20Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D5Xj0gMLvDgRVKzRu7ns6KxpeikmeE%2525252BlljVvCYwiqlf8%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48908-merge-b6e59375849140d5bc/System.Collections.Concurrent.Tests/core.1000.25?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-21T14%2525253A18%2525253A20Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D5Xj0gMLvDgRVKzRu7ns6KxpeikmeE%2525252BlljVvCYwiqlf8%2525253D)|||
|[1016780](https://dev.azure.com/dnceng/public/_build/results?buildId=1016780)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 48908](https://github.com/dotnet/runtime/pull/48908)|net6.0-Linux-Debug-x64-Mono_release-RedHat.7.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48908-merge-d19302894cf74726b1/System.Collections.Concurrent.Tests/console.61224e79.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-21T14%2525253A18%2525253A21Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DjuAIRX18pRXGPiLwi6rkBR%2525252BBu%2525252BSbUgUgmmlmbYOACK0%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48908-merge-d19302894cf74726b1/System.Collections.Concurrent.Tests/core.1000.14534?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-21T14%2525253A18%2525253A21Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DjuAIRX18pRXGPiLwi6rkBR%2525252BBu%2525252BSbUgUgmmlmbYOACK0%2525253D)|||
|[1016780](https://dev.azure.com/dnceng/public/_build/results?buildId=1016780)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 48908](https://github.com/dotnet/runtime/pull/48908)|net6.0-Linux-Debug-x64-Mono_release-(Debian.10.Amd64.Open)ubuntu.1604.amd64.open@mcr.microsoft.com/dotnet-buildtools/prereqs:debian-10-helix-amd64-bfcd90a-20200121150006|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48908-merge-f0db635fef984ae5a3/System.Collections.Concurrent.Tests/console.2cd938c0.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-21T14%2525253A18%2525253A22Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DkpGYTmiosMf25R8mtIOy%2525252FnzHDeCjbqdO7rSVFmP8IOc%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48908-merge-f0db635fef984ae5a3/System.Collections.Concurrent.Tests/core.1000.23?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-21T14%2525253A18%2525253A22Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253DkpGYTmiosMf25R8mtIOy%2525252FnzHDeCjbqdO7rSVFmP8IOc%2525253D)|||
|[1016780](https://dev.azure.com/dnceng/public/_build/results?buildId=1016780)|[runtime](https://dnceng.visualstudio.com/public/_build?definitionId=686)|[PR 48908](https://github.com/dotnet/runtime/pull/48908)|net6.0-Linux-Debug-x64-Mono_release-Ubuntu.1604.Amd64.Open|[console.log](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48908-merge-7b457c82769044eea2/System.Collections.Concurrent.Tests/console.949729aa.log?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-21T14%2525253A18%2525253A22Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D0p8BGMdppcAJgUqjUEKHXQtJ61%2525252BxvPUkGcTupF7Te6M%2525253D)|[core dump](https://helixre8s23ayyeko0k025g8.blob.core.windows.net/dotnet-runtime-refs-pull-48908-merge-7b457c82769044eea2/System.Collections.Concurrent.Tests/core.1000.12063?%3F%253F%25253Fsv%25253D2019-07-07%252526se%25253D2021-03-21T14%2525253A18%2525253A22Z%252526sr%25253Dc%252526sp%25253Drl%252526sig%25253D0p8BGMdppcAJgUqjUEKHXQtJ61%2525252BxvPUkGcTupF7Te6M%2525253D)|||
Displaying 100 of 143 results
Build Result Summary
|Day Hit Count|Week Hit Count|Month Hit Count|
|---|---|---|
|1|1|16|
",1,system collections concurrent tests crashing in ci build configuration linux release coreclr release redhat open how to debug dump md core console log runfo tracking issue build definition kind run name console core dump test results run client displaying of results build result summary day hit count week hit count month hit count ,1
299676,22617892732.0,IssuesEvent,2022-06-30 01:20:09,devstream-io/devstream,https://api.github.com/repos/devstream-io/devstream,closed,:open_book: `Docs(Translation)`: Project Layout - Chinese Version,documentation good first issue,"### What should be changed?
Now we have the document [Project Layout](https://github.com/devstream-io/devstream/blob/main/docs/development/project-layout.md), so the corresponding Chinese version can also start writing (translation) [here](https://github.com/devstream-io/devstream/blob/main/docs/development/project-layout.zh.md).
---
The translation does not need to correspond strictly to each sentence, it just needs to express the same meaning. ""Meaningful translation"" is recommended . Please feel free to ask me when you have any questions.",1.0,":open_book: `Docs(Translation)`: Project Layout - Chinese Version - ### What should be changed?
Now we have the document [Project Layout](https://github.com/devstream-io/devstream/blob/main/docs/development/project-layout.md), so the corresponding Chinese version can also start writing (translation) [here](https://github.com/devstream-io/devstream/blob/main/docs/development/project-layout.zh.md).
---
The translation does not need to correspond strictly to each sentence, it just needs to express the same meaning. ""Meaningful translation"" is recommended . Please feel free to ask me when you have any questions.",0, open book docs translation project layout chinese version what should be changed now we have the document so the corresponding chinese version can also start writing translation the translation does not need to correspond strictly to each sentence it just needs to express the same meaning meaningful translation is recommended please feel free to ask me when you have any questions ,0
1721,19093452392.0,IssuesEvent,2021-11-29 14:29:22,web3-storage/web3.storage,https://api.github.com/repos/web3-storage/web3.storage,closed,[research] Cannot retrieve file through web gateway after upload,kind/bug P1 reliability-performance-sprint,"My web3.storage/files interface includes this link:
https://bafybeia76qn4cxtqilptkqkytxgsgb7akz2tintvadmcbgxwfi2si7mbna.ipfs.dweb.link/
Alas, that URL always 502's for me. I suspect my file size is a factor.",True,"[research] Cannot retrieve file through web gateway after upload - My web3.storage/files interface includes this link:
https://bafybeia76qn4cxtqilptkqkytxgsgb7akz2tintvadmcbgxwfi2si7mbna.ipfs.dweb.link/
Alas, that URL always 502's for me. I suspect my file size is a factor.",1, cannot retrieve file through web gateway after upload my storage files interface includes this link alas that url always s for me i suspect my file size is a factor ,1
358,6902952932.0,IssuesEvent,2017-11-26 04:34:38,willamm/WaveSimulator,https://api.github.com/repos/willamm/WaveSimulator,closed,Handle assert when adding shape out of bounds,safety / reliability TODO,"Should handle gracefully instead of crashing, change to using an exception instead of an assert?",True,"Handle assert when adding shape out of bounds - Should handle gracefully instead of crashing, change to using an exception instead of an assert?",1,handle assert when adding shape out of bounds should handle gracefully instead of crashing change to using an exception instead of an assert ,1
554065,16388343920.0,IssuesEvent,2021-05-17 13:23:18,webcompat/web-bugs,https://api.github.com/repos/webcompat/web-bugs,closed,www.polygon.com - site is not usable,browser-firefox-ios os-ios priority-normal,"
**URL**: https://www.polygon.com/
**Browser / Version**: Firefox iOS 33.1
**Operating System**: iOS 14.5.1
**Tested Another Browser**: Yes Safari
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
When accessing polygon website, no options are visible when clicking on menus such as Gaming.
View the screenshotBrowser Configuration
None
_From [webcompat.com](https://webcompat.com/) with ❤️_",1.0,"www.polygon.com - site is not usable -
**URL**: https://www.polygon.com/
**Browser / Version**: Firefox iOS 33.1
**Operating System**: iOS 14.5.1
**Tested Another Browser**: Yes Safari
**Problem type**: Site is not usable
**Description**: Page not loading correctly
**Steps to Reproduce**:
When accessing polygon website, no options are visible when clicking on menus such as Gaming.
View the screenshotBrowser Configuration
None
_From [webcompat.com](https://webcompat.com/) with ❤️_",0, site is not usable url browser version firefox ios operating system ios tested another browser yes safari problem type site is not usable description page not loading correctly steps to reproduce when accessing polygon website no options are visible when clicking on menus such as gaming view the screenshot img alt screenshot src browser configuration none from with ❤️ ,0
108,3969784793.0,IssuesEvent,2016-05-04 01:59:56,RxBroadcast/RxBroadcast,https://api.github.com/repos/RxBroadcast/RxBroadcast,closed,Prevent duplicate messages in UDP broadcast,reliability udp,"> [UDP provides] no guarantee of delivery, ordering, or **duplicate protection**.
Emphasis in the above quote is mine. `UdpBroadcast` class needs to provide duplicate protection to satisfy the ""No Duplicates"" property outlined in §8.1.2.2.",True,"Prevent duplicate messages in UDP broadcast - > [UDP provides] no guarantee of delivery, ordering, or **duplicate protection**.
Emphasis in the above quote is mine. `UdpBroadcast` class needs to provide duplicate protection to satisfy the ""No Duplicates"" property outlined in §8.1.2.2.",1,prevent duplicate messages in udp broadcast no guarantee of delivery ordering or duplicate protection emphasis in the above quote is mine udpbroadcast class needs to provide duplicate protection to satisfy the no duplicates property outlined in sect ,1
7286,2610361906.0,IssuesEvent,2015-02-26 19:57:01,chrsmith/scribefire-chrome,https://api.github.com/repos/chrsmith/scribefire-chrome,opened,cannot change blog password on Scribefire,auto-migrated Priority-Medium Type-Defect,"```
The password on my wordpress blog has been changed, but I cannot find the way
to make the change on scribefire. (I keep getting ""username/password error""
boxes.
What browser are you using?
Firefox
What version of ScribeFire are you running?
Classic 4.001
```
-----
Original issue reported on code.google.com by `snicke...@juno.com` on 16 May 2012 at 1:48",1.0,"cannot change blog password on Scribefire - ```
The password on my wordpress blog has been changed, but I cannot find the way
to make the change on scribefire. (I keep getting ""username/password error""
boxes.
What browser are you using?
Firefox
What version of ScribeFire are you running?
Classic 4.001
```
-----
Original issue reported on code.google.com by `snicke...@juno.com` on 16 May 2012 at 1:48",0,cannot change blog password on scribefire the password on my wordpress blog has been changed but i cannot find the way to make the change on scribefire i keep getting username password error boxes what browser are you using firefox what version of scribefire are you running classic original issue reported on code google com by snicke juno com on may at ,0
356573,10594823244.0,IssuesEvent,2019-10-09 17:37:10,GlitchEnzo/NuGetForUnity,https://api.github.com/repos/GlitchEnzo/NuGetForUnity,closed,Sanitize/Validate Source URLs,high priority,"I mistakenly copy&pasted a source URL with an additional space at the end - this made requests fail. It wasn't difficult to figure out what happened, but it'd be great if those trailing spaces had been removed automatically, or a visible error message/dialog warned users when the URL didn't finish with a ""/"" character.
- **NuGetForUnity Version:** 1.1.0
",1.0,"Sanitize/Validate Source URLs - I mistakenly copy&pasted a source URL with an additional space at the end - this made requests fail. It wasn't difficult to figure out what happened, but it'd be great if those trailing spaces had been removed automatically, or a visible error message/dialog warned users when the URL didn't finish with a ""/"" character.
- **NuGetForUnity Version:** 1.1.0
",0,sanitize validate source urls i mistakenly copy pasted a source url with an additional space at the end this made requests fail it wasn t difficult to figure out what happened but it d be great if those trailing spaces had been removed automatically or a visible error message dialog warned users when the url didn t finish with a character nugetforunity version ,0
1092,13041829055.0,IssuesEvent,2020-07-28 21:08:34,mozilla/hubs,https://api.github.com/repos/mozilla/hubs,closed,Move from node-sass to dart-sass,enhancement reliability,"I think we could improve developer experience by moving from `node-sass` to `dart-sass`. `node-sass` works fine, but uses a native module using`node-gyp` that requires Python 2.7 to be installed. `node-dart` is a drop in replacement and doesn't have this dependency.
",True,"Move from node-sass to dart-sass - I think we could improve developer experience by moving from `node-sass` to `dart-sass`. `node-sass` works fine, but uses a native module using`node-gyp` that requires Python 2.7 to be installed. `node-dart` is a drop in replacement and doesn't have this dependency.
",1,move from node sass to dart sass i think we could improve developer experience by moving from node sass to dart sass node sass works fine but uses a native module using node gyp that requires python to be installed node dart is a drop in replacement and doesn t have this dependency ,1
168112,6362221262.0,IssuesEvent,2017-07-31 14:34:11,jaredpalmer/formik,https://api.github.com/repos/jaredpalmer/formik,closed,RFC: isValid prop for the whole form,enhancement priority: medium,"Hey guys!
First of all, we've really enjoyed using Formik in our products. It's made form submission so much easier. One pain point that we have in our usage though is that there's no easy way to always tell if the whole form is valid.
This would be nice for doing something like disabling the submit button based off one prop instead of having to check for other things. For example, as far as I can tell nothing shows up in the errors array until at least one field has been typed in. So although the form is invalid, you can't tell that just by checking the errors array.
Just wanted to see if anyone else thought something like that would be a good idea... Thanks again for the awesome library!",1.0,"RFC: isValid prop for the whole form - Hey guys!
First of all, we've really enjoyed using Formik in our products. It's made form submission so much easier. One pain point that we have in our usage though is that there's no easy way to always tell if the whole form is valid.
This would be nice for doing something like disabling the submit button based off one prop instead of having to check for other things. For example, as far as I can tell nothing shows up in the errors array until at least one field has been typed in. So although the form is invalid, you can't tell that just by checking the errors array.
Just wanted to see if anyone else thought something like that would be a good idea... Thanks again for the awesome library!",0,rfc isvalid prop for the whole form hey guys first of all we ve really enjoyed using formik in our products it s made form submission so much easier one pain point that we have in our usage though is that there s no easy way to always tell if the whole form is valid this would be nice for doing something like disabling the submit button based off one prop instead of having to check for other things for example as far as i can tell nothing shows up in the errors array until at least one field has been typed in so although the form is invalid you can t tell that just by checking the errors array just wanted to see if anyone else thought something like that would be a good idea thanks again for the awesome library ,0
251642,27194795250.0,IssuesEvent,2023-02-20 03:34:18,WFS-Mend/vtrade-frontend-legacy,https://api.github.com/repos/WFS-Mend/vtrade-frontend-legacy,opened,async-2.6.1.tgz: 5 vulnerabilities (highest severity is: 9.1),security vulnerability," Vulnerable Library - async-2.6.1.tgz
Higher-order functions and common patterns for asynchronous code
Versions of lodash lower than 4.17.12 are vulnerable to Prototype Pollution. The function defaultsDeep could be tricked into adding or modifying properties of Object.prototype using a constructor payload.
In Async before 2.6.4 and 3.x before 3.2.2, a malicious user can obtain privileges via the mapValues() method, aka lib/internal/iterator.js createObjectIterator prototype pollution.
Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions.
Mend Note: After conducting further research, Mend has determined that CVE-2020-28500 only affects environments with versions 4.0.0 to 4.17.20 of Lodash.
Versions of lodash lower than 4.17.12 are vulnerable to Prototype Pollution. The function defaultsDeep could be tricked into adding or modifying properties of Object.prototype using a constructor payload.
In Async before 2.6.4 and 3.x before 3.2.2, a malicious user can obtain privileges via the mapValues() method, aka lib/internal/iterator.js createObjectIterator prototype pollution.
Lodash versions prior to 4.17.21 are vulnerable to Regular Expression Denial of Service (ReDoS) via the toNumber, trim and trimEnd functions.
Mend Note: After conducting further research, Mend has determined that CVE-2020-28500 only affects environments with versions 4.0.0 to 4.17.20 of Lodash.
:rescue_worker_helmet: Automatic Remediation is available for this issue
***
:rescue_worker_helmet: Automatic Remediation is available for this issue.
",0,async tgz vulnerabilities highest severity is vulnerable library async tgz higher order functions and common patterns for asynchronous code library home page a href path to dependency file package json path to vulnerable library node modules grunt retire node modules form data node modules async package json node modules async package json found in head commit a href vulnerabilities cve severity cvss dependency type fixed in async version remediation available critical lodash tgz transitive high async tgz direct high lodash tgz transitive high lodash tgz transitive medium lodash tgz transitive details cve vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file package json path to vulnerable library node modules lodash package json dependency hierarchy async tgz root library x lodash tgz vulnerable library found in head commit a href found in base branch master vulnerability details versions of lodash lower than are vulnerable to prototype pollution the function defaultsdeep could be tricked into adding or modifying properties of object prototype using a constructor payload publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash direct dependency fix resolution async rescue worker helmet automatic remediation is available for this issue cve vulnerable library async tgz higher order functions and common patterns for asynchronous code library home page a href path to dependency file package json path to vulnerable library node modules grunt retire node modules form data node modules async package json node modules async package json dependency hierarchy x async tgz 
vulnerable library found in head commit a href found in base branch master vulnerability details in async before and x before a malicious user can obtain privileges via the mapvalues method aka lib internal iterator js createobjectiterator prototype pollution publish date url a href cvss score details base score metrics exploitability metrics attack vector local attack complexity low privileges required none user interaction required scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution rescue worker helmet automatic remediation is available for this issue cve vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file package json path to vulnerable library node modules lodash package json dependency hierarchy async tgz root library x lodash tgz vulnerable library found in head commit a href found in base branch master vulnerability details prototype pollution attack when using zipobjectdeep in lodash before publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash direct dependency fix resolution async rescue worker helmet automatic remediation is available for this issue cve vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file package json path to vulnerable library node modules lodash package json dependency hierarchy async tgz root library x lodash tgz vulnerable library found in head commit a href found in base branch master vulnerability details 
lodash versions prior to are vulnerable to command injection via the template function publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required high user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version release date fix resolution lodash direct dependency fix resolution async rescue worker helmet automatic remediation is available for this issue cve vulnerable library lodash tgz lodash modular utilities library home page a href path to dependency file package json path to vulnerable library node modules lodash package json dependency hierarchy async tgz root library x lodash tgz vulnerable library found in head commit a href found in base branch master vulnerability details lodash versions prior to are vulnerable to regular expression denial of service redos via the tonumber trim and trimend functions mend note after conducting further research mend has determined that cve only affects environments with versions to of lodash publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution lodash direct dependency fix resolution async rescue worker helmet automatic remediation is available for this issue rescue worker helmet automatic remediation is available for this issue ,0
573244,17023615629.0,IssuesEvent,2021-07-03 02:56:20,tomhughes/trac-tickets,https://api.github.com/repos/tomhughes/trac-tickets,closed,[general] site relations in mapnik,Component: mapnik Priority: major Resolution: wontfix Type: enhancement,"**[Submitted to the original trac issue database at 4.41pm, Tuesday, 13th July 2010]**
I would really love to have site-relations integrated in the current mapnik stylesheet.
Here are some examples:
http://www.openstreetmap.org/browse/relation/1030019
http://www.openstreetmap.org/browse/relation/1029540
http://www.openstreetmap.org/browse/relation/1026680
",1.0,"[general] site relations in mapnik - **[Submitted to the original trac issue database at 4.41pm, Tuesday, 13th July 2010]**
I would really love to have site-relations integrated in the current mapnik stylesheet.
Here are some examples:
http://www.openstreetmap.org/browse/relation/1030019
http://www.openstreetmap.org/browse/relation/1029540
http://www.openstreetmap.org/browse/relation/1026680
",0, site relations in mapnik i would really love to have site relations integrated in the current mapnik stylesheet here are some examples ,0
307,6418809841.0,IssuesEvent,2017-08-08 19:47:48,dotnet/roslyn,https://api.github.com/repos/dotnet/roslyn,closed,[Loc] Rename in editor causing exception,Area-IDE Bug Tenet-Reliability,"**Version Used**:
2.3.0.6183003
Windows 8.1 Chinese (Traditional)
**Steps to Reproduce**:
1. Right click in editor window
2. Select rename
**Expected Behavior**:
Should be able to rename methods, declarations, etc.
**Actual Behavior**:

Originally thought to be Live Unit Testing related but is reproducible without LUT running. [testimpact/2003](https://github.com/dotnet/testimpact/issues/2003)",True,"[Loc] Rename in editor causing exception - **Version Used**:
2.3.0.6183003
Windows 8.1 Chinese (Traditional)
**Steps to Reproduce**:
1. Right click in editor window
2. Select rename
**Expected Behavior**:
Should be able to rename methods, declarations, etc.
**Actual Behavior**:

Originally thought to be Live Unit Testing related but is reproducible without LUT running. [testimpact/2003](https://github.com/dotnet/testimpact/issues/2003)",1, rename in editor causing exception version used windows chinese traditional steps to reproduce right click in editor window select rename expected behavior should be able to rename methods declarations etc actual behavior originally thought to be live unit testing related but is reproducible without lut running ,1
294245,22143119176.0,IssuesEvent,2022-06-03 09:02:46,NetAppDocs/xcp,https://api.github.com/repos/NetAppDocs/xcp,closed,"Page displays ""Error! Reference source not found."" rather than appropriate file path",documentation good first issue,"Page: [](https://docs.netapp.com/us-en/xcp/xcp-logging-for-nfs-and-smb.html)
The page for XCP logging displays ""Error! Reference source not found."" in both the ""Config JSON file location"" table and step 4 of the ""Configure the JSON configuration file"" section. I'm not sure what the appropriate values should be, but it appears to be referencing a location that may not exist.",1.0,"Page displays ""Error! Reference source not found."" rather than appropriate file path - Page: [](https://docs.netapp.com/us-en/xcp/xcp-logging-for-nfs-and-smb.html)
The page for XCP logging displays ""Error! Reference source not found."" in both the ""Config JSON file location"" table and step 4 of the ""Configure the JSON configuration file"" section. I'm not sure what the appropriate values should be, but it appears to be referencing a location that may not exist.",0,page displays error reference source not found rather than appropriate file path page the page for xcp logging displays error reference source not found in both the config json file location table and step of the configure the json configuration file section i m not sure what the appropriate values should be but it appears to be referencing a location that may not exist ,0
2439,25345647783.0,IssuesEvent,2022-11-19 06:38:42,ppy/osu,https://api.github.com/repos/ppy/osu,closed,Game crashes on first-time setup when osu!stable folder is not accessible due to permissions,type:reliability platform:windows,"### Type
Crash to desktop
### Bug description
I was just installing osu!lazer on my new windows install, and on the first-time setup it attempted to get numbers for my amount of beatmaps and such on stable, and crashed when doing so, because my current user account does not have access to the old osu! songs folder. I could fix this by giving myself permissions, but I think it's important the game handles this as well.
It seems to be crashing when accessing collections.db - `System.UnauthorizedAccessException: Access to the path 'J:\osu!\collection.db' is denied.`, but I suspect it'd also have issues when trying to do scores and songs.
When directly importing from the settings menu, it throws errors in the notifications panel, but does not crash the game completely.
### Screenshots or videos

https://user-images.githubusercontent.com/33783503/201481995-524dc527-fe54-4aa4-8a15-b7854d1cbf18.mp4
### Version
2022.1101.0-lazer
### Logs
[input.log](https://github.com/ppy/osu/files/9995297/input.log)
[performance.log](https://github.com/ppy/osu/files/9995298/performance.log)
[runtime.log](https://github.com/ppy/osu/files/9995299/runtime.log)
[updater.log](https://github.com/ppy/osu/files/9995300/updater.log)
[database.log](https://github.com/ppy/osu/files/9995301/database.log)
",True,"Game crashes on first-time setup when osu!stable folder is not accessible due to permissions - ### Type
Crash to desktop
### Bug description
I was just installing osu!lazer on my new windows install, and on the first-time setup it attempted to get numbers for my amount of beatmaps and such on stable, and crashed when doing so, because my current user account does not have access to the old osu! songs folder. I could fix this by giving myself permissions, but I think it's important the game handles this as well.
It seems to be crashing when accessing collections.db - `System.UnauthorizedAccessException: Access to the path 'J:\osu!\collection.db' is denied.`, but I suspect it'd also have issues when trying to do scores and songs.
When directly importing from the settings menu, it throws errors in the notifications panel, but does not crash the game completely.
### Screenshots or videos

https://user-images.githubusercontent.com/33783503/201481995-524dc527-fe54-4aa4-8a15-b7854d1cbf18.mp4
### Version
2022.1101.0-lazer
### Logs
[input.log](https://github.com/ppy/osu/files/9995297/input.log)
[performance.log](https://github.com/ppy/osu/files/9995298/performance.log)
[runtime.log](https://github.com/ppy/osu/files/9995299/runtime.log)
[updater.log](https://github.com/ppy/osu/files/9995300/updater.log)
[database.log](https://github.com/ppy/osu/files/9995301/database.log)
",1,game crashes on first time setup when osu stable folder is not accessible due to permissions type crash to desktop bug description i was just installing osu lazer on my new windows install and on the first time setup it attempted to get numbers for my amount of beatmaps and such on stable and crashed when doing so because my current user account does not have access to the old osu songs folder i could fix this by giving myself permissions but i think it s important the game handles this as well it seems to be crashing when accessing collections db system unauthorizedaccessexception access to the path j osu collection db is denied but i suspect it d also have issues when trying to do scores and songs when directly importing from the settings menu it throws errors in the notifications panel but does not crash the game completely screenshots or videos version lazer logs ,1
76659,3490500224.0,IssuesEvent,2016-01-04 10:24:25,Mobicents/RestComm,https://api.github.com/repos/Mobicents/RestComm,closed,Fix Call FSM when stopping a call,1. Bug Core engine High-Priority XMS-1.0.0,"Load tests against RestComm + XMS show a timeout issue when SIPp is waiting for a BYE from RestComm.
Looking into the logs, seems a concurrency issue with jain-sip UDP implementation.",1.0,"Fix Call FSM when stopping a call - Load tests against RestComm + XMS show a timeout issue when SIPp is waiting for a BYE from RestComm.
Looking into the logs, seems a concurrency issue with jain-sip UDP implementation.",0,fix call fsm when stopping a call load tests against restcomm xms show a timeout issue when sipp is waiting for a bye from restcomm looking into the logs seems a concurrency issue with jain sip udp implementation ,0
1112,13163879751.0,IssuesEvent,2020-08-11 01:53:36,Azure/azure-sdk-for-java,https://api.github.com/repos/Azure/azure-sdk-for-java,closed,Add Connection Timeout Handling to OkHttp,Azure.Core Client HttpClient tenet-reliability,"Currently request timeouts leverage Reactor's timeout mechanism, this works great in most scenarios other than when a large payload is being sent. The reason this doesn't work well is that the timeout period begins on subscription and ends on an element being emitted, response received from the service in this case, or when the timeout period elapses. Large uploads may take a very long time to finish sending and getting a response from the service, this leads to either errant timeouts on uploads due to the timeout being slow or scenarios where responses are being flaky and the timeout period is very long.
The timeout handling pattern should be updated to push this logic into the HttpClient being used where it is able to monitor the flow of data to either trigger a request timeout, upload taking too long to complete, or a response timeout, server is busy, went down, or dropped connection. For OkHttp an `Interceptor` should be added which hooks up the request/response timeouts based on either a default value or one passed through Reactor's context.",True,"Add Connection Timeout Handling to OkHttp - Currently request timeouts leverage Reactor's timeout mechanism, this works great in most scenarios other than when a large payload is being sent. The reason this doesn't work well is that the timeout period begins on subscription and ends on an element being emitted, response received from the service in this case, or when the timeout period elapses. Large uploads may take a very long time to finish sending and getting a response from the service, this leads to either errant timeouts on uploads due to the timeout being slow or scenarios where responses are being flaky and the timeout period is very long.
The timeout handling pattern should be updated to push this logic into the HttpClient being used where it is able to monitor the flow of data to either trigger a request timeout, upload taking too long to complete, or a response timeout, server is busy, went down, or dropped connection. For OkHttp an `Interceptor` should be added which hooks up the request/response timeouts based on either a default value or one passed through Reactor's context.",1,add connection timeout handling to okhttp currently request timeouts leverage reactor s timeout mechanism this works great in most scenarios other than when a large payload is being sent the reason this doesn t work well is that the timeout period begins on subscription and ends on an element being emitted response received from the service in this case or when the timeout period elapses large uploads may take a very long time to finish sending and getting a response from the service this leads to either errant timeouts on uploads due to the timeout being slow or scenarios where responses are being flaky and the timeout period is very long the timeout handling pattern should be updated to push this logic into the httpclient being used where it is able to monitor the flow of data to either trigger a request timeout upload taking too long to complete or a response timeout server is busy went down or dropped connection for okhttp an interceptor should be added which hooks up the request response timeouts based on either a default value or one passed through reactor s context ,1
1091,13040959171.0,IssuesEvent,2020-07-28 19:28:37,dotnet/roslyn,https://api.github.com/repos/dotnet/roslyn,closed,Editing record crashed VS to desktop,Area-Compilers Bug Tenet-Reliability Urgency-Soon,"**Version Used**:
16.7.0 Preview 4.0
**Steps to Reproduce**:
Insert the following code into Program.cs of a .NET 5.0 ConsoleApp1
```C#
record A(x)
```
**Expected Behavior**:
No crashing
**Actual Behavior**:
VS crashes to desktop
**Stack Trace**
```
System.InvalidCastException: Unable to cast object of type 'Microsoft.CodeAnalysis.CSharp.Syntax.CompilationUnitSyntax' to type 'Microsoft.CodeAnalysis.CSharp.Syntax.ParameterSyntax'.
at Microsoft.CodeAnalysis.CSharp.Symbols.SynthesizedRecordPropertySymbol.CreateAccessorSymbol(Boolean isGet, CSharpSyntaxNode syntax, PropertySymbol explicitlyImplementedPropertyOpt, String aliasQualifierOpt, Boolean isAutoPropertyAccessor, Boolean isExplicitInterfaceImplementation, DiagnosticBag diagnostics) in D:\Projects\roslyn\src\Compilers\CSharp\Portable\Symbols\Synthesized\Records\SynthesizedRecordPropertySymbol.cs:line 80
at Microsoft.CodeAnalysis.CSharp.Symbols.SourcePropertySymbolBase..ctor(SourceMemberContainerTypeSymbol containingType, Binder binder, CSharpSyntaxNode syntax, CSharpSyntaxNode getSyntax, CSharpSyntaxNode setSyntax, ArrowExpressionClauseSyntax arrowExpression, ExplicitInterfaceSpecifierSyntax interfaceSpecifier, DeclarationModifiers modifiers, Boolean isIndexer, Boolean hasInitializer, Boolean isAutoProperty, Boolean hasAccessorList, Boolean isInitOnly, RefKind refKind, String name, Location location, TypeWithAnnotations typeOpt, Boolean hasParameters, DiagnosticBag diagnostics) in D:\Projects\roslyn\src\Compilers\CSharp\Portable\Symbols\Source\SourcePropertySymbolBase.cs:line 296
at Microsoft.CodeAnalysis.CSharp.Symbols.SynthesizedRecordPropertySymbol..ctor(SourceMemberContainerTypeSymbol containingType, CSharpSyntaxNode syntax, ParameterSymbol backingParameter, Boolean isOverride, DiagnosticBag diagnostics) in D:\Projects\roslyn\src\Compilers\CSharp\Portable\Symbols\Synthesized\Records\SynthesizedRecordPropertySymbol.cs:line 24
at Microsoft.CodeAnalysis.CSharp.Symbols.SourceMemberContainerTypeSymbol.g__addProperties|162_4(ImmutableArray`1 recordParameters, <>c__DisplayClass162_0& ) in D:\Projects\roslyn\src\Compilers\CSharp\Portable\Symbols\Source\SourceMemberContainerSymbol.cs:line 3082
at Microsoft.CodeAnalysis.CSharp.Symbols.SourceMemberContainerTypeSymbol.AddSynthesizedRecordMembersIfNecessary(MembersAndInitializersBuilder builder, DiagnosticBag diagnostics) in D:\Projects\roslyn\src\Compilers\CSharp\Portable\Symbols\Source\SourceMemberContainerSymbol.cs:line 3007
at Microsoft.CodeAnalysis.CSharp.Symbols.SourceMemberContainerTypeSymbol.BuildMembersAndInitializers(DiagnosticBag diagnostics) in D:\Projects\roslyn\src\Compilers\CSharp\Portable\Symbols\Source\SourceMemberContainerSymbol.cs:line 2446
at Microsoft.CodeAnalysis.CSharp.Symbols.SourceMemberContainerTypeSymbol.GetMembersAndInitializers() in D:\Projects\roslyn\src\Compilers\CSharp\Portable\Symbols\Source\SourceMemberContainerSymbol.cs:line 1311
at Microsoft.CodeAnalysis.CSharp.Symbols.SourceMemberContainerTypeSymbol.MakeAllMembers(DiagnosticBag diagnostics) in D:\Projects\roslyn\src\Compilers\CSharp\Portable\Symbols\Source\SourceMemberContainerSymbol.cs:line 2235
at Microsoft.CodeAnalysis.CSharp.Symbols.SourceMemberContainerTypeSymbol.GetMembersByNameSlow() in D:\Projects\roslyn\src\Compilers\CSharp\Portable\Symbols\Source\SourceMemberContainerSymbol.cs:line 1341
at Microsoft.CodeAnalysis.CSharp.Symbols.SourceMemberContainerTypeSymbol.GetMembersByName() in D:\Projects\roslyn\src\Compilers\CSharp\Portable\Symbols\Source\SourceMemberContainerSymbol.cs:line 1333
at Microsoft.CodeAnalysis.CSharp.Symbols.SourceMemberContainerTypeSymbol.GetMembersUnordered() in D:\Projects\roslyn\src\Compilers\CSharp\Portable\Symbols\Source\SourceMemberContainerSymbol.cs:line 1175
at Microsoft.CodeAnalysis.CSharp.Symbols.SourceMemberContainerTypeSymbol.GetMembers() in D:\Projects\roslyn\src\Compilers\CSharp\Portable\Symbols\Source\SourceMemberContainerSymbol.cs:line 1191
at Microsoft.CodeAnalysis.CSharp.Symbols.PublicModel.NamespaceOrTypeSymbol.Microsoft.CodeAnalysis.INamespaceOrTypeSymbol.GetMembers() in D:\Projects\roslyn\src\Compilers\CSharp\Portable\Symbols\PublicModel\NamespaceOrTypeSymbol.cs:line 17
at Microsoft.CodeAnalysis.Editor.CSharp.NavigationBar.CSharpNavigationBarItemService.GetMembersInTypes(SyntaxTree tree, IEnumerable`1 types, CancellationToken cancellationToken) in D:\Projects\roslyn\src\EditorFeatures\CSharp\NavigationBar\CSharpNavigationBarItemService.cs:line 72
at Microsoft.CodeAnalysis.Editor.CSharp.NavigationBar.CSharpNavigationBarItemService.d__3.MoveNext() in D:\Projects\roslyn\src\EditorFeatures\CSharp\NavigationBar\CSharpNavigationBarItemService.cs:line 56
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable`1.ConfiguredTaskAwaiter.GetResult()
at Microsoft.CodeAnalysis.Editor.Implementation.NavigationBar.NavigationBarController.d__28.MoveNext() in D:\Projects\roslyn\src\EditorFeatures\Core\Implementation\NavigationBar\NavigationBarController_ModelComputation.cs:line 86
```",True,"Editing record crashed VS to desktop - **Version Used**:
16.7.0 Preview 4.0
**Steps to Reproduce**:
Insert the following code into Program.cs of a .NET 5.0 ConsoleApp1
```C#
record A(x)
```
**Expected Behavior**:
No crashing
**Actual Behavior**:
VS crashes to desktop
**Stack Trace**
```
System.InvalidCastException: Unable to cast object of type 'Microsoft.CodeAnalysis.CSharp.Syntax.CompilationUnitSyntax' to type 'Microsoft.CodeAnalysis.CSharp.Syntax.ParameterSyntax'.
at Microsoft.CodeAnalysis.CSharp.Symbols.SynthesizedRecordPropertySymbol.CreateAccessorSymbol(Boolean isGet, CSharpSyntaxNode syntax, PropertySymbol explicitlyImplementedPropertyOpt, String aliasQualifierOpt, Boolean isAutoPropertyAccessor, Boolean isExplicitInterfaceImplementation, DiagnosticBag diagnostics) in D:\Projects\roslyn\src\Compilers\CSharp\Portable\Symbols\Synthesized\Records\SynthesizedRecordPropertySymbol.cs:line 80
at Microsoft.CodeAnalysis.CSharp.Symbols.SourcePropertySymbolBase..ctor(SourceMemberContainerTypeSymbol containingType, Binder binder, CSharpSyntaxNode syntax, CSharpSyntaxNode getSyntax, CSharpSyntaxNode setSyntax, ArrowExpressionClauseSyntax arrowExpression, ExplicitInterfaceSpecifierSyntax interfaceSpecifier, DeclarationModifiers modifiers, Boolean isIndexer, Boolean hasInitializer, Boolean isAutoProperty, Boolean hasAccessorList, Boolean isInitOnly, RefKind refKind, String name, Location location, TypeWithAnnotations typeOpt, Boolean hasParameters, DiagnosticBag diagnostics) in D:\Projects\roslyn\src\Compilers\CSharp\Portable\Symbols\Source\SourcePropertySymbolBase.cs:line 296
at Microsoft.CodeAnalysis.CSharp.Symbols.SynthesizedRecordPropertySymbol..ctor(SourceMemberContainerTypeSymbol containingType, CSharpSyntaxNode syntax, ParameterSymbol backingParameter, Boolean isOverride, DiagnosticBag diagnostics) in D:\Projects\roslyn\src\Compilers\CSharp\Portable\Symbols\Synthesized\Records\SynthesizedRecordPropertySymbol.cs:line 24
at Microsoft.CodeAnalysis.CSharp.Symbols.SourceMemberContainerTypeSymbol.g__addProperties|162_4(ImmutableArray`1 recordParameters, <>c__DisplayClass162_0& ) in D:\Projects\roslyn\src\Compilers\CSharp\Portable\Symbols\Source\SourceMemberContainerSymbol.cs:line 3082
at Microsoft.CodeAnalysis.CSharp.Symbols.SourceMemberContainerTypeSymbol.AddSynthesizedRecordMembersIfNecessary(MembersAndInitializersBuilder builder, DiagnosticBag diagnostics) in D:\Projects\roslyn\src\Compilers\CSharp\Portable\Symbols\Source\SourceMemberContainerSymbol.cs:line 3007
at Microsoft.CodeAnalysis.CSharp.Symbols.SourceMemberContainerTypeSymbol.BuildMembersAndInitializers(DiagnosticBag diagnostics) in D:\Projects\roslyn\src\Compilers\CSharp\Portable\Symbols\Source\SourceMemberContainerSymbol.cs:line 2446
at Microsoft.CodeAnalysis.CSharp.Symbols.SourceMemberContainerTypeSymbol.GetMembersAndInitializers() in D:\Projects\roslyn\src\Compilers\CSharp\Portable\Symbols\Source\SourceMemberContainerSymbol.cs:line 1311
at Microsoft.CodeAnalysis.CSharp.Symbols.SourceMemberContainerTypeSymbol.MakeAllMembers(DiagnosticBag diagnostics) in D:\Projects\roslyn\src\Compilers\CSharp\Portable\Symbols\Source\SourceMemberContainerSymbol.cs:line 2235
at Microsoft.CodeAnalysis.CSharp.Symbols.SourceMemberContainerTypeSymbol.GetMembersByNameSlow() in D:\Projects\roslyn\src\Compilers\CSharp\Portable\Symbols\Source\SourceMemberContainerSymbol.cs:line 1341
at Microsoft.CodeAnalysis.CSharp.Symbols.SourceMemberContainerTypeSymbol.GetMembersByName() in D:\Projects\roslyn\src\Compilers\CSharp\Portable\Symbols\Source\SourceMemberContainerSymbol.cs:line 1333
at Microsoft.CodeAnalysis.CSharp.Symbols.SourceMemberContainerTypeSymbol.GetMembersUnordered() in D:\Projects\roslyn\src\Compilers\CSharp\Portable\Symbols\Source\SourceMemberContainerSymbol.cs:line 1175
at Microsoft.CodeAnalysis.CSharp.Symbols.SourceMemberContainerTypeSymbol.GetMembers() in D:\Projects\roslyn\src\Compilers\CSharp\Portable\Symbols\Source\SourceMemberContainerSymbol.cs:line 1191
at Microsoft.CodeAnalysis.CSharp.Symbols.PublicModel.NamespaceOrTypeSymbol.Microsoft.CodeAnalysis.INamespaceOrTypeSymbol.GetMembers() in D:\Projects\roslyn\src\Compilers\CSharp\Portable\Symbols\PublicModel\NamespaceOrTypeSymbol.cs:line 17
at Microsoft.CodeAnalysis.Editor.CSharp.NavigationBar.CSharpNavigationBarItemService.GetMembersInTypes(SyntaxTree tree, IEnumerable`1 types, CancellationToken cancellationToken) in D:\Projects\roslyn\src\EditorFeatures\CSharp\NavigationBar\CSharpNavigationBarItemService.cs:line 72
at Microsoft.CodeAnalysis.Editor.CSharp.NavigationBar.CSharpNavigationBarItemService.d__3.MoveNext() in D:\Projects\roslyn\src\EditorFeatures\CSharp\NavigationBar\CSharpNavigationBarItemService.cs:line 56
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at System.Runtime.CompilerServices.ConfiguredTaskAwaitable`1.ConfiguredTaskAwaiter.GetResult()
at Microsoft.CodeAnalysis.Editor.Implementation.NavigationBar.NavigationBarController.d__28.MoveNext() in D:\Projects\roslyn\src\EditorFeatures\Core\Implementation\NavigationBar\NavigationBarController_ModelComputation.cs:line 86
```",1,editing record crashed vs to desktop version used preview steps to reproduce insert the following code into program cs of a net c record a x expected behavior no crashing actual behavior vs crashes to desktop stack trace system invalidcastexception unable to cast object of type microsoft codeanalysis csharp syntax compilationunitsyntax to type microsoft codeanalysis csharp syntax parametersyntax at microsoft codeanalysis csharp symbols synthesizedrecordpropertysymbol createaccessorsymbol boolean isget csharpsyntaxnode syntax propertysymbol explicitlyimplementedpropertyopt string aliasqualifieropt boolean isautopropertyaccessor boolean isexplicitinterfaceimplementation diagnosticbag diagnostics in d projects roslyn src compilers csharp portable symbols synthesized records synthesizedrecordpropertysymbol cs line at microsoft codeanalysis csharp symbols sourcepropertysymbolbase ctor sourcemembercontainertypesymbol containingtype binder binder csharpsyntaxnode syntax csharpsyntaxnode getsyntax csharpsyntaxnode setsyntax arrowexpressionclausesyntax arrowexpression explicitinterfacespecifiersyntax interfacespecifier declarationmodifiers modifiers boolean isindexer boolean hasinitializer boolean isautoproperty boolean hasaccessorlist boolean isinitonly refkind refkind string name location location typewithannotations typeopt boolean hasparameters diagnosticbag diagnostics in d projects roslyn src compilers csharp portable symbols source sourcepropertysymbolbase cs line at microsoft codeanalysis csharp symbols synthesizedrecordpropertysymbol ctor sourcemembercontainertypesymbol containingtype csharpsyntaxnode syntax parametersymbol backingparameter boolean isoverride diagnosticbag diagnostics in d projects roslyn src compilers csharp portable symbols synthesized records synthesizedrecordpropertysymbol cs line at microsoft codeanalysis csharp symbols sourcemembercontainertypesymbol g addproperties immutablearray recordparameters c in d projects roslyn src compilers 
csharp portable symbols source sourcemembercontainersymbol cs line at microsoft codeanalysis csharp symbols sourcemembercontainertypesymbol addsynthesizedrecordmembersifnecessary membersandinitializersbuilder builder diagnosticbag diagnostics in d projects roslyn src compilers csharp portable symbols source sourcemembercontainersymbol cs line at microsoft codeanalysis csharp symbols sourcemembercontainertypesymbol buildmembersandinitializers diagnosticbag diagnostics in d projects roslyn src compilers csharp portable symbols source sourcemembercontainersymbol cs line at microsoft codeanalysis csharp symbols sourcemembercontainertypesymbol getmembersandinitializers in d projects roslyn src compilers csharp portable symbols source sourcemembercontainersymbol cs line at microsoft codeanalysis csharp symbols sourcemembercontainertypesymbol makeallmembers diagnosticbag diagnostics in d projects roslyn src compilers csharp portable symbols source sourcemembercontainersymbol cs line at microsoft codeanalysis csharp symbols sourcemembercontainertypesymbol getmembersbynameslow in d projects roslyn src compilers csharp portable symbols source sourcemembercontainersymbol cs line at microsoft codeanalysis csharp symbols sourcemembercontainertypesymbol getmembersbyname in d projects roslyn src compilers csharp portable symbols source sourcemembercontainersymbol cs line at microsoft codeanalysis csharp symbols sourcemembercontainertypesymbol getmembersunordered in d projects roslyn src compilers csharp portable symbols source sourcemembercontainersymbol cs line at microsoft codeanalysis csharp symbols sourcemembercontainertypesymbol getmembers in d projects roslyn src compilers csharp portable symbols source sourcemembercontainersymbol cs line at microsoft codeanalysis csharp symbols publicmodel namespaceortypesymbol microsoft codeanalysis inamespaceortypesymbol getmembers in d projects roslyn src compilers csharp portable symbols publicmodel namespaceortypesymbol cs line at 
microsoft codeanalysis editor csharp navigationbar csharpnavigationbaritemservice getmembersintypes syntaxtree tree ienumerable types cancellationtoken cancellationtoken in d projects roslyn src editorfeatures csharp navigationbar csharpnavigationbaritemservice cs line at microsoft codeanalysis editor csharp navigationbar csharpnavigationbaritemservice d movenext in d projects roslyn src editorfeatures csharp navigationbar csharpnavigationbaritemservice cs line at system runtime compilerservices taskawaiter throwfornonsuccess task task at system runtime compilerservices taskawaiter handlenonsuccessanddebuggernotification task task at system runtime compilerservices configuredtaskawaitable configuredtaskawaiter getresult at microsoft codeanalysis editor implementation navigationbar navigationbarcontroller d movenext in d projects roslyn src editorfeatures core implementation navigationbar navigationbarcontroller modelcomputation cs line ,1
1161,13492639800.0,IssuesEvent,2020-09-11 18:21:41,dotnet/roslyn,https://api.github.com/repos/dotnet/roslyn,closed,Find All References infinitely searches when Unity project Assembly-CSharp has a circular reference through assembly AOTGenerated,4 - In Review Area-Compilers Bug Tenet-Reliability,"This issue tracks the bug report at https://dev.azure.com/devdiv/DevDiv/_workitems/edit/1192910 (Microsoft-internal)
Apparently the compiler is going through a very slow path (which may include allocations) in `ReuseAssemblySymbols` when the project contains an indirect circular reference. 5GB allocations within code that doesn't accept a cancellation token (entry point is `CSharpCompilation.CommonAssembly`).
",True,"Find All References infinitely searches when Unity project Assembly-CSharp has a circular reference through assembly AOTGenerated - This issue tracks the bug report at https://dev.azure.com/devdiv/DevDiv/_workitems/edit/1192910 (Microsoft-internal)
Apparently the compiler is going through a very slow path (which may include allocations) in `ReuseAssemblySymbols` when the project contains an indirect circular reference. 5GB allocations within code that doesn't accept a cancellation token (entry point is `CSharpCompilation.CommonAssembly`).
",1,find all references infinitely searches when unity project assembly csharp has a circular reference through assembly aotgenerated this issue tracks the bug report at microsoft internal apparently the compiler is going through a very slow path which may include allocations in reuseassemblysymbols when the project contains an indirect circular reference allocations within code that doesn t accept a cancellation token entry point is csharpcompilation commonassembly ,1
491928,14174093286.0,IssuesEvent,2020-11-12 19:21:35,wllfaria/darkmoon,https://api.github.com/repos/wllfaria/darkmoon,opened,web: cart to add products,Effort: 13 Priority: now Type: feature,"## Describe the feature
The cart should allow the following features:
- Add a product to cart
- Remove a product from the cart
- Edit the amount of any product on the cart
- Change the size selected to any size available
- Checkout (buy everything on the cart)",1.0,"web: cart to add products - ## Describe the feature
The cart should allow the following features:
- Add a product to cart
- Remove a product from the cart
- Edit the amount of any product on the cart
- Change the size selected to any size available
- Checkout (buy everything on the cart)",0,web cart to add products describe the feature the cart should allow the following features add a product to cart remove a product from the cart edit the amount of any product on the cart change the size selected to any size available checkout buy everything on the cart ,0
2766,27582184768.0,IssuesEvent,2023-03-08 16:56:16,NVIDIA/spark-rapids,https://api.github.com/repos/NVIDIA/spark-rapids,closed,[BUG] HostToGpuCoalesceIterator leaks all host batches,bug ? - Needs Triage P0 reliability,"While testing the coalesce iterators for retry semantics I found that the HostToGpuCoalesceIterator (used when we read from host shuffle everywhere) is leaking _**every host batch**_. Thanks to the MemoryCleaner leak detection logic that caught it.
This bug can cause us to host OOM (off heap) because it's memory not tracked by the JVM. Once the JVM GCs, these buffers are cleared, and that's why this has gone unnoticed. If we have very large executor heaps we are likely to see off heap memory accumulation, because the MemoryCleaner wouldn't kick in as often.
I am going to handle this as part of a PR I have open for the retry semantics for coalesce.",True,"[BUG] HostToGpuCoalesceIterator leaks all host batches - While testing the coalesce iterators for retry semantics I found that the HostToGpuCoalesceIterator (used when we read from host shuffle everywhere) is leaking _**every host batch**_. Thanks to the MemoryCleaner leak detection logic that caught it.
This bug can cause us to host OOM (off heap) because it's memory not tracked by the JVM. Once the JVM GCs, these buffers are cleared, and that's why this has gone unnoticed. If we have very large executor heaps we are likely to see off heap memory accumulation, because the MemoryCleaner wouldn't kick in as often.
I am going to handle this as part of a PR I have open for the retry semantics for coalesce.",1, hosttogpucoalesceiterator leaks all host batches while testing the coalesce iterators for retry semantics i found that the hosttogpucoalesceiterator used when we read from host shuffle everywhere is leaking every host batch thanks to the memorycleaner leak detection logic that caught it this bug can cause us to host oom off heap because it s memory not tracked by the jvm once the jvm gcs these buffers are cleared and that s why this has gone unnoticed if we have very large executor heaps we are likely to see off heap memory accumulation because the memorycleaner wouldn t kick in as often i am going to handle this as part of a pr i have open for the retry semantics for coalesce ,1
549,8553570527.0,IssuesEvent,2018-11-08 01:30:37,dotnet/roslyn,https://api.github.com/repos/dotnet/roslyn,closed,Fix failure to dispose 'scope' after failure to dispose 'connection',4 - In Review Area-IDE Bug Tenet-Reliability,"In the following code, an exception during the call to `connection.Dispose` will result in `scope` not getting disposed. This eventually leads to an exception in the finalizer of `PinnedRemotableDataScope`.
https://github.com/dotnet/roslyn/blob/1f082e40860963edcd81d1567069aae924d9369f/src/Workspaces/Core/Portable/Remote/RemoteHostSessionHelpers.cs#L43-L46
:link: https://devdiv.visualstudio.com/DevDiv/_workitems/edit/671157",True,"Fix failure to dispose 'scope' after failure to dispose 'connection' - In the following code, an exception during the call to `connection.Dispose` will result in `scope` not getting disposed. This eventually leads to an exception in the finalizer of `PinnedRemotableDataScope`.
https://github.com/dotnet/roslyn/blob/1f082e40860963edcd81d1567069aae924d9369f/src/Workspaces/Core/Portable/Remote/RemoteHostSessionHelpers.cs#L43-L46
:link: https://devdiv.visualstudio.com/DevDiv/_workitems/edit/671157",1,fix failure to dispose scope after failure to dispose connection in the following code an exception during the call to connection dispose will result in scope not getting disposed this eventually leads to an exception in the finalizer of pinnedremotabledatascope link ,1
301226,9217828655.0,IssuesEvent,2019-03-11 11:48:53,webcompat/web-bugs,https://api.github.com/repos/webcompat/web-bugs,closed,outlook.live.com - site is not usable,browser-firefox-mobile browser-focus-geckoview priority-critical,"
**URL**: https://outlook.live.com/
**Browser / Version**: Firefox Mobile 65.0
**Operating System**: Android 8.0.0
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: won't connect
**Steps to Reproduce**:
Attempted to remove site tracking
Browser Configuration
None
_From [webcompat.com](https://webcompat.com/) with ❤️_",1.0,"outlook.live.com - site is not usable -
**URL**: https://outlook.live.com/
**Browser / Version**: Firefox Mobile 65.0
**Operating System**: Android 8.0.0
**Tested Another Browser**: Yes
**Problem type**: Site is not usable
**Description**: won't connect
**Steps to Reproduce**:
Attempted to remove site tracking
Browser Configuration
None
_From [webcompat.com](https://webcompat.com/) with ❤️_",0,outlook live com site is not usable url browser version firefox mobile operating system android tested another browser yes problem type site is not usable description won t connect steps to reproduce attempted to remove site tracking browser configuration none from with ❤️ ,0
225143,17242340029.0,IssuesEvent,2021-07-21 01:38:26,zakuArbor/proxyAuth,https://api.github.com/repos/zakuArbor/proxyAuth,opened,Add Doxygen (Documentation Generator) Support,Documentation,"## Purpose
Replace existing function comments with Doxygen-style comments so that I can generate code documentation.
## Tasks/Goals
- [ ] Install Doxygen on a personal machine
- [ ] Replace a single header file with Doxygen-style comments
- [ ] Change all header files to comply with Doxygen-style comments
- [ ] Update Makefile to have an option to generate Doxygen
- [ ] Create Github Action to generate Documents whenever a header file is updated
## Summary
*To fill out once the issue is to be closed. Give a short summary of the changes you made to implement or fix an issue*
",1.0,"Add Doxygen (Documentation Generator) Support - ## Purpose
Replace existing function comments with Doxygen-style comments so that I can generate code documentation.
## Tasks/Goals
- [ ] Install Doxygen on a personal machine
- [ ] Replace a single header file with Doxygen-style comments
- [ ] Change all header files to comply with Doxygen-style comments
- [ ] Update Makefile to have an option to generate Doxygen
- [ ] Create Github Action to generate Documents whenever a header file is updated
## Summary
*To fill out once the issue is to be closed. Give a short summary of the changes you made to implement or fix an issue*
",0,add doxygen documentation generator support purpose replace existing function comments with doxygen style comments so that i can generate code documentation tasks goals install doxygen on a personal machine replace a single header file with doxygen style comments change all header files to comply with doxygen style comments update makefile to have an option to generate doxygen create github action to generate documents whenever a header file is updated summary to fill out once the issue is to be closed give a short summary of the changes you made to implement or fix an issue ,0
327,6622089367.0,IssuesEvent,2017-09-21 21:49:07,dotnet/roslyn,https://api.github.com/repos/dotnet/roslyn,opened,CompletesTrackingOperation should not dispose of EmptyAsyncToken,Tenet-Performance Tenet-Reliability up-for-grabs,"**Version Used**: 15.3
:memo: Observed 18,000 scheduled tasks in a work queue in a ""low memory"" heap dump submitted for analysis.
When `TaskExtensions.CompletesTrackingOperation` is called with `EmptyAsyncToken.Instance`, it should not attempt to dispose of the token. The `Dispose()` method is empty, and scheduling the operation has substantial overhead.
",True,"CompletesTrackingOperation should not dispose of EmptyAsyncToken - **Version Used**: 15.3
:memo: Observed 18,000 scheduled tasks in a work queue in a ""low memory"" heap dump submitted for analysis.
When `TaskExtensions.CompletesTrackingOperation` is called with `EmptyAsyncToken.Instance`, it should not attempt to dispose of the token. The `Dispose()` method is empty, and scheduling the operation has substantial overhead.
",1,completestrackingoperation should not dispose of emptyasynctoken version used memo observed scheduled tasks in a work queue in a low memory heap dump submitted for analysis when taskextensions completestrackingoperation is called with emptyasynctoken instance it should not attempt to dispose of the token the dispose method is empty and scheduling the operation has substantial overhead ,1
270874,29144695519.0,IssuesEvent,2023-05-18 01:04:37,remigiusz-donczyk/final-project,https://api.github.com/repos/remigiusz-donczyk/final-project,opened,workflow-job-1207.ve6191ff089f8.jar: 1 vulnerabilities (highest severity is: 7.5),Mend: dependency security vulnerability," Vulnerable Library - workflow-job-1207.ve6191ff089f8.jar
The Jenkins Plugins Parent POM Project
Path to vulnerable library: /setup/jenkins/plugins/workflow-job/WEB-INF/lib/workflow-job.jar
Jenkins Pipeline: Job Plugin does not escape the display name of the build that caused an earlier build to be aborted, resulting in a stored cross-site scripting (XSS) vulnerability exploitable by attackers able to set build display names immediately.
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
",True,"workflow-job-1207.ve6191ff089f8.jar: 1 vulnerabilities (highest severity is: 7.5) - Vulnerable Library - workflow-job-1207.ve6191ff089f8.jar
The Jenkins Plugins Parent POM Project
Path to vulnerable library: /setup/jenkins/plugins/workflow-job/WEB-INF/lib/workflow-job.jar
Jenkins Pipeline: Job Plugin does not escape the display name of the build that caused an earlier build to be aborted, resulting in a stored cross-site scripting (XSS) vulnerability exploitable by attackers able to set build display names immediately.
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)
",0,workflow job jar vulnerabilities highest severity is vulnerable library workflow job jar the jenkins plugins parent pom project path to vulnerable library setup jenkins plugins workflow job web inf lib workflow job jar found in head commit a href vulnerabilities cve severity cvss dependency type fixed in workflow job version remediation available high workflow job jar direct org jenkins ci plugins workflow workflow job details cve vulnerable library workflow job jar the jenkins plugins parent pom project path to vulnerable library setup jenkins plugins workflow job web inf lib workflow job jar dependency hierarchy x workflow job jar vulnerable library found in head commit a href found in base branch dev vulnerability details jenkins pipeline job plugin does not escape the display name of the build that caused an earlier build to be aborted resulting in a stored cross site scripting xss vulnerability exploitable by attackers able to set build display names immediately publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required low user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution org jenkins ci plugins workflow workflow job step up your open source security game with mend ,0
13919,9106782974.0,IssuesEvent,2019-02-21 01:21:07,coreos/ignition,https://api.github.com/repos/coreos/ignition,closed,Ignition Logs Configuration to journalctl,area/security kind/friction,"# Bug #
Ignition logs all parsed and fetched configuration to journalctl. This is a security risk for organizations that forward all journalctl output to central log storage. At the very least, the documentation must warn against using ignition_file for secure configuration (keys/secrets).
## Operating System Version ##
CoreOS-stable-1967.5.0-hvm
## Ignition Version ##
0.28.0
## Environment ##
AWS/ap-south-1/c5.large ec2 instance
## Expected Behavior ##
Ignition should not log complete configuration to journalctl.
## Actual Behavior ##
Ignition logs complete configuration to journalctl.
The simple `journalctl --identifier=ignition --all` command mentioned in the documentation gives the following 2 traces:
https://github.com/coreos/ignition/blob/3c7dbd3888646ba49f318188b7bf41b532252144/internal/providers/util/config.go#L25
https://github.com/coreos/ignition/blob/aad24ad59393d49d1e7cdf6c4504a94615d9f0c3/internal/exec/engine.go#L264
They show up as the following:
```
Feb 13 09:08:55 localhost ignition[422]: parsing config: {
Feb 13 09:08:55 localhost ignition[422]: parsing config: {""ignition"":{""config"":{""replace"":{""source"":""s3://eco-example-config/config.json"",""verification"":{""hash"":""sha512-9ff7f8f0bc00d37f32e013c792c3411b18db3dc9333881003ecc0f307150301a188b8fc9b6bc1016e9498db2be57f679eaaab86080ce814a8ac336981dc2a76c""}}},""timeouts"":{},""version"":""2.1.0""},""networkd"":{},""passwd"":{},""storage"":{},""systemd"":{}}
Feb 13 09:09:49 localhost ignition[472]: parsing config: {
Feb 13 05:09:49 localhost ignition[417]: fetched referenced config: {""ignition"":{""config"":{""append"":[{""source"":""data:text/plain;charset=utf-8;base64,eyJpZ25pd>
Feb 13 05:09:49 localhost ignition[417]: fetched referenced config: {""ignition"":{""config"":{},""timeouts"":{},""version"":""2.1.0""},""networkd"":{},""passwd"":{},""stora>
Feb 13 05:09:49 localhost ignition[417]: disks: op(1): [started] waiting for udev to settle
```
While both are marked as Debug, the default configuration on latest CoreOS (CoreOS-stable-1967.5.0-hvm (ami-09642e32f99945765)) seems to be logging this.",True,"Ignition Logs Configuration to journalctl - # Bug #
Ignition logs all parsed and fetched configuration to journalctl. This is a security risk for organizations that forward all journalctl output to central log storage. At the very least, the documentation must warn against using ignition_file for secure configuration (keys/secrets).
## Operating System Version ##
CoreOS-stable-1967.5.0-hvm
## Ignition Version ##
0.28.0
## Environment ##
AWS/ap-south-1/c5.large ec2 instance
## Expected Behavior ##
Ignition should not log complete configuration to journalctl.
## Actual Behavior ##
Ignition logs complete configuration to journalctl.
The simple `journalctl --identifier=ignition --all` command mentioned in the documentation gives the following 2 traces:
https://github.com/coreos/ignition/blob/3c7dbd3888646ba49f318188b7bf41b532252144/internal/providers/util/config.go#L25
https://github.com/coreos/ignition/blob/aad24ad59393d49d1e7cdf6c4504a94615d9f0c3/internal/exec/engine.go#L264
They show up as the following:
```
Feb 13 09:08:55 localhost ignition[422]: parsing config: {
Feb 13 09:08:55 localhost ignition[422]: parsing config: {""ignition"":{""config"":{""replace"":{""source"":""s3://eco-example-config/config.json"",""verification"":{""hash"":""sha512-9ff7f8f0bc00d37f32e013c792c3411b18db3dc9333881003ecc0f307150301a188b8fc9b6bc1016e9498db2be57f679eaaab86080ce814a8ac336981dc2a76c""}}},""timeouts"":{},""version"":""2.1.0""},""networkd"":{},""passwd"":{},""storage"":{},""systemd"":{}}
Feb 13 09:09:49 localhost ignition[472]: parsing config: {
Feb 13 05:09:49 localhost ignition[417]: fetched referenced config: {""ignition"":{""config"":{""append"":[{""source"":""data:text/plain;charset=utf-8;base64,eyJpZ25pd>
Feb 13 05:09:49 localhost ignition[417]: fetched referenced config: {""ignition"":{""config"":{},""timeouts"":{},""version"":""2.1.0""},""networkd"":{},""passwd"":{},""stora>
Feb 13 05:09:49 localhost ignition[417]: disks: op(1): [started] waiting for udev to settle
```
While both are marked as Debug, the default configuration on latest CoreOS (CoreOS-stable-1967.5.0-hvm (ami-09642e32f99945765)) seems to be logging this.",0,ignition logs configuration to journalctl bug ignition logs all parsed and fetched configuration to journalctl this is a security risk for organizations which send all journalctl output to a central log storage at the very least using ignition file for secure configurations keys secrets must be warned against in the documentation operating system version coreos stable hvm ignition version environment aws ap south large instance expected behavior ignition should not log complete configuration to journalctl actual behavior ignition logs complete configuration to journalctl the simple journalctl identifier ignition all command mentioned in the documentation gives the following traces they show up as the following feb localhost ignition parsing config feb localhost ignition parsing config ignition config replace source eco example config config json verification hash timeouts version networkd passwd storage systemd feb localhost ignition parsing config feb localhost ignition fetched referenced config ignition config append source data text plain charset utf feb localhost ignition fetched referenced config ignition config timeouts version networkd passwd stora feb localhost ignition disks op waiting for udev to settle while both are marked as debug the default configuration on latest coreos coreos stable hvm ami seems to be logging this ,0
3050,31929820117.0,IssuesEvent,2023-09-19 06:31:29,camunda/zeebe,https://api.github.com/repos/camunda/zeebe,closed,Bloated `DEADLINE_JOBS` column family; no way to recover,kind/bug blocker/info area/reliability component/engine,"**Describe the bug**
In circumstances very similar to https://github.com/camunda-cloud/zeebe/issues/5925, a faulty client somehow caused tens of thousands of entries to be created inside the `DEADLINE_JOBS` column family. Under the current state of things, these entries cannot go away. Worse, they cause the JobTimeOutProcessor to overflow the available log space every time it runs, which results in only a fraction of the job time-outs actually being taken into account. The remaining jobs become “stuck,” i.e. they are never presented to workers again.
**To Reproduce**
The [faulty client story](https://github.com/camunda/zeebe/issues/5925#issuecomment-796880239) is my best guess (i.e. we increased the client buffer size, and the runaway growth in the DEADLINE_JOBS table stopped). I'm therefore not sure about reproduction steps; however, our RocksDB dumps are available on demand. Kindly send private email directly to dominique.quatravaux@epfl.ch.
**Expected behavior**
There should exist some kind of automated, or perhaps operator-initiated cleanup procedure, so that the `DEADLINE_JOBS` column family may be either shrunk over time, or better, resynced wholesale to the ground truth in `JOBS`. (I was thinking, perhaps naïvely, that we could take advantage of the snapshot rollover time to just empty the `DEADLINE_JOBS` column family, and reconstruct it from `JOBS`?)
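The resync idea can be sketched abstractly. The structures below are a hypothetical in-memory model only; Zeebe's real column families are RocksDB-backed and keyed by composite (deadline, jobKey) keys:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical model of the two column families: JOBS as ground truth,
// DEADLINE_JOBS as a deadline-ordered secondary index that can go stale.
final class JobStateSketch {
    final Map<Long, Long> jobs = new HashMap<>();             // jobKey -> deadline
    final TreeMap<Long, Long> deadlineJobs = new TreeMap<>(); // deadline -> jobKey

    // Wholesale resync as proposed above: drop the (possibly bloated)
    // secondary index and rebuild it from the ground truth in JOBS,
    // e.g. at snapshot rollover time.
    void resyncDeadlines() {
        deadlineJobs.clear();
        jobs.forEach((jobKey, deadline) -> deadlineJobs.put(deadline, jobKey));
    }
}
```

After the rebuild, the index can contain at most as many entries as `JOBS` itself, so orphaned deadline entries (here, thousands of them) vanish in one pass.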
**Log/Stacktrace**
A build of Zeebe from our [work branch](https://github.com/epfl-si/zeebe/tree/bug/overfull-DEADLINE_JOBS) produces the following, relevant excerpts:
Log excerpts
```
for broker in $(seq 0 2); do oc -n phd-assess-test logs zeebe-broker-$broker-0|grep 'deactivate\|forEach'; done
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":307},""message"":""forEachActivatableJobs: starting"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881373,""timestampNanos"":108547275}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":325},""message"":""forEachActivatableJobs: done, processed 13 jobs"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881373,""timestampNanos"":169529866}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":307},""message"":""forEachActivatableJobs: starting"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881373,""timestampNanos"":171764111}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":325},""message"":""forEachActivatableJobs: done, processed 0 jobs"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881373,""timestampNanos"":172219141}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":307},""message"":""forEachActivatableJobs: starting"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881376,""timestampNanos"":967749123}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":325},""message"":""forEachActivatableJobs: done, processed 0 jobs"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881376,""timestampNanos"":968356521}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":307},""message"":""forEachActivatableJobs: starting"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881376,""timestampNanos"":969018562}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":325},""message"":""forEachActivatableJobs: done, processed 0 jobs"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881376,""timestampNanos"":969438570}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":307},""message"":""forEachActivatableJobs: starting"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881396,""timestampNanos"":15069073}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":325},""message"":""forEachActivatableJobs: done, processed 0 jobs"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881396,""timestampNanos"":15790841}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachTimedOutEntry"",""file"":""DbJobState.java"",""line"":252},""message"":""forEachTimedOutEntry: starting"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881396,""timestampNanos"":844718785}
{""severity"":""ERROR"",""logging.googleapis.com/sourceLocation"":{""function"":""lambda$deactivateTimedOutJobs$0"",""file"":""JobTimeoutTrigger.java"",""line"":87},""message"":""deactivateTimedOutJobs: unable to flush: key=2251799816453675, record={\""deadline\"":1647881486986,\""worker\"":\""c533b992-ffe5-4fa0-8811-c718a5f41620\"",\""retries\"":0,\""retryBackoff\"":0,\""recurringTime\"":-1,\""type\"":\""phdAssessFillForm\"",\""customHeaders\"":[packed value (length=17960)],\""variables\"":\""gA==\"",\""errorMessage\"":\""\"",\""errorCode\"":\""\"",\""bpmnProcessId\"":\""phdAssessProcess\"",\""processDefinitionVersion\"":4,\""processDefinitionKey\"":2251799816451171,\""processInstanceKey\"":2251799816453632,\""elementId\"":\""Activity_PHD_fills_annual_report\"",\""elementInstanceKey\"":2251799816453671}"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""reportLocation"":{""functionName"":""lambda$deactivateTimedOutJobs$0"",""filePath"":""JobTimeoutTrigger.java"",""lineNumber"":87},""actor-name"":""Broker-0-StreamProcessor-1""},""@type"":""type.googleapis.com/google.devtools.clouderrorreporting.v1beta1.ReportedErrorEvent"",""timestampSeconds"":1647881396,""timestampNanos"":855322735}
{""severity"":""ERROR"",""logging.googleapis.com/sourceLocation"":{""function"":""lambda$forEachTimedOutEntry$1"",""file"":""DbJobState.java"",""line"":264},""message"":""forEachTimedOutEntry: bailing out after failed visitJob at count = 443"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""reportLocation"":{""functionName"":""lambda$forEachTimedOutEntry$1"",""filePath"":""DbJobState.java"",""lineNumber"":264},""actor-name"":""Broker-0-StreamProcessor-1""},""@type"":""type.googleapis.com/google.devtools.clouderrorreporting.v1beta1.ReportedErrorEvent"",""timestampSeconds"":1647881396,""timestampNanos"":855880235}
{""severity"":""ERROR"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachTimedOutEntry"",""file"":""DbJobState.java"",""line"":275},""message"":""forEachTimedOutEntry: done at count = 443"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""reportLocation"":{""functionName"":""forEachTimedOutEntry"",""filePath"":""DbJobState.java"",""lineNumber"":275},""actor-name"":""Broker-0-StreamProcessor-1""},""@type"":""type.googleapis.com/google.devtools.clouderrorreporting.v1beta1.ReportedErrorEvent"",""timestampSeconds"":1647881396,""timestampNanos"":856099294}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":307},""message"":""forEachActivatableJobs: starting"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881401,""timestampNanos"":22222435}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":325},""message"":""forEachActivatableJobs: done, processed 13 jobs"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881401,""timestampNanos"":69950298}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":307},""message"":""forEachActivatableJobs: starting"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881401,""timestampNanos"":84670641}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":325},""message"":""forEachActivatableJobs: done, processed 0 jobs"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881401,""timestampNanos"":168317382}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":307},""message"":""forEachActivatableJobs: starting"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881405,""timestampNanos"":687893084}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":325},""message"":""forEachActivatableJobs: done, processed 0 jobs"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881405,""timestampNanos"":688659261}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":307},""message"":""forEachActivatableJobs: starting"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881405,""timestampNanos"":689358301}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":325},""message"":""forEachActivatableJobs: done, processed 0 jobs"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881405,""timestampNanos"":689805874}
```
Reading RocksDB snapshots using our [home-grown Perl script](https://github.com/epfl-si/PhDAssess/blob/master/scripts/read-snapshot.pl) gives us the following stats:
- Number of entries in the `JOBS` column family: 39 (out of which, only 13 currently get presented to a worker)
- Number of entries in `DEADLINE_JOBS`: 47017
**Environment:**
- OS: Kubernetes
- Zeebe Version: 1.3.5, as well as today's 1.3.6-SNAPSHOT (built from [our branch](https://github.com/epfl-si/zeebe/tree/bug/overfull-DEADLINE_JOBS))
- Configuration: 3-way Kubernetes replication on OpenShift 3.11, integrated gateway + broker pods (one of the three being backed to NFS)
",True,"Bloated `DEADLINE_JOBS` column family; no way to recover - **Describe the bug**
In circumstances very similar to https://github.com/camunda-cloud/zeebe/issues/5925, a faulty client somehow caused tens of thousands of entries to be created inside the `DEADLINE_JOBS` column family. Under the current state of things, these entries cannot go away. Worse, they cause the JobTimeOutProcessor to overflow the available log space every time it runs, which results in only a fraction of the job time-outs actually being taken into account. The remaining jobs become “stuck,” i.e. they are never presented to workers again.
**To Reproduce**
The [faulty client story](https://github.com/camunda/zeebe/issues/5925#issuecomment-796880239) is my best guess (i.e. we increased the client buffer size, and the runaway growth in the DEADLINE_JOBS table stopped). I'm therefore not sure about reproduction steps; however, our RocksDB dumps are available on demand. Kindly send private email directly to dominique.quatravaux@epfl.ch.
**Expected behavior**
There should exist some kind of automated, or perhaps operator-initiated cleanup procedure, so that the `DEADLINE_JOBS` column family may be either shrunk over time, or better, resynced wholesale to the ground truth in `JOBS`. (I was thinking, perhaps naïvely, that we could take advantage of the snapshot rollover time to just empty the `DEADLINE_JOBS` column family, and reconstruct it from `JOBS`?)
**Log/Stacktrace**
A build of Zeebe from our [work branch](https://github.com/epfl-si/zeebe/tree/bug/overfull-DEADLINE_JOBS) produces the following, relevant excerpts:
Log excerpts
```
for broker in $(seq 0 2); do oc -n phd-assess-test logs zeebe-broker-$broker-0|grep 'deactivate\|forEach'; done
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":307},""message"":""forEachActivatableJobs: starting"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881373,""timestampNanos"":108547275}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":325},""message"":""forEachActivatableJobs: done, processed 13 jobs"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881373,""timestampNanos"":169529866}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":307},""message"":""forEachActivatableJobs: starting"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881373,""timestampNanos"":171764111}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":325},""message"":""forEachActivatableJobs: done, processed 0 jobs"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881373,""timestampNanos"":172219141}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":307},""message"":""forEachActivatableJobs: starting"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881376,""timestampNanos"":967749123}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":325},""message"":""forEachActivatableJobs: done, processed 0 jobs"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881376,""timestampNanos"":968356521}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":307},""message"":""forEachActivatableJobs: starting"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881376,""timestampNanos"":969018562}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":325},""message"":""forEachActivatableJobs: done, processed 0 jobs"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881376,""timestampNanos"":969438570}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":307},""message"":""forEachActivatableJobs: starting"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881396,""timestampNanos"":15069073}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":325},""message"":""forEachActivatableJobs: done, processed 0 jobs"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881396,""timestampNanos"":15790841}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachTimedOutEntry"",""file"":""DbJobState.java"",""line"":252},""message"":""forEachTimedOutEntry: starting"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881396,""timestampNanos"":844718785}
{""severity"":""ERROR"",""logging.googleapis.com/sourceLocation"":{""function"":""lambda$deactivateTimedOutJobs$0"",""file"":""JobTimeoutTrigger.java"",""line"":87},""message"":""deactivateTimedOutJobs: unable to flush: key=2251799816453675, record={\""deadline\"":1647881486986,\""worker\"":\""c533b992-ffe5-4fa0-8811-c718a5f41620\"",\""retries\"":0,\""retryBackoff\"":0,\""recurringTime\"":-1,\""type\"":\""phdAssessFillForm\"",\""customHeaders\"":[packed value (length=17960)],\""variables\"":\""gA==\"",\""errorMessage\"":\""\"",\""errorCode\"":\""\"",\""bpmnProcessId\"":\""phdAssessProcess\"",\""processDefinitionVersion\"":4,\""processDefinitionKey\"":2251799816451171,\""processInstanceKey\"":2251799816453632,\""elementId\"":\""Activity_PHD_fills_annual_report\"",\""elementInstanceKey\"":2251799816453671}"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""reportLocation"":{""functionName"":""lambda$deactivateTimedOutJobs$0"",""filePath"":""JobTimeoutTrigger.java"",""lineNumber"":87},""actor-name"":""Broker-0-StreamProcessor-1""},""@type"":""type.googleapis.com/google.devtools.clouderrorreporting.v1beta1.ReportedErrorEvent"",""timestampSeconds"":1647881396,""timestampNanos"":855322735}
{""severity"":""ERROR"",""logging.googleapis.com/sourceLocation"":{""function"":""lambda$forEachTimedOutEntry$1"",""file"":""DbJobState.java"",""line"":264},""message"":""forEachTimedOutEntry: bailing out after failed visitJob at count = 443"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""reportLocation"":{""functionName"":""lambda$forEachTimedOutEntry$1"",""filePath"":""DbJobState.java"",""lineNumber"":264},""actor-name"":""Broker-0-StreamProcessor-1""},""@type"":""type.googleapis.com/google.devtools.clouderrorreporting.v1beta1.ReportedErrorEvent"",""timestampSeconds"":1647881396,""timestampNanos"":855880235}
{""severity"":""ERROR"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachTimedOutEntry"",""file"":""DbJobState.java"",""line"":275},""message"":""forEachTimedOutEntry: done at count = 443"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""reportLocation"":{""functionName"":""forEachTimedOutEntry"",""filePath"":""DbJobState.java"",""lineNumber"":275},""actor-name"":""Broker-0-StreamProcessor-1""},""@type"":""type.googleapis.com/google.devtools.clouderrorreporting.v1beta1.ReportedErrorEvent"",""timestampSeconds"":1647881396,""timestampNanos"":856099294}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":307},""message"":""forEachActivatableJobs: starting"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881401,""timestampNanos"":22222435}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":325},""message"":""forEachActivatableJobs: done, processed 13 jobs"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881401,""timestampNanos"":69950298}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":307},""message"":""forEachActivatableJobs: starting"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881401,""timestampNanos"":84670641}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":325},""message"":""forEachActivatableJobs: done, processed 0 jobs"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881401,""timestampNanos"":168317382}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":307},""message"":""forEachActivatableJobs: starting"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881405,""timestampNanos"":687893084}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":325},""message"":""forEachActivatableJobs: done, processed 0 jobs"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881405,""timestampNanos"":688659261}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":307},""message"":""forEachActivatableJobs: starting"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881405,""timestampNanos"":689358301}
{""severity"":""INFO"",""logging.googleapis.com/sourceLocation"":{""function"":""forEachActivatableJobs"",""file"":""DbJobState.java"",""line"":325},""message"":""forEachActivatableJobs: done, processed 0 jobs"",""serviceContext"":{""service"":""zeebe"",""version"":""development""},""context"":{""threadId"":32,""partitionId"":""1"",""threadPriority"":5,""loggerName"":""io.camunda.zeebe.broker.process"",""threadName"":""Broker-0-zb-actors-0"",""actor-name"":""Broker-0-StreamProcessor-1""},""timestampSeconds"":1647881405,""timestampNanos"":689805874}
```
Reading RocksDB snapshots using our [home-grown Perl script](https://github.com/epfl-si/PhDAssess/blob/master/scripts/read-snapshot.pl) gives us the following stats:
- Number of entries in the `JOBS` column family: 39 (of which only 13 are currently presented to workers)
- Number of entries in `DEADLINE_JOBS`: 47017
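The mismatch above (47017 `DEADLINE_JOBS` entries against 39 `JOBS` entries) can be stated as an invariant: every `DEADLINE_JOBS` entry should point at a job that still exists in `JOBS`. A toy sketch of that check, modeling both column families as plain in-memory structures (the key layouts and the helper name are illustrative only, not Zeebe's actual RocksDB schema):

```python
# Toy model of the two column families; real Zeebe keys are binary-packed,
# so these Python structures are stand-ins for illustration.

def find_orphaned_deadlines(jobs, deadline_jobs):
    """Return deadline entries whose job key no longer exists in JOBS."""
    return {
        (deadline, job_key)
        for (deadline, job_key) in deadline_jobs
        if job_key not in jobs
    }

# Miniature version of the observed state: few live jobs, many stale deadlines.
jobs = {100: "phdAssessFillForm", 101: "phdAssessFillForm"}
deadline_jobs = {
    (1647881486986, 100),  # valid: job 100 exists
    (1647881486990, 101),  # valid: job 101 exists
    (1647000000000, 555),  # stale: job 555 completed long ago
    (1647000000001, 556),  # stale
}

orphans = find_orphaned_deadlines(jobs, deadline_jobs)
print(len(orphans))  # entries a cleanup pass could safely drop
```

A resync pass such as the one proposed above would amount to deleting every orphaned entry (or, wholesale, rebuilding `DEADLINE_JOBS` from `JOBS`).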
**Environment:**
- OS: Kubernetes
- Zeebe Version: 1.3.5, as well as today's 1.3.6-SNAPSHOT (built from [our branch](https://github.com/epfl-si/zeebe/tree/bug/overfull-DEADLINE_JOBS))
- Configuration: 3-way Kubernetes replication on OpenShift 3.11, integrated gateway + broker pods (one of the three being backed by NFS)
317,6558869926.0,IssuesEvent,2017-09-06 23:48:32,waggle-sensor/beehive-server,https://api.github.com/repos/waggle-sensor/beehive-server,opened,Prototype static version of beehive,reliability,"One _possible_ improvement we can do is to build a static version of beehive that is regenerated on a schedule. This would dramatically improve page-serving performance across the board. It would also have the side effect of completely eliminating direct database access for datasets from the outside world, and so could eliminate any security mistakes that show up.
I think this is still worth prototyping, even though we now have nginx performing caching and have moved off the development server. As an example, the build-index tool in the data-exporter generates a ""friendly"" summary of all the datasets to make sure things look reasonable.
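A minimal sketch of the regeneration idea: a scheduled job reads dataset metadata and writes a pre-rendered HTML summary, so web requests only ever hit static files and never the database. The dataset shape, file names, and helper names here are assumptions for illustration, not beehive's actual layout:

```python
import json
import tempfile
from pathlib import Path

def render_summary(datasets):
    """Render a static HTML summary page from dataset metadata."""
    rows = "\n".join(
        f"<tr><td>{d['name']}</td><td>{d['records']}</td></tr>"
        for d in datasets
    )
    return (
        "<html><body><h1>Datasets</h1>"
        f"<table><tr><th>Name</th><th>Records</th></tr>{rows}</table>"
        "</body></html>"
    )

def regenerate(src: Path, out: Path):
    """One scheduled run: read metadata, write the static page.

    A cron entry or systemd timer would invoke this periodically;
    nginx then serves `out` directly with no database access.
    """
    datasets = json.loads(src.read_text())
    out.write_text(render_summary(datasets))

# Example run against a hypothetical metadata file:
tmp = Path(tempfile.mkdtemp())
src = tmp / "datasets.json"
src.write_text(json.dumps([{"name": "node-001", "records": 42}]))
out = tmp / "index.html"
regenerate(src, out)
html = out.read_text()
```

Staleness between runs is the obvious trade-off, but for summary pages a regeneration interval of minutes is likely acceptable.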
928,11706506114.0,IssuesEvent,2020-03-07 22:38:43,sohaibaslam/learning_site,https://api.github.com/repos/sohaibaslam/learning_site,opened,"Broken Crawlers 08, Mar 2020",crawler broken/unreliable,"1. **24sevres eu(100%)/fr(100%)/uk(100%)/us(100%)**
1. **abcmart kr(100%)**
1. **abercrombie cn(100%)/hk(100%)/jp(100%)**
1. **adidas pl(100%)**
1. **americaneagle ca(100%)**
1. **ami cn(100%)/dk(100%)/jp(100%)/kr(100%)/uk(100%)/us(100%)**
1. **antonioli es(100%)**
1. **arket uk(100%)**
1. **asos ae(100%)/au(100%)/ch(100%)/cn(100%)/hk(100%)/id(100%)/my(100%)/nl(100%)/ph(100%)/pl(100%)/ru(100%)/sa(100%)/sg(100%)/th(100%)/us(100%)/vn(100%)**
1. **babyshop ae(100%)/sa(100%)**
1. **babywalz at(100%)/ch(100%)/de(100%)**
1. **bananarepublic ca(100%)**
1. **benetton lv(100%)**
1. **bijoubrigitte de(100%)/nl(100%)**
1. **boconcept at(100%)/de(100%)**
1. **boozt uk(100%)**
1. **borbonese eu(100%)/it(100%)/uk(100%)**
1. **buckle us(100%)**
1. **charmingcharlie us(100%)**
1. **chloe kr(100%)**
1. **clarks eu(100%)**
1. **coach ca(100%)**
1. **conforama fr(100%)**
1. **converse au(100%)/kr(100%)/nl(42%)**
1. **cos (100%)/at(100%)/hu(34%)**
1. **creationl de(100%)**
1. **dfs uk(100%)**
1. **dickssportinggoods us(100%)**
1. **eastbay us(100%)**
1. **ernstings de(100%)**
1. **falabella cl(100%)/co(100%)**
1. **fanatics us(100%)**
1. **fendi cn(100%)**
1. **footaction us(100%)**
1. **footlocker (100%)/be(100%)/de(100%)/dk(100%)/es(100%)/fr(100%)/it(100%)/lu(100%)/nl(100%)/no(100%)/se(100%)/uk(100%)**
1. **gap ca(100%)**
1. **getthelabel au(100%)/dk(100%)**
1. **harrods (100%)**
1. **heine at(100%)**
1. **hermes at(100%)/ca(100%)/de(50%)/es(50%)/fr(67%)/nl(50%)/se(50%)/uk(67%)**
1. **hm ae(100%)/cz(34%)/eu(100%)/jp(37%)/kw(100%)/pl(100%)/sa(100%)**
1. **hollister cn(100%)/hk(100%)/jp(100%)/tw(100%)**
1. **hunter (100%)**
1. **ikea au(100%)/pt(100%)**
1. **intersport es(84%)/fr(100%)**
1. **intimissimi cn(100%)/hk(100%)/jp(100%)**
1. **jackwills (100%)**
1. **jeffreycampbell us(100%)**
1. **klingel de(100%)**
1. **lacoste cn(100%)**
1. **laredouteapi es(100%)**
1. **lefties sa(100%)**
1. **levi my(100%)**
1. **lifestylestores in(100%)**
1. **made ch(100%)/de(100%)/es(100%)/nl(100%)/uk(100%)**
1. **massimodutti ad(49%)/al(50%)/am(49%)/az(50%)/ba(50%)/by(51%)/co(49%)/cr(48%)/cy(50%)/do(47%)/ec(51%)/eg(100%)/ge(46%)/gt(49%)/hk(49%)/hn(49%)/id(47%)/il(51%)/in(47%)/kz(50%)/mc(57%)/mk(50%)/mo(47%)/my(100%)/pa(49%)/ph(100%)/rs(49%)/sa(100%)/sg(45%)/th(100%)/tn(51%)/tw(49%)/ua(52%)/vn(100%)**
1. **maxfashion bh(100%)**
1. **melijoe be(44%)/cn(100%)/fr(33%)/kr(89%)/uk(81%)**
1. **michaelkors ca(100%)/us(33%)**
1. **missguided pl(100%)**
1. **moncler ru(100%)**
1. **monki nl(100%)/pl(100%)**
1. **moosejaw us(100%)**
1. **mothercare sa(100%)**
1. **mq se(100%)**
1. **mrporter ie(100%)**
1. **mrprice uk(100%)**
1. **muji de(100%)/uk(67%)**
1. **offspring uk(100%)**
1. **oldnavy ca(100%)**
1. **parfois ad(100%)/al(100%)/am(100%)/ao(100%)/at(100%)/ba(100%)/be(100%)/bg(100%)/bh(100%)/br(100%)/by(100%)/ch(100%)/co(100%)/cz(100%)/de(100%)/dk(100%)/do(100%)/ee(100%)/eg(100%)/es(100%)/fi(100%)/fr(100%)/ge(100%)/gr(100%)/gt(100%)/hr(100%)/hu(100%)/ie(100%)/ir(100%)/it(100%)/jo(100%)/kw(100%)/lb(100%)/lt(100%)/lu(100%)/lv(100%)/ly(100%)/ma(100%)/mc(100%)/mk(100%)/mt(100%)/mx(100%)/mz(100%)/nl(100%)/om(100%)/pa(100%)/pe(100%)/ph(100%)/pl(100%)/pt(100%)/qa(100%)/ro(100%)/rs(100%)/sa(100%)/se(100%)/si(100%)/sk(100%)/tn(100%)/uk(100%)/us(100%)/ve(100%)/ye(100%)**
1. **patagonia ca(100%)**
1. **popup br(100%)**
1. **prettysecrets in(100%)**
1. **pullandbear kr(100%)/qa(100%)/tw(100%)**
1. **rakuten fr(100%)/us(100%)**
1. **ralphlauren cn(30%)/de(100%)**
1. **runnerspoint de(100%)**
1. **runwaysale za(100%)**
1. **sainsburys uk(100%)**
1. **saksfifthavenue mo(100%)/ru(68%)**
1. **selfridges es(100%)/fr(84%)/hk(74%)/kr(70%)/mo(35%)/sa(35%)/tw(30%)**
1. **shoedazzle us(100%)**
1. **simons ca(100%)**
1. **snipes de(100%)**
1. **solebox de(100%)/uk(100%)**
1. **soliver de(100%)**
1. **speedo us(100%)**
1. **splashfashions ae(100%)/bh(100%)/sa(100%)**
1. **stefaniamode au(100%)**
1. **stories be(100%)**
1. **stradivarius lb(100%)/sg(100%)**
1. **stylebop (100%)/au(100%)/ca(100%)/cn(100%)/de(100%)/es(100%)/fr(100%)/hk(100%)/jp(100%)/kr(100%)/mo(100%)/sg(100%)/us(100%)**
1. **superbalist za(100%)**
1. **thread uk(100%)/us(100%)**
1. **tods cn(100%)/gr(100%)/jp(37%)/nl(100%)/pt(100%)**
1. **tommybahama bh(100%)/de(100%)/ph(100%)/za(100%)**
1. **tommyhilfiger jp(100%)/us(100%)**
1. **topbrands ru(100%)**
1. **trendygolf uk(100%)**
1. **undefeated us(100%)**
1. **underarmour ca(100%)/pe(100%)**
1. **watchshop eu(100%)/ru(100%)/se(100%)**
1. **wayfair ca(100%)/de(100%)/uk(100%)**
1. **weekday eu(100%)**
1. **wenz de(100%)**
1. **westwingnow ch(100%)**
1. **womanwithin us(100%)**
1. **xxl dk(100%)**
1. **zalandolounge de(100%)**
1. **zalora id(100%)/my(85%)/tw(100%)**
1. **zara kw(100%)/mo(100%)/pe(100%)/uy(100%)**
1. **zilingo my(100%)**
1. farfetch kr(46%)/mo(98%)
1. snkrs eu(98%)
1. hibbett us(97%)
1. vip cn(95%)
1. nike hk(93%)/kr(79%)
1. lasula uk(92%)
1. dolcegabbana uk(87%)
1. koton tr(85%)
1. schwab de(84%)
1. burberry (74%)/au(78%)/be(82%)/bg(78%)/ch(73%)/dk(68%)/es(75%)/fi(74%)/ie(80%)/jp(73%)/my(77%)/pt(75%)/ro(69%)/sg(68%)/sk(73%)/tw(78%)/us(65%)
1. shoecarnival us(81%)
1. yoox at(74%)/be(42%)/fr(77%)/hk(40%)/ru(33%)/uk(52%)
1. zalando at(68%)/ch(61%)/de(61%)/dk(48%)/fi(31%)/it(44%)/pl(75%)
1. stevemadden us(73%)
1. zivame in(69%)
1. aloyoga us(62%)
1. bensherman eu(61%)
1. pinkboutique uk(61%)
1. timberland my(60%)
1. timberlandtrans sg(60%)
1. sfera es(59%)
1. venteprivee es(59%)/fr(54%)/it(59%)
1. limango de(58%)
1. misssixty cn(55%)
1. anniebing us(54%)
1. liujo es(40%)/it(52%)
1. anayi jp(50%)
1. brunellocucinelli cn(50%)
1. amazon uk(49%)
1. sportchek ca(47%)
1. strellson at(43%)/be(42%)/ch(46%)/de(45%)/fr(40%)/nl(44%)
1. theory jp(45%)
1. jcpenney (43%)
1. hugoboss cn(41%)
1. lululemon cn(41%)
1. marinarinaldi at(36%)/be(35%)/cz(38%)/de(38%)/dk(36%)/es(38%)/fr(37%)/hu(36%)/ie(38%)/it(41%)/nl(38%)/pl(37%)/pt(36%)/ro(38%)/se(37%)/uk(37%)/us(32%)
1. prada (38%)/at(31%)/ch(37%)/de(38%)/dk(38%)/es(36%)/fi(38%)/gr(31%)/ie(41%)/it(38%)/no(32%)/pt(38%)/se(39%)
1. acne dk(40%)
1. interightatjd cn(40%)
1. petitbateau uk(40%)
1. alaia ae(35%)/at(34%)/bh(37%)/ca(31%)/de(32%)/dk(38%)/fi(36%)/fr(31%)/hk(35%)/it(36%)/jp(30%)/nl(32%)/no(34%)/pl(34%)/pt(38%)/se(34%)/tr(37%)/uk(36%)/us(33%)/za(32%)
1. mango am(31%)/bm(30%)/by(31%)/dz(31%)/is(31%)/kr(38%)
1. gstar (31%)/at(35%)/au(34%)/bg(33%)/ch(36%)/cz(35%)/de(37%)/ee(36%)/hr(36%)/hu(33%)/ie(30%)/lt(34%)/lv(34%)/pl(35%)/ru(37%)/si(36%)/sk(35%)
1. ssfshop kr(37%)
1. terranovastyle at(34%)/de(34%)/es(36%)/fr(36%)/it(35%)/nl(37%)/uk(35%)
1. tigerofsweden at(33%)/be(36%)/ca(33%)/ch(33%)/de(34%)/dk(35%)/es(36%)/fi(33%)/fr(33%)/ie(33%)/it(33%)/nl(35%)/no(37%)/se(34%)/uk(33%)
1. vionicshoes uk(37%)
1. only ca(30%)/us(36%)
1. replayjeans at(33%)/au(35%)/be(35%)/ca(33%)/ch(35%)/de(35%)/dk(33%)/es(32%)/eu(35%)/fr(35%)/it(32%)/no(34%)/pt(34%)/se(32%)/uk(35%)/us(33%)
1. vans at(31%)/de(33%)/dk(34%)/es(33%)/it(33%)/nl(33%)/pl(33%)/se(35%)/uk(33%)
1. justcavalli ae(33%)/at(33%)/au(32%)/ca(34%)/de(33%)/dk(33%)/es(30%)/fr(33%)/hk(30%)/it(33%)/jp(34%)/nl(33%)/no(30%)/pl(33%)/pt(33%)/ru(33%)/se(33%)/tr(34%)/uk(32%)/us(33%)/za(33%)
1. maxandco es(34%)/it(34%)/uk(34%)
1. neimanmarcus cn(34%)
1. aboutyou hu(31%)/ro(31%)/sk(31%)
1. boardiesapparel au(31%)
1. bonpoint es(30%)/it(31%)
1. dvf br(31%)
1. ochirly cn(31%)
1. calvinklein cz(30%)/hu(30%)/pl(30%)
1. maxmara dk(30%)/pl(30%)/se(30%)
1. patriziapepe at(30%)/be(30%)/bg(30%)/ca(30%)/ie(30%)/lu(30%)/us(30%)
1. aloyoga us(62%)
1. bensherman eu(61%)
1. pinkboutique uk(61%)
1. timberland my(60%)
1. timberlandtrans sg(60%)
1. sfera es(59%)
1. venteprivee es(59%)/fr(54%)/it(59%)
1. limango de(58%)
1. misssixty cn(55%)
1. anniebing us(54%)
1. liujo es(40%)/it(52%)
1. anayi jp(50%)
1. brunellocucinelli cn(50%)
1. amazon uk(49%)
1. sportchek ca(47%)
1. strellson at(43%)/be(42%)/ch(46%)/de(45%)/fr(40%)/nl(44%)
1. theory jp(45%)
1. jcpenney (43%)
1. hugoboss cn(41%)
1. lululemon cn(41%)
1. marinarinaldi at(36%)/be(35%)/cz(38%)/de(38%)/dk(36%)/es(38%)/fr(37%)/hu(36%)/ie(38%)/it(41%)/nl(38%)/pl(37%)/pt(36%)/ro(38%)/se(37%)/uk(37%)/us(32%)
1. prada (38%)/at(31%)/ch(37%)/de(38%)/dk(38%)/es(36%)/fi(38%)/gr(31%)/ie(41%)/it(38%)/no(32%)/pt(38%)/se(39%)
1. acne dk(40%)
1. interightatjd cn(40%)
1. petitbateau uk(40%)
1. alaia ae(35%)/at(34%)/bh(37%)/ca(31%)/de(32%)/dk(38%)/fi(36%)/fr(31%)/hk(35%)/it(36%)/jp(30%)/nl(32%)/no(34%)/pl(34%)/pt(38%)/se(34%)/tr(37%)/uk(36%)/us(33%)/za(32%)
1. mango am(31%)/bm(30%)/by(31%)/dz(31%)/is(31%)/kr(38%)
1. gstar (31%)/at(35%)/au(34%)/bg(33%)/ch(36%)/cz(35%)/de(37%)/ee(36%)/hr(36%)/hu(33%)/ie(30%)/lt(34%)/lv(34%)/pl(35%)/ru(37%)/si(36%)/sk(35%)
1. ssfshop kr(37%)
1. terranovastyle at(34%)/de(34%)/es(36%)/fr(36%)/it(35%)/nl(37%)/uk(35%)
1. tigerofsweden at(33%)/be(36%)/ca(33%)/ch(33%)/de(34%)/dk(35%)/es(36%)/fi(33%)/fr(33%)/ie(33%)/it(33%)/nl(35%)/no(37%)/se(34%)/uk(33%)
1. vionicshoes uk(37%)
1. only ca(30%)/us(36%)
1. replayjeans at(33%)/au(35%)/be(35%)/ca(33%)/ch(35%)/de(35%)/dk(33%)/es(32%)/eu(35%)/fr(35%)/it(32%)/no(34%)/pt(34%)/se(32%)/uk(35%)/us(33%)
1. vans at(31%)/de(33%)/dk(34%)/es(33%)/it(33%)/nl(33%)/pl(33%)/se(35%)/uk(33%)
1. justcavalli ae(33%)/at(33%)/au(32%)/ca(34%)/de(33%)/dk(33%)/es(30%)/fr(33%)/hk(30%)/it(33%)/jp(34%)/nl(33%)/no(30%)/pl(33%)/pt(33%)/ru(33%)/se(33%)/tr(34%)/uk(32%)/us(33%)/za(33%)
1. maxandco es(34%)/it(34%)/uk(34%)
1. neimanmarcus cn(34%)
1. aboutyou hu(31%)/ro(31%)/sk(31%)
1. boardiesapparel au(31%)
1. bonpoint es(30%)/it(31%)
1. dvf br(31%)
1. ochirly cn(31%)
1. calvinklein cz(30%)/hu(30%)/pl(30%)
1. maxmara dk(30%)/pl(30%)/se(30%)
1. patriziapepe at(30%)/be(30%)/bg(30%)/ca(30%)/ie(30%)/lu(30%)/us(30%)
",1,broken crawlers mar eu fr uk us abcmart kr abercrombie cn hk jp adidas pl americaneagle ca ami cn dk jp kr uk us antonioli es arket uk asos ae au ch cn hk id my nl ph pl ru sa sg th us vn babyshop ae sa babywalz at ch de bananarepublic ca benetton lv bijoubrigitte de nl boconcept at de boozt uk borbonese eu it uk buckle us charmingcharlie us chloe kr clarks eu coach ca conforama fr converse au kr nl cos at hu creationl de dfs uk dickssportinggoods us eastbay us ernstings de falabella cl co fanatics us fendi cn footaction us footlocker be de dk es fr it lu nl no se uk gap ca getthelabel au dk harrods heine at hermes at ca de es fr nl se uk hm ae cz eu jp kw pl sa hollister cn hk jp tw hunter ikea au pt intersport es fr intimissimi cn hk jp jackwills jeffreycampbell us klingel de lacoste cn laredouteapi es lefties sa levi my lifestylestores in made ch de es nl uk massimodutti ad al am az ba by co cr cy do ec eg ge gt hk hn id il in kz mc mk mo my pa ph rs sa sg th tn tw ua vn maxfashion bh melijoe be cn fr kr uk michaelkors ca us missguided pl moncler ru monki nl pl moosejaw us mothercare sa mq se mrporter ie mrprice uk muji de uk offspring uk oldnavy ca parfois ad al am ao at ba be bg bh br by ch co cz de dk do ee eg es fi fr ge gr gt hr hu ie ir it jo kw lb lt lu lv ly ma mc mk mt mx mz nl om pa pe ph pl pt qa ro rs sa se si sk tn uk us ve ye patagonia ca popup br prettysecrets in pullandbear kr qa tw rakuten fr us ralphlauren cn de runnerspoint de runwaysale za sainsburys uk saksfifthavenue mo ru selfridges es fr hk kr mo sa tw shoedazzle us simons ca snipes de solebox de uk soliver de speedo us splashfashions ae bh sa stefaniamode au stories be stradivarius lb sg stylebop au ca cn de es fr hk jp kr mo sg us superbalist za thread uk us tods cn gr jp nl pt tommybahama bh de ph za tommyhilfiger jp us topbrands ru trendygolf uk undefeated us underarmour ca pe watchshop eu ru se wayfair ca de uk weekday eu wenz de westwingnow ch womanwithin us xxl dk zalandolounge 
de zalora id my tw zara kw mo pe uy zilingo my farfetch kr mo snkrs eu hibbett us vip cn nike hk kr lasula uk dolcegabbana uk koton tr schwab de burberry au be bg ch dk es fi ie jp my pt ro sg sk tw us shoecarnival us yoox at be fr hk ru uk zalando at ch de dk fi it pl stevemadden us zivame in aloyoga us bensherman eu pinkboutique uk timberland my timberlandtrans sg sfera es venteprivee es fr it limango de misssixty cn anniebing us liujo es it anayi jp brunellocucinelli cn amazon uk sportchek ca strellson at be ch de fr nl theory jp jcpenney hugoboss cn lululemon cn marinarinaldi at be cz de dk es fr hu ie it nl pl pt ro se uk us prada at ch de dk es fi gr ie it no pt se acne dk interightatjd cn petitbateau uk alaia ae at bh ca de dk fi fr hk it jp nl no pl pt se tr uk us za mango am bm by dz is kr gstar at au bg ch cz de ee hr hu ie lt lv pl ru si sk ssfshop kr terranovastyle at de es fr it nl uk tigerofsweden at be ca ch de dk es fi fr ie it nl no se uk vionicshoes uk only ca us replayjeans at au be ca ch de dk es eu fr it no pt se uk us vans at de dk es it nl pl se uk justcavalli ae at au ca de dk es fr hk it jp nl no pl pt ru se tr uk us za maxandco es it uk neimanmarcus cn aboutyou hu ro sk boardiesapparel au bonpoint es it dvf br ochirly cn calvinklein cz hu pl maxmara dk pl se patriziapepe at be bg ca ie lu us ,1
1508,16638108816.0,IssuesEvent,2021-06-04 03:24:54,ppy/osu-framework,https://api.github.com/repos/ppy/osu-framework,opened,Application execution may never end (deadlock in MIDI library),type:reliability,"A user provided a memory dump of a stuck instance of osu!, which could be traced back to the internals of the MIDI library we're using:

This looks to be a lock contention issue, but it's not immediately obvious why this is happening. The library itself is unmaintained so we may need to look at either consuming an alternative library or publishing our own package for it.
As an aside, it looks like another user has encountered this same issue and published a potential fix for it to their own fork:
https://github.com/Knuhl/managed-midi/commit/f8799f165dab619fdbed2e092c3fc66d64723392",True,"Application execution may never end (deadlock in MIDI library) - A user provided a memory dump of a stuck instance of osu!, which could be traced back to the internals of the MIDI library we're using:

This looks to be a lock contention issue, but it's not immediately obvious why this is happening. The library itself is unmaintained so we may need to look at either consuming an alternative library or publishing our own package for it.
As an aside, it looks like another user has encountered this same issue and published a potential fix for it to their own fork:
https://github.com/Knuhl/managed-midi/commit/f8799f165dab619fdbed2e092c3fc66d64723392",1,application execution may never end deadlock in midi library a user provided a memory dump of a stuck instance of osu which could be traced back to the internals of the midi library we re using this looks to be a lock contention issue but it s not immediately obvious why this is happening the library itself is unmaintained so we may need to look at either consuming an alternative library or publishing our own package for it as an aside it looks like another user has encountered this same issue and published a potential fix for it to their own fork ,1
3010,31156741151.0,IssuesEvent,2023-08-16 13:35:07,camunda/zeebe,https://api.github.com/repos/camunda/zeebe,reopened,Improve ZeebePartition recovery time with large state,kind/toil area/performance incident area/reliability component/db component/broker,"**Description**
We had an incident where the liveness probes failed after 45s due to large state. This is a cluster with 3 partitions, each with a RocksDB state of over 7GB. For each partition, it took about 10 seconds purely to open the DB, which includes copying the snapshot and opening the DB. Furthermore, it seems like each partition was recovered sequentially and not in parallel, from the logs, but I haven't verified that.
The goal here would be to try and either optimize the recovery time, or have it not be w.r.t the size of the state (ideally the second one).
One issue which might be the biggest culprit in this case is this one: https://github.com/camunda/zeebe/issues/5682
",True,"Improve ZeebePartition recovery time with large state - **Description**
We had an incident where the liveness probes failed after 45s due to large state. This is a cluster with 3 partitions, each with a RocksDB state of over 7GB. For each partition, it took about 10 seconds purely to open the DB, which includes copying the snapshot and opening the DB. Furthermore, it seems like each partition was recovered sequentially and not in parallel, from the logs, but I haven't verified that.
The goal here would be to try and either optimize the recovery time, or have it not be w.r.t the size of the state (ideally the second one).
One issue which might be the biggest culprit in this case is this one: https://github.com/camunda/zeebe/issues/5682
",1,improve zeebepartition recovery time with large state description we had an incident where the liveness probes failed after due to large state this is a cluster with partitions each with a rocksdb state of over for each partition it took about seconds purely to open the db which includes copying the snapshot and opening the db furthermore it seems like each partition was recovered sequentially and not in parallel from the logs but i haven t verified that the goal here would be to try and either optimize the recovery time or have it not be w r t the size of the state ideally the second one one issue which might be the biggest culprit in this case is this one ,1
445950,31388641443.0,IssuesEvent,2023-08-26 03:59:00,manojadams/metaforms-core,https://api.github.com/repos/manojadams/metaforms-core,opened,Allow overriding default components,documentation enhancement,"Allow a way so that user can use their own components.
### Description
In general, if a user wants to use the same schema at different places but different ui components, then it should be supported out of the box.",1.0,"Allow overriding default components - Allow a way so that user can use their own components.
### Description
In general, if a user wants to use the same schema at different places but different ui components, then it should be supported out of the box.",0,allow overriding default components allow a way so that user can use their own components description in general if a user wants to use the same schema at different places but different ui components then it should be supported out of the box ,0
262837,19844187342.0,IssuesEvent,2022-01-21 02:55:01,ISS-Mimic/Mimic,https://api.github.com/repos/ISS-Mimic/Mimic,closed,Continue to update Acronyms page,Documentation,"Update [acronyms page](https://github.com/ISS-Mimic/Mimic/wiki/Acronyms) to include acronyms used for ISS pressurized modules/components (i.e., PMA, COF, FGB, etc...). I don't mind working on this - it will help drill these into my head.",1.0,"Continue to update Acronyms page - Update [acronyms page](https://github.com/ISS-Mimic/Mimic/wiki/Acronyms) to include acronyms used for ISS pressurized modules/components (i.e., PMA, COF, FGB, etc...). I don't mind working on this - it will help drill these into my head.",0,continue to update acronyms page update to include acronyms used for iss pressurized modules components i e pma cof fgb etc i don t mind working on this it will help drill these into my head ,0
95347,19694327065.0,IssuesEvent,2022-01-12 10:30:28,Onelinerhub/onelinerhub,https://api.github.com/repos/Onelinerhub/onelinerhub,closed,"Short solution needed: ""How to delete repository"" (git)",help wanted good first issue code git,"Please help us write most modern and shortest code solution for this issue:
**How to delete repository** (technology: [git](https://onelinerhub.com/git))
### Fast way
Just write the code solution in the comments.
### Preferred way
1. Create pull request with a new code file inside [inbox folder](https://github.com/Onelinerhub/onelinerhub/tree/main/inbox).
2. Don't forget to use comments to make solution explained.
3. Link to this issue in comments of pull request.",1.0,"Short solution needed: ""How to delete repository"" (git) - Please help us write most modern and shortest code solution for this issue:
**How to delete repository** (technology: [git](https://onelinerhub.com/git))
### Fast way
Just write the code solution in the comments.
### Preferred way
1. Create pull request with a new code file inside [inbox folder](https://github.com/Onelinerhub/onelinerhub/tree/main/inbox).
2. Don't forget to use comments to make solution explained.
3. Link to this issue in comments of pull request.",0,short solution needed how to delete repository git please help us write most modern and shortest code solution for this issue how to delete repository technology fast way just write the code solution in the comments preferred way create pull request with a new code file inside don t forget to use comments to make solution explained link to this issue in comments of pull request ,0
1704,18906285208.0,IssuesEvent,2021-11-16 09:26:15,beattosetto/beattosetto,https://api.github.com/repos/beattosetto/beattosetto,closed,Make the button to admin page,frontend type:ui-ux fix from discussion type:reliability,Now admin (staff or superuser on website) can only manage the beatmap via the admin page. If we have the button to go straight to that element on admin page it will be better.,True,Make the button to admin page - Now admin (staff or superuser on website) can only manage the beatmap via the admin page. If we have the button to go straight to that element on admin page it will be better.,1,make the button to admin page now admin staff or superuser on website can only manage the beatmap via the admin page if we have the button to go straight to that element on admin page it will be better ,1
745,10299671417.0,IssuesEvent,2019-08-28 13:07:20,rook/rook,https://api.github.com/repos/rook/rook,reopened,Limit IOPS per volume via Storageclass,block filesystem reliability wontfix,"Is this a bug report or feature request?
* Feature Request
**Feature Request**
Are there any similar features already existing:
Not in Rook as far as I can tell.
In the OpenStack world there is something similar implemented for Cinder and Nova: https://ceph.com/geen-categorie/openstack-ceph-rbd-and-qos/
Also there seems to be a mechanism in docker itself:
https://docs.docker.com/engine/reference/commandline/run/
Or without further investigating in Ceph(RBD):
https://github.com/ceph/ceph/pull/17032
I guess the most promising would be to integrate (if it really works and exists and is working with Rook) the RBD feature, because most likely the docker feature only works on a physical device.
What should the feature do:
In an ideal world, I want to create different Storageclasses with different IOps rate limits, like:
```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: limited-rook-block
provisioner: rook.io/block
parameters:
pool: replicapool
iops_limit: 40
clusterName: rook
```
What would be solved through this feature:
""The noisy neighbor problem"". Without this feature Rook won't be useable in production, as you can slow down the whole cluster by e.g extracting a huge gzip file.
Does this have an impact on existing features:
Only if there should be a default limit. This could keep beginners from halting their whole platform, but I think an appropriate value is hard to guess.
**Environment**:
As this is a general feature request, it should work on every OS and setup or at least on mine :-)
That would be:
* OS (e.g. from /etc/os-release):
```
NAME=""CentOS Linux""
VERSION=""7 (Core)""
ID=""centos""
ID_LIKE=""rhel fedora""
VERSION_ID=""7""
PRETTY_NAME=""CentOS Linux 7 (Core)""
ANSI_COLOR=""0;31""
CPE_NAME=""cpe:/o:centos:centos:7""
HOME_URL=""https://www.centos.org/""
BUG_REPORT_URL=""https://bugs.centos.org/""
CENTOS_MANTISBT_PROJECT=""CentOS-7""
CENTOS_MANTISBT_PROJECT_VERSION=""7""
REDHAT_SUPPORT_PRODUCT=""centos""
REDHAT_SUPPORT_PRODUCT_VERSION=""7""
```
* Kernel (e.g. `uname -a`):
`Linux test-k8-1 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Jan 4 01:06:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux`
* Cloud provider or hardware configuration:
`bare metal`
* Rook version (use `rook version` inside of a Rook Pod):
`rook: v0.6.2 ` But this can be any future version
* Kubernetes version (use `kubectl version`):
```
Client Version: version.Info{Major:""1"", Minor:""8"", GitVersion:""v1.8.0"", GitCommit:""6e937839ac04a38cac63e6a7a306c5d035fe7b0a"", GitTreeState:""clean"", BuildDate:""2017-09-28T22:57:57Z"", GoVersion:""go1.8.3"", Compiler:""gc"", Platform:""linux/amd64""}
Server Version: version.Info{Major:""1"", Minor:""8+"", GitVersion:""v1.8.3-rancher1"", GitCommit:""beb8311a9f114ba92558d8d771a81b7fb38422ae"", GitTreeState:""clean"", BuildDate:""2017-11-14T00:54:19Z"", GoVersion:""go1.8.3"", Compiler:""gc"", Platform:""linux/amd64""}
```
* Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift):
`Rancher Kubernetes (Rancher 2.0)`
* Ceph status (use `ceph health` in the [Rook toolbox](https://Rook.io/docs/Rook/master/toolbox.html)):
-
",True,"Limit IOPS per volume via Storageclass - Is this a bug report or feature request?
* Feature Request
**Feature Request**
Are there any similar features already existing:
Not in Rook as far as I can tell.
In the OpenStack world there is something similar implemented for Cinder and Nova: https://ceph.com/geen-categorie/openstack-ceph-rbd-and-qos/
Also there seems to be a mechanism in docker itself:
https://docs.docker.com/engine/reference/commandline/run/
Or without further investigating in Ceph(RBD):
https://github.com/ceph/ceph/pull/17032
I guess the most promising would be to integrate (if it really works and exists and is working with Rook) the RBD feature, because most likely the docker feature only works on a physical device.
What should the feature do:
In an ideal world, I want to create different Storageclasses with different IOps rate limits, like:
```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: limited-rook-block
provisioner: rook.io/block
parameters:
pool: replicapool
iops_limit: 40
clusterName: rook
```
What would be solved through this feature:
""The noisy neighbor problem"". Without this feature Rook won't be useable in production, as you can slow down the whole cluster by e.g extracting a huge gzip file.
Does this have an impact on existing features:
Only if there should be a default limit. This could keep beginners from halting their whole platform, but I think an appropriate value is hard to guess.
**Environment**:
As this is a general feature request, it should work on every OS and setup or at least on mine :-)
That would be:
* OS (e.g. from /etc/os-release):
```
NAME=""CentOS Linux""
VERSION=""7 (Core)""
ID=""centos""
ID_LIKE=""rhel fedora""
VERSION_ID=""7""
PRETTY_NAME=""CentOS Linux 7 (Core)""
ANSI_COLOR=""0;31""
CPE_NAME=""cpe:/o:centos:centos:7""
HOME_URL=""https://www.centos.org/""
BUG_REPORT_URL=""https://bugs.centos.org/""
CENTOS_MANTISBT_PROJECT=""CentOS-7""
CENTOS_MANTISBT_PROJECT_VERSION=""7""
REDHAT_SUPPORT_PRODUCT=""centos""
REDHAT_SUPPORT_PRODUCT_VERSION=""7""
```
* Kernel (e.g. `uname -a`):
`Linux test-k8-1 3.10.0-693.11.6.el7.x86_64 #1 SMP Thu Jan 4 01:06:37 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux`
* Cloud provider or hardware configuration:
`bare metal`
* Rook version (use `rook version` inside of a Rook Pod):
`rook: v0.6.2 ` But this can be any future version
* Kubernetes version (use `kubectl version`):
```
Client Version: version.Info{Major:""1"", Minor:""8"", GitVersion:""v1.8.0"", GitCommit:""6e937839ac04a38cac63e6a7a306c5d035fe7b0a"", GitTreeState:""clean"", BuildDate:""2017-09-28T22:57:57Z"", GoVersion:""go1.8.3"", Compiler:""gc"", Platform:""linux/amd64""}
Server Version: version.Info{Major:""1"", Minor:""8+"", GitVersion:""v1.8.3-rancher1"", GitCommit:""beb8311a9f114ba92558d8d771a81b7fb38422ae"", GitTreeState:""clean"", BuildDate:""2017-11-14T00:54:19Z"", GoVersion:""go1.8.3"", Compiler:""gc"", Platform:""linux/amd64""}
```
* Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift):
`Rancher Kubernetes (Rancher 2.0)`
* Ceph status (use `ceph health` in the [Rook toolbox](https://Rook.io/docs/Rook/master/toolbox.html)):
-
",1,limit iops per volume via storageclass is this a bug report or feature request feature request feature request are there any similar features already existing not in rook as far as i can tell in the openstack world there is sth similar implemented for cinder and nova also there seems to be a mechanism in docker itself or without further investigating in ceph rbd i guess the most promising would be to integrate if it really works and exists and is working with rook the rbd feature because most likely the docker feature only works on a physical device what should the feature do in an ideal world i want to create different storageclasses with different iops rate limits like apiversion storage io kind storageclass metadata name limited rook block provisioner rook io block parameters pool replicapool iops limit clustername rook what would be solved through this feature the noisy neighbor problem without this feature rook won t be useable in production as you can slow down the whole cluster by e g extracting a huge gzip file does this have an impact on existing features only if there should be a default limiting this could help beginners from halting their whole platform but i think a appropriate value is hard to guess environment as this is a general feature request it should work on every os and setup or at least on mine that would be os e g from etc os release name centos linux version core id centos id like rhel fedora version id pretty name centos linux core ansi color cpe name cpe o centos centos home url bug report url centos mantisbt project centos centos mantisbt project version redhat support product centos redhat support product version kernel e g uname a linux test smp thu jan utc gnu linux cloud provider or hardware configuration bare metal rook version use rook version inside of a rook pod rook but this can be any future version kubernetes version use kubectl version client version version info major minor gitversion gitcommit gittreestate clean 
builddate goversion compiler gc platform linux server version version info major minor gitversion gitcommit gittreestate clean builddate goversion compiler gc platform linux kubernetes cluster type e g tectonic gke openshift rancher kubernetes rancher ceph status use ceph health in the ,1
212464,16451798722.0,IssuesEvent,2021-05-21 07:03:37,INSPIRE-MIF/helpdesk-validator,https://api.github.com/repos/INSPIRE-MIF/helpdesk-validator,reopened,Land Cover Vector - Not Recognizing Surface Geometry Type,ready for testing,"Hi Validator Team,
When running the Land Cover Vector test, we are seeing an error with the Conformance class: Application schema, Land Cover Vector test.
The error is happening under Constraints > LandCoverUnit geometry.
We're seeing the following two errors...
XML document 'service.xml', LandCoverUnit 'lcvLandCoverUnitS.4': The constraint 'The geometries of the LandCoverUnit feature types are points or surfaces' is violated.
XML document 'service.xml', LandCoverUnit 'lcvLandCoverUnitS.9': The constraint 'The geometries of the LandCoverUnit feature types are points or surfaces' is violated.
We're not exactly sure what this is looking for regarding the surface geometry. We have the following returned geometry nodes in the xml.
46.558500818000 6.794617301000 46.286064795000 7.710798154000 45.943604845000 6.830545962000 46.558500818000 6.794617301000 45.868603464000 9.938375131000 45.918615644000 11.204860428000 45.233325326000 11.222824758000 45.226999541000 9.947357296000 45.868603464000 9.938375131000
We've attached the complete returned XML for this test.
Thank you for your help.
[LC_LandCoverUnit.zip](https://github.com/inspire-eu-validation/community/files/5592172/LC_LandCoverUnit.zip)
",1.0,"Land Cover Vector - Not Recognizing Surface Geometry Type - Hi Validator Team,
When running the Land Cover Vector test, we are seeing an error with the Conformance class: Application schema, Land Cover Vector test.
The error is happening under Constraints > LandCoverUnit geometry.
We're seeing the following two errors...
XML document 'service.xml', LandCoverUnit 'lcvLandCoverUnitS.4': The constraint 'The geometries of the LandCoverUnit feature types are points or surfaces' is violated.
XML document 'service.xml', LandCoverUnit 'lcvLandCoverUnitS.9': The constraint 'The geometries of the LandCoverUnit feature types are points or surfaces' is violated.
We're not exactly sure what this is looking for regarding the surface geometry. We have the following returned geometry nodes in the xml.
46.558500818000 6.794617301000 46.286064795000 7.710798154000 45.943604845000 6.830545962000 46.558500818000 6.794617301000 45.868603464000 9.938375131000 45.918615644000 11.204860428000 45.233325326000 11.222824758000 45.226999541000 9.947357296000 45.868603464000 9.938375131000
We've attached the complete returned XML for this test.
Thank you for your help.
[LC_LandCoverUnit.zip](https://github.com/inspire-eu-validation/community/files/5592172/LC_LandCoverUnit.zip)
",0,land cover vector not recognizing surface geometry type hi validator team when running the land cover vector test we are seeing an error with the conformance class application schema land cover vector test the error is happening under constraints landcoverunit geometry we re seeing the following two errors xml document service xml landcoverunit lcvlandcoverunits the constraint the geometries of the landcoverunit feature types are points or surfaces is violated xml document service xml landcoverunit lcvlandcoverunits the constraint the geometries of the landcoverunit feature types are points or surfaces is violated we re not exactly sure what this is looking for regarding the surface geometry we have the following returned geometry nodes in the xml we ve attached the complete returned xml for this test thank you for your help ,0
2982,30794105366.0,IssuesEvent,2023-07-31 18:25:59,ppy/osu,https://api.github.com/repos/ppy/osu,closed,Multiplayer crashes due to incorrect threading surrounding `Room.Playlist` ,area:multiplayer type:reliability,"This happens because the `Playlist` is modified from arbitrary threads, but operated on as if it is assumed to always be update thread safe.
The case in this reported discussion is likely due to the `CopyFrom` operation, which clears and adds back each playlist item:
https://github.com/ppy/osu/blob/38702beabf47164fcac6db142faf7b9ddd392877/osu.Game/Screens/OnlinePlay/Components/RoomManager.cs#L124
called from:
https://github.com/ppy/osu/blob/38702beabf47164fcac6db142faf7b9ddd392877/osu.Game/Screens/OnlinePlay/Components/RoomManager.cs#L56
@smoogipoo is it a correct assumption that this should be run on the update thread?
### Discussed in https://github.com/ppy/osu/discussions/15986
Originally posted by **Theighlin** December 8, 2021
Nothing out of the ordinary, everything was working fine until it didn't, although some minutes before that a beatmap got stuck on ""importing"" on my panel in multi, i instantly checked logs and it said this:
2021-12-07 17:35:18 [important]: The imported beatmap set does not match the online version.
I quit and rejoined, it was there ready to be played.
A bit later the game crashed when another map was starting.
[performance.log](https://github.com/ppy/osu/files/7670679/performance.log)
[runtime.log](https://github.com/ppy/osu/files/7670680/runtime.log)
[database.log](https://github.com/ppy/osu/files/7670681/database.log)
[network.log](https://github.com/ppy/osu/files/7670682/network.log)
```csharp
2021-12-07 17:40:50 [verbose]: Screen changed ← Multiplayer
2021-12-07 17:40:50 [verbose]: Game-wide working beatmap updated to please load a beatmap! - no beatmaps available!
2021-12-07 17:41:34 [verbose]: Game-wide working beatmap updated to Paramore - Still Into You (Sped Up & Cut Ver.) (Froskya) [Butterflies]
2021-12-07 17:41:50 [error]: An unhandled error has occurred.
2021-12-07 17:41:50 [error]: System.NullReferenceException: Object reference not set to an instance of an object.
2021-12-07 17:41:50 [error]: at System.Linq.EnumerableSorter`2.ComputeKeys(TElement[] elements, Int32 count)
2021-12-07 17:41:50 [error]: at System.Linq.EnumerableSorter`1.ComputeMap(TElement[] elements, Int32 count)
2021-12-07 17:41:50 [error]: at System.Linq.EnumerableSorter`1.Sort(TElement[] elements, Int32 count)
2021-12-07 17:41:50 [error]: at System.Linq.OrderedEnumerable`1.ToArray()
2021-12-07 17:41:50 [error]: at osu.Game.Screens.OnlinePlay.Components.StarRatingRangeDisplay.updateRange(Object sender, NotifyCollectionChangedEventArgs e)
2021-12-07 17:41:50 [error]: at osu.Framework.Bindables.BindableList`1.add(T item, BindableList`1 caller)
2021-12-07 17:41:50 [error]: at osu.Framework.Bindables.BindableList`1.add(T item, BindableList`1 caller)
2021-12-07 17:41:50 [error]: at osu.Framework.Bindables.BindableList`1.add(T item, BindableList`1 caller)
2021-12-07 17:41:50 [error]: at osu.Game.Online.Multiplayer.MultiplayerClient.<>c__DisplayClass98_0.b__0()
2021-12-07 17:41:50 [error]: at osu.Framework.Threading.ScheduledDelegate.RunTaskInternal()
2021-12-07 17:41:50 [error]: at osu.Framework.Threading.Scheduler.Update()
2021-12-07 17:41:50 [error]: at osu.Framework.Graphics.Drawable.UpdateSubTree()
2021-12-07 17:41:50 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2021-12-07 17:41:50 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2021-12-07 17:41:50 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2021-12-07 17:41:50 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2021-12-07 17:41:50 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2021-12-07 17:41:50 [error]: at osu.Framework.Platform.GameHost.UpdateFrame()
2021-12-07 17:41:50 [error]: at osu.Framework.Threading.GameThread.processFrame()
2021-12-07 17:41:50 [verbose]: Unhandled exception has been allowed with 0 more allowable exceptions .
2021-12-07 17:41:50 [error]: An unhandled error has occurred.
2021-12-07 17:41:50 [error]: System.ArgumentOutOfRangeException: Index was out of range. Must be non-negative and less than the size of the collection. (Parameter 'index')
2021-12-07 17:41:50 [error]: at osu.Framework.Bindables.BindableList`1.removeAt(Int32 index, BindableList`1 caller)
2021-12-07 17:41:50 [error]: at osu.Framework.Bindables.BindableList`1.removeAt(Int32 index, BindableList`1 caller)
2021-12-07 17:41:50 [error]: at osu.Framework.Bindables.BindableList`1.removeAt(Int32 index, BindableList`1 caller)
2021-12-07 17:41:50 [error]: at osu.Game.Online.Multiplayer.MultiplayerClient.<>c__DisplayClass100_0.b__0()
2021-12-07 17:41:50 [error]: at osu.Framework.Threading.ScheduledDelegate.RunTaskInternal()
2021-12-07 17:41:50 [error]: at osu.Framework.Threading.Scheduler.Update()
2021-12-07 17:41:50 [error]: at osu.Framework.Graphics.Drawable.UpdateSubTree()
2021-12-07 17:41:50 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2021-12-07 17:41:50 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2021-12-07 17:41:50 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2021-12-07 17:41:50 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2021-12-07 17:41:50 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2021-12-07 17:41:50 [error]: at osu.Framework.Platform.GameHost.UpdateFrame()
2021-12-07 17:41:50 [error]: at osu.Framework.Threading.GameThread.processFrame()
2021-12-07 17:41:50 [verbose]: Unhandled exception has been denied .
```",True,"Multiplayer crashes due to incorrect threading surrounding `Room.Playlist` - This happens because the `Playlist` is modified from arbitrary threads, but operated on as if it is assumed to always be update thread safe.
The case in this reported discussion is likely due to the `CopyFrom` operation, which clears and adds back each playlist item:
https://github.com/ppy/osu/blob/38702beabf47164fcac6db142faf7b9ddd392877/osu.Game/Screens/OnlinePlay/Components/RoomManager.cs#L124
called from:
https://github.com/ppy/osu/blob/38702beabf47164fcac6db142faf7b9ddd392877/osu.Game/Screens/OnlinePlay/Components/RoomManager.cs#L56
@smoogipoo is it a correct assumption that this should be run on the update thread?
### Discussed in https://github.com/ppy/osu/discussions/15986
Originally posted by **Theighlin** December 8, 2021
Nothing out of the ordinary, everything was working fine until it didn't, although some minutes before that a beatmap got stuck on ""importing"" on my panel in multi, i instantly checked logs and it said this:
2021-12-07 17:35:18 [important]: The imported beatmap set does not match the online version.
I quit and rejoined, it was there ready to be played.
A bit later the game crashed when another map was starting.
[performance.log](https://github.com/ppy/osu/files/7670679/performance.log)
[runtime.log](https://github.com/ppy/osu/files/7670680/runtime.log)
[database.log](https://github.com/ppy/osu/files/7670681/database.log)
[network.log](https://github.com/ppy/osu/files/7670682/network.log)
```csharp
2021-12-07 17:40:50 [verbose]: Screen changed ← Multiplayer
2021-12-07 17:40:50 [verbose]: Game-wide working beatmap updated to please load a beatmap! - no beatmaps available!
2021-12-07 17:41:34 [verbose]: Game-wide working beatmap updated to Paramore - Still Into You (Sped Up & Cut Ver.) (Froskya) [Butterflies]
2021-12-07 17:41:50 [error]: An unhandled error has occurred.
2021-12-07 17:41:50 [error]: System.NullReferenceException: Object reference not set to an instance of an object.
2021-12-07 17:41:50 [error]: at System.Linq.EnumerableSorter`2.ComputeKeys(TElement[] elements, Int32 count)
2021-12-07 17:41:50 [error]: at System.Linq.EnumerableSorter`1.ComputeMap(TElement[] elements, Int32 count)
2021-12-07 17:41:50 [error]: at System.Linq.EnumerableSorter`1.Sort(TElement[] elements, Int32 count)
2021-12-07 17:41:50 [error]: at System.Linq.OrderedEnumerable`1.ToArray()
2021-12-07 17:41:50 [error]: at osu.Game.Screens.OnlinePlay.Components.StarRatingRangeDisplay.updateRange(Object sender, NotifyCollectionChangedEventArgs e)
2021-12-07 17:41:50 [error]: at osu.Framework.Bindables.BindableList`1.add(T item, BindableList`1 caller)
2021-12-07 17:41:50 [error]: at osu.Framework.Bindables.BindableList`1.add(T item, BindableList`1 caller)
2021-12-07 17:41:50 [error]: at osu.Framework.Bindables.BindableList`1.add(T item, BindableList`1 caller)
2021-12-07 17:41:50 [error]: at osu.Game.Online.Multiplayer.MultiplayerClient.<>c__DisplayClass98_0.b__0()
2021-12-07 17:41:50 [error]: at osu.Framework.Threading.ScheduledDelegate.RunTaskInternal()
2021-12-07 17:41:50 [error]: at osu.Framework.Threading.Scheduler.Update()
2021-12-07 17:41:50 [error]: at osu.Framework.Graphics.Drawable.UpdateSubTree()
2021-12-07 17:41:50 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2021-12-07 17:41:50 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2021-12-07 17:41:50 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2021-12-07 17:41:50 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2021-12-07 17:41:50 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2021-12-07 17:41:50 [error]: at osu.Framework.Platform.GameHost.UpdateFrame()
2021-12-07 17:41:50 [error]: at osu.Framework.Threading.GameThread.processFrame()
2021-12-07 17:41:50 [verbose]: Unhandled exception has been allowed with 0 more allowable exceptions .
2021-12-07 17:41:50 [error]: An unhandled error has occurred.
2021-12-07 17:41:50 [error]: System.ArgumentOutOfRangeException: Index was out of range. Must be non-negative and less than the size of the collection. (Parameter 'index')
2021-12-07 17:41:50 [error]: at osu.Framework.Bindables.BindableList`1.removeAt(Int32 index, BindableList`1 caller)
2021-12-07 17:41:50 [error]: at osu.Framework.Bindables.BindableList`1.removeAt(Int32 index, BindableList`1 caller)
2021-12-07 17:41:50 [error]: at osu.Framework.Bindables.BindableList`1.removeAt(Int32 index, BindableList`1 caller)
2021-12-07 17:41:50 [error]: at osu.Game.Online.Multiplayer.MultiplayerClient.<>c__DisplayClass100_0.b__0()
2021-12-07 17:41:50 [error]: at osu.Framework.Threading.ScheduledDelegate.RunTaskInternal()
2021-12-07 17:41:50 [error]: at osu.Framework.Threading.Scheduler.Update()
2021-12-07 17:41:50 [error]: at osu.Framework.Graphics.Drawable.UpdateSubTree()
2021-12-07 17:41:50 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2021-12-07 17:41:50 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2021-12-07 17:41:50 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2021-12-07 17:41:50 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2021-12-07 17:41:50 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2021-12-07 17:41:50 [error]: at osu.Framework.Platform.GameHost.UpdateFrame()
2021-12-07 17:41:50 [error]: at osu.Framework.Threading.GameThread.processFrame()
2021-12-07 17:41:50 [verbose]: Unhandled exception has been denied .
```",1,multiplayer crashes due to incorrect threading surrounding room playlist this happens because the playlist is modified from arbitrary threads but operated on as if it is assumed to always be update thread safe the case in this reported discussion is likely due to the copyfrom operation which clears and adds back each playlist item called from smoogipoo is it a correct assumption that this should be run on the update thread discussed in originally posted by theighlin december nothing out of the ordinary everything was working fine until it didn t although some minutes before that a beatmap got stuck on importing on my panel in multi i instantly checked logs and it said this the imported beatmap set does not match the online version i quit and rejoined it was there ready to be played a bit later the game crashed when another map was starting csharp screen changed ← multiplayer game wide working beatmap updated to please load a beatmap no beatmaps available game wide working beatmap updated to paramore still into you sped up cut ver froskya an unhandled error has occurred system nullreferenceexception object reference not set to an instance of an object at system linq enumerablesorter computekeys telement elements count at system linq enumerablesorter computemap telement elements count at system linq enumerablesorter sort telement elements count at system linq orderedenumerable toarray at osu game screens onlineplay components starratingrangedisplay updaterange object sender notifycollectionchangedeventargs e at osu framework bindables bindablelist add t item bindablelist caller at osu framework bindables bindablelist add t item bindablelist caller at osu framework bindables bindablelist add t item bindablelist caller at osu game online multiplayer multiplayerclient c b at osu framework threading scheduleddelegate runtaskinternal at osu framework threading scheduler update at osu framework graphics drawable updatesubtree at osu framework graphics containers 
compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework platform gamehost updateframe at osu framework threading gamethread processframe unhandled exception has been allowed with more allowable exceptions an unhandled error has occurred system argumentoutofrangeexception index was out of range must be non negative and less than the size of the collection parameter index at osu framework bindables bindablelist removeat index bindablelist caller at osu framework bindables bindablelist removeat index bindablelist caller at osu framework bindables bindablelist removeat index bindablelist caller at osu game online multiplayer multiplayerclient c b at osu framework threading scheduleddelegate runtaskinternal at osu framework threading scheduler update at osu framework graphics drawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework platform gamehost updateframe at osu framework threading gamethread processframe unhandled exception has been denied ,1
58490,14283103810.0,IssuesEvent,2020-11-23 10:32:00,benchabot/prettier,https://api.github.com/repos/benchabot/prettier,opened,"CVE-2020-7760 (Medium) detected in javascript-5.48.2.min.js, codemirror-5.48.4.min.js",security vulnerability,"## CVE-2020-7760 - Medium Severity Vulnerability
Vulnerable Libraries - javascript-5.48.2.min.js, codemirror-5.48.4.min.js
This affects the package codemirror before 5.58.2; the package org.apache.marmotta.webjars:codemirror before 5.58.2. The vulnerable regular expression is located in https://github.com/codemirror/CodeMirror/blob/cdb228ac736369c685865b122b736cd0d397836c/mode/javascript/javascript.js#L129. The ReDOS vulnerability of the regex is mainly due to the sub-pattern (\s|/\*.*?\*/)*
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2020-7760 (Medium) detected in javascript-5.48.2.min.js, codemirror-5.48.4.min.js - ## CVE-2020-7760 - Medium Severity Vulnerability
Vulnerable Libraries - javascript-5.48.2.min.js, codemirror-5.48.4.min.js
This affects the package codemirror before 5.58.2; the package org.apache.marmotta.webjars:codemirror before 5.58.2. The vulnerable regular expression is located in https://github.com/codemirror/CodeMirror/blob/cdb228ac736369c685865b122b736cd0d397836c/mode/javascript/javascript.js#L129. The ReDOS vulnerability of the regex is mainly due to the sub-pattern (\s|/\*.*?\*/)*
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in javascript min js codemirror min js cve medium severity vulnerability vulnerable libraries javascript min js codemirror min js javascript min js in browser code editing made bearable library home page a href path to dependency file prettier website pages playground index html path to vulnerable library prettier website pages playground index html dependency hierarchy x javascript min js vulnerable library codemirror min js in browser code editing made bearable library home page a href path to dependency file prettier website pages playground index html path to vulnerable library prettier website pages playground index html dependency hierarchy x codemirror min js vulnerable library found in head commit a href found in base branch master vulnerability details this affects the package codemirror before the package org apache marmotta webjars codemirror before the vulnerable regular expression is located in the redos vulnerability of the regex is mainly due to the sub pattern s publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution codemirror step up your open source security game with whitesource ,0
1515,16676637224.0,IssuesEvent,2021-06-07 17:00:52,hashicorp/consul,https://api.github.com/repos/hashicorp/consul,closed,debug: improvements to the debug command,theme/reliability theme/telemetry type/enhancement,"This issue is to track a number of improvements to the data produced by [consul debug](https://www.consul.io/commands/debug). These improvements should make it easier to get useful data from `consul debug`.
* [ ] only capture a single cpu profile and single `trace` for the entire duration, instead of a separate one for each interval. A single cpu profile and trace should contain all the same data, and is easier to consume than 4 or 5 separate profiles.
* [ ] capture a single delta `heap`, and `goroutine` profile, instead of a separate one for each interval
* [ ] change the metrics endpoint to return a stream of metrics each time the window ends, instead of having to poll for metrics, which results in most metrics being missed.
* [ ] only capture logs once, instead of once per interval
* [ ] add tests for the log capture to show that it properly captures all logs
* [ ] rename `cluster.json` to `members.json` to better match the name used by the cli (`consul members`)
",True,"debug: improvements to the debug command - This issue is to track a number of improvements to the data produced by [consul debug](https://www.consul.io/commands/debug). These improvements should make it easier to get useful data from `consul debug`.
* [ ] only capture a single cpu profile and single `trace` for the entire duration, instead of a separate one for each interval. A single cpu profile and trace should contain all the same data, and is easier to consume than 4 or 5 separate profiles.
* [ ] capture a single delta `heap`, and `goroutine` profile, instead of a separate one for each interval
* [ ] change the metrics endpoint to return a stream of metrics each time the window ends, instead of having to poll for metrics, which results in most metrics being missed.
* [ ] only capture logs once, instead of once per interval
* [ ] add tests for the log capture to show that it properly captures all logs
* [ ] rename `cluster.json` to `members.json` to better match the name used by the cli (`consul members`)
",1,debug improvements to the debug command this issue is to track a number of improvements to the data produced by these improvements should make it easier to get useful data from consul debug only capture a single cpu profile and single trace for the entire duration instead of a separate one for each interval a single cpu profile and trace should contain all the same data and is easier to consume than or separate profiles capture a single delta heap and goroutine profile instead of a separate one for each interval change the metrics endpoint to return a stream of metrics each time the window ends instead of having to poll for metrics which results in most metrics being missed only capture logs once instead of once per interval add tests for the log capture to show that it properly captures all logs rename cluster json to members json to better match the name used by the cli consul members ,1
2404,25167524855.0,IssuesEvent,2022-11-10 22:24:55,hyperlane-xyz/hyperlane-monorepo,https://api.github.com/repos/hyperlane-xyz/hyperlane-monorepo,closed,Use QuorumProvider / FallbackProvider in ts/rs,reliability epic,"- [x] https://github.com/abacus-network/abacus-monorepo/issues/868
- [x] https://github.com/abacus-network/abacus-monorepo/issues/869
- [x] https://github.com/abacus-network/abacus-monorepo/issues/870
- [x] https://github.com/abacus-network/abacus-monorepo/issues/871
- [ ] https://github.com/abacus-network/abacus-monorepo/issues/872
- [x] https://github.com/abacus-network/abacus-monorepo/issues/873
- [ ] #875
- [ ] #874 ",True,"Use QuorumProvider / FallbackProvider in ts/rs - - [x] https://github.com/abacus-network/abacus-monorepo/issues/868
- [x] https://github.com/abacus-network/abacus-monorepo/issues/869
- [x] https://github.com/abacus-network/abacus-monorepo/issues/870
- [x] https://github.com/abacus-network/abacus-monorepo/issues/871
- [ ] https://github.com/abacus-network/abacus-monorepo/issues/872
- [x] https://github.com/abacus-network/abacus-monorepo/issues/873
- [ ] #875
- [ ] #874 ",1,use quorumprovider fallbackprovider in ts rs ,1
1791,19846196485.0,IssuesEvent,2022-01-21 06:44:31,FoundationDB/fdb-kubernetes-operator,https://api.github.com/repos/FoundationDB/fdb-kubernetes-operator,closed,Change ProcessGroupStatus remove and excluded to timestamp,reliability,We should add two new fields for the `ProcessGroupStatus`: `deletionTimestamp` and `exclusionTimestamp` those fields will supersede the `remove` and `excluded` field. The benefit of a timestamp is to see when the operator changed that status without looking into the logs. We should implement that with one release transition phase (I don't assume that many people use the status field). For the transition phase we will have the operator set always both fields and will check both fields.,True,Change ProcessGroupStatus remove and excluded to timestamp - We should add two new fields for the `ProcessGroupStatus`: `deletionTimestamp` and `exclusionTimestamp` those fields will supersede the `remove` and `excluded` field. The benefit of a timestamp is to see when the operator changed that status without looking into the logs. We should implement that with one release transition phase (I don't assume that many people use the status field). For the transition phase we will have the operator set always both fields and will check both fields.,1,change processgroupstatus remove and excluded to timestamp we should add two new fields for the processgroupstatus deletiontimestamp and exclusiontimestamp those fields will supersede the remove and excluded field the benefit of a timestamp is to see when the operator changed that status without looking into the logs we should implement that with one release transition phase i don t assume that many people use the status field for the transition phase we will have the operator set always both fields and will check both fields ,1
576016,17068946973.0,IssuesEvent,2021-07-07 10:51:55,kubeapps/kubeapps,https://api.github.com/repos/kubeapps/kubeapps,closed,Split our OIDC docs per provider,component/docs kind/refactor priority/low size/XS,"### Description:
https://github.com/kubeapps/kubeapps/pull/2982#discussion_r652514798
As the current docs are becoming more detailed, perhaps it's better to split the docs and just point to the files containing the information per provider. This way we can be as wordy as we want without compromising the reader's experience.
",1.0,"Split our OIDC docs per provider - ### Description:
https://github.com/kubeapps/kubeapps/pull/2982#discussion_r652514798
As the current docs are becoming more detailed, perhaps it's better to split the docs and just point to the files containing the information per provider. This way we can be as wordy as we want without compromising the reader's experience.
",0,split our oidc docs per provider description as the current docs are becoming more detailed perhaps it s better to split the docs and just point to the files containing the information per provider this way we can be as wordy as we want without compromising the reader s experience ,0
2369,24948951939.0,IssuesEvent,2022-11-01 04:29:11,Azure/azure-sdk-for-java,https://api.github.com/repos/Azure/azure-sdk-for-java,closed,Track windowTimeout progress with reactor-team / investigate workaround for OverflowException,Client pillar-reliability amqp,"Currently, the EH API that receives a batch of events with a timeout uses Reactor `windowTimeout` operator. This operator lacks the support for backpressure, leading CX to run into OverflowException. Refer to [this](https://github.com/reactor/reactor-core/issues/1099) issue for details.
Creating this to track the progress of the above Reactor work item/investigation on any workaround for the limitation.
Related git issues are:
1. https://github.com/Azure/azure-sdk-for-java/issues/20841
and there are a couple of CX reported this offline. ",True,"Track windowTimeout progress with reactor-team / investigate workaround for OverflowException - Currently, the EH API that receives a batch of events with a timeout uses Reactor `windowTimeout` operator. This operator lacks the support for backpressure, leading CX to run into OverflowException. Refer to [this](https://github.com/reactor/reactor-core/issues/1099) issue for details.
Creating this to track the progress of the above Reactor work item/investigation on any workaround for the limitation.
Related git issues are:
1. https://github.com/Azure/azure-sdk-for-java/issues/20841
and there are a couple of CX reported this offline. ",1,track windowtimeout progress with reactor team investigate workaround for overflowexception currently the eh api that receives a batch of events with a timeout uses reactor windowtimeout operator this operator lacks the support for backpressure leading cx to run into overflowexception refer to issue for details creating this to track the progress of the above reactor work item investigation on any workaround for the limitation related git issues are and there are a couple of cx reported this offline ,1
254411,27376257525.0,IssuesEvent,2023-02-28 06:17:04,microsoft/ebpf-for-windows,https://api.github.com/repos/microsoft/ebpf-for-windows,closed,Github CI/CD TODOs on egress-policy,enhancement triaged security ci/cd,"### Describe the bug
reusable-test.yml: egress-policy: audit # TODO: change to 'egress-policy: block' after couple of runs
scorecards-analysis.yml: egress-policy: audit # TODO: change to 'egress-policy: block' after couple of runs
update-docs.yml: egress-policy: audit # TODO: change to 'egress-policy: block' after couple of runs
### OS information
_No response_
### Steps taken to reproduce bug
Code review
### Expected behavior
no TODOs remain
### Actual outcome
TODOs exist
### Additional details
_No response_",True,"Github CI/CD TODOs on egress-policy - ### Describe the bug
reusable-test.yml: egress-policy: audit # TODO: change to 'egress-policy: block' after couple of runs
scorecards-analysis.yml: egress-policy: audit # TODO: change to 'egress-policy: block' after couple of runs
update-docs.yml: egress-policy: audit # TODO: change to 'egress-policy: block' after couple of runs
### OS information
_No response_
### Steps taken to reproduce bug
Code review
### Expected behavior
no TODOs remain
### Actual outcome
TODOs exist
### Additional details
_No response_",0,github ci cd todos on egress policy describe the bug reusable test yml egress policy audit todo change to egress policy block after couple of runs scorecards analysis yml egress policy audit todo change to egress policy block after couple of runs update docs yml egress policy audit todo change to egress policy block after couple of runs os information no response steps taken to reproduce bug code review expected behavior no todos remain actual outcome todos exist additional details no response ,0
551,8555814137.0,IssuesEvent,2018-11-08 11:08:54,LiskHQ/lisk,https://api.github.com/repos/LiskHQ/lisk,closed,Node receives blocks during snapshotting ,*medium :hammer: reliability chain p2p performance,"### Expected behavior
The node should not receive blocks/transaction during snapshotting process.
### Actual behavior
Node receives blocks during snapshotting
```
[inf] 2018-09-25 08:23:19 | Verify->verifyBlock succeeded for block 8323462238787390927 at height 1616.
[inf] 2018-09-25 08:23:20 | Rebuilding accounts states, current round: 17, height: 1617
[WRN] 2018-09-25 08:23:20 | Discarded block that does not match with current chain: 13500374753101519740 height: 6298816 round: 62365 slot: 7375460 generator: 6a8d02899c66dfa2423b125f44d360be6da0669cedadde32e63e629cb2e3195c
[inf] 2018-09-25 08:23:20 | Verify->verifyBlock succeeded for block 8197888260741386974 at height 1617.
```
### Steps to reproduce
Run snapshotting on node which is running under devnet/testnet/mainnet (`1.1.0`)
### Which version(s) does this affect? (Environment, OS, etc...)
",True,"Node receives blocks during snapshotting - ### Expected behavior
The node should not receive blocks/transaction during snapshotting process.
### Actual behavior
Node receives blocks during snapshotting
```
[inf] 2018-09-25 08:23:19 | Verify->verifyBlock succeeded for block 8323462238787390927 at height 1616.
[inf] 2018-09-25 08:23:20 | Rebuilding accounts states, current round: 17, height: 1617
[WRN] 2018-09-25 08:23:20 | Discarded block that does not match with current chain: 13500374753101519740 height: 6298816 round: 62365 slot: 7375460 generator: 6a8d02899c66dfa2423b125f44d360be6da0669cedadde32e63e629cb2e3195c
[inf] 2018-09-25 08:23:20 | Verify->verifyBlock succeeded for block 8197888260741386974 at height 1617.
```
### Steps to reproduce
Run snapshotting on node which is running under devnet/testnet/mainnet (`1.1.0`)
### Which version(s) does this affect? (Environment, OS, etc...)
",1,node receives blocks during snapshotting expected behavior the node should not receive blocks transaction during snapshotting process actual behavior node receives blocks during snapshotting verify verifyblock succeeded for block at height rebuilding accounts states current round height discarded block that does not match with current chain height round slot generator verify verifyblock succeeded for block at height steps to reproduce run snapshotting on node which is running under devnet testnet mainnet which version s does this affect environment os etc ,1
703,9975602308.0,IssuesEvent,2019-07-09 13:26:40,dotnet/coreclr,https://api.github.com/repos/dotnet/coreclr,closed,Very long timers may end up firing very quickly,area-System.Threading reliability,"Repro:
```C#
using System;
using System.Threading;
class Program
{
static void Main()
{
var t1 = new Timer(c => Console.WriteLine(""timer 1""), null, TimeSpan.FromDays(30), Timeout.InfiniteTimeSpan);
var t2 = new Timer(c => Console.WriteLine(""timer 2""), null, 1000, -1);
Console.ReadLine();
GC.KeepAlive(t1);
GC.KeepAlive(t2);
}
}
```
This is a regression from 2.2 due to an impactful optimization we added early on in 3.0 to help with many short-lived firing timers. Such long timers may result in us overflowing some int-based calculations, and we end up putting these timers that shouldn't fire for days onto a short list that causes them to fire almost immediately.",True,"Very long timers may end up firing very quickly - Repro:
```C#
using System;
using System.Threading;
class Program
{
static void Main()
{
var t1 = new Timer(c => Console.WriteLine(""timer 1""), null, TimeSpan.FromDays(30), Timeout.InfiniteTimeSpan);
var t2 = new Timer(c => Console.WriteLine(""timer 2""), null, 1000, -1);
Console.ReadLine();
GC.KeepAlive(t1);
GC.KeepAlive(t2);
}
}
```
This is a regression from 2.2 due to an impactful optimization we added early on in 3.0 to help with many short-lived firing timers. Such long timers may result in us overflowing some int-based calculations, and we end up putting these timers that shouldn't fire for days onto a short list that causes them to fire almost immediately.",1,very long timers may end up firing very quickly repro c using system using system threading class program static void main var new timer c console writeline timer null timespan fromdays timeout infinitetimespan var new timer c console writeline timer null console readline gc keepalive gc keepalive this is a regression from due to an impactful optimization we added early on in to help with many short lived firing timers such long timers may result in us overflowing some int based calculations and we end up putting these timers that shouldn t fire for days onto a short list that causes them to fire almost immediately ,1
2962,30657470978.0,IssuesEvent,2023-07-25 13:04:42,MicrosoftDocs/azure-docs,https://api.github.com/repos/MicrosoftDocs/azure-docs,closed,Resiliency or reliability?,triaged assigned-to-author doc-enhancement Pri2 reliability/svc availability-zones/subsvc,"This is related to the first paragraph of the article, that introduces two principles of reliability.
Should the second sentence start with ""The goal of **resiliency** is to ...""
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: afb0d66d-e33a-62cc-e389-36accf9bb81e
* Version Independent ID: 46ce812f-5c90-b6d0-b871-eb067e845968
* Content: [Azure reliability documentation](https://learn.microsoft.com/en-us/azure/reliability/overview)
* Content Source: [articles/reliability/overview.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/reliability/overview.md)
* Service: **reliability**
* Sub-service: **availability-zones**
* GitHub Login: @anaharris-ms
* Microsoft Alias: **anaharris**",True,"Resiliency or reliability? - This is related to the first paragraph of the article, that introduces two principles of reliability.
Should the second sentence start with ""The goal of **resiliency** is to ...""
---
#### Document Details
⚠ *Do not edit this section. It is required for learn.microsoft.com ➟ GitHub issue linking.*
* ID: afb0d66d-e33a-62cc-e389-36accf9bb81e
* Version Independent ID: 46ce812f-5c90-b6d0-b871-eb067e845968
* Content: [Azure reliability documentation](https://learn.microsoft.com/en-us/azure/reliability/overview)
* Content Source: [articles/reliability/overview.md](https://github.com/MicrosoftDocs/azure-docs/blob/main/articles/reliability/overview.md)
* Service: **reliability**
* Sub-service: **availability-zones**
* GitHub Login: @anaharris-ms
* Microsoft Alias: **anaharris**",1,resiliency or reliability this is related to the first paragraph of the article that introduces two principles of reliability should the second sentence start with the goal of resiliency is to document details ⚠ do not edit this section it is required for learn microsoft com ➟ github issue linking id version independent id content content source service reliability sub service availability zones github login anaharris ms microsoft alias anaharris ,1
57260,3081252629.0,IssuesEvent,2015-08-22 14:44:13,bitfighter/bitfighter,https://api.github.com/repos/bitfighter/bitfighter,closed,No Kickback on spybugs,enhancement imported Priority-Medium wontfix,"_From [amginea4...@gmail.com](https://code.google.com/u/118060042413138816983/) on November 07, 2014 19:32:41_
Spy Bugs give to much kickback they can be used to get more boost then using a double tap boost remove the kickback from spybugs.
see issue https://code.google.com/p/bitfighter/issues/detail?id=477&thanks=477&ts=1415413568
_Original issue: http://code.google.com/p/bitfighter/issues/detail?id=478_",1.0,"No Kickback on spybugs - _From [amginea4...@gmail.com](https://code.google.com/u/118060042413138816983/) on November 07, 2014 19:32:41_
Spy Bugs give to much kickback they can be used to get more boost then using a double tap boost remove the kickback from spybugs.
see issue https://code.google.com/p/bitfighter/issues/detail?id=477&thanks=477&ts=1415413568
_Original issue: http://code.google.com/p/bitfighter/issues/detail?id=478_",0,no kickback on spybugs from on november spy bugs give to much kickback they can be used to get more boost then using a double tap boost remove the kickback from spybugs see issue original issue ,0
1177,13563599469.0,IssuesEvent,2020-09-18 08:47:35,dotnet/roslyn,https://api.github.com/repos/dotnet/roslyn,closed,Visual Studio crashes when I type couple of cyrillic letters in cs file,Area-IDE Bug Developer Community Tenet-Reliability,"_This issue has been moved from [a ticket on Developer Community](https://developercommunity.visualstudio.com/content/problem/1085582/visual-studio-crashes-when-i-type-couple-of-cyrill.html)._
---
[regression] [worked-in:16.5.4]
After upgrade to 16.6.2.
Visual Studio crashes, when I try to type cyrillic letters in *.cs file.
It is some codepage problem related to source files.
It happens in files with only english letters and only if the option is set:
Environment -> Documents -> Save documents as Unicode when data cannot be saved in codepage.
If it is not set, everythin is ok in all scenarios.
---
### Original Comments
#### Visual Studio Feedback System on 6/19/2020, 07:05 AM:
We have directed your feedback to the appropriate engineering team for further evaluation. The team will review the feedback and notify you about the next steps.
#### Rebecca Peng [MSFT] on 6/22/2020, 02:01 AM:
Hi customer,
Thanks for your feedback. In order for us to investigate this further, could you please provide following information:
Did you meet this issue only once or always?
This issue only occurs on cs file or all type of files?
We are looking forward to hearing from you soon.
Thanks
#### scazy on 6/22/2020, 04:30 AM:
1. It appears sometimes while a day.
2. Yes, just cs files. I can't reproduce it in other file types.
#### Visual Studio Feedback System on 6/22/2020, 06:53 PM:
We have directed your feedback to the appropriate engineering team for further evaluation. The team will review the feedback and notify you about the next steps.
---
### Original Solutions
(no solutions)",True,"Visual Studio crashes when I type couple of cyrillic letters in cs file - _This issue has been moved from [a ticket on Developer Community](https://developercommunity.visualstudio.com/content/problem/1085582/visual-studio-crashes-when-i-type-couple-of-cyrill.html)._
---
[regression] [worked-in:16.5.4]
After upgrade to 16.6.2.
Visual Studio crashes, when I try to type cyrillic letters in *.cs file.
It is some codepage problem related to source files.
It happens in files with only english letters and only if the option is set:
Environment -> Documents -> Save documents as Unicode when data cannot be saved in codepage.
If it is not set, everythin is ok in all scenarios.
---
### Original Comments
#### Visual Studio Feedback System on 6/19/2020, 07:05 AM:
We have directed your feedback to the appropriate engineering team for further evaluation. The team will review the feedback and notify you about the next steps.
#### Rebecca Peng [MSFT] on 6/22/2020, 02:01 AM:
Hi customer,
Thanks for your feedback. In order for us to investigate this further, could you please provide following information:
Did you meet this issue only once or always?
This issue only occurs on cs file or all type of files?
We are looking forward to hearing from you soon.
Thanks
#### scazy on 6/22/2020, 04:30 AM:
1. It appears sometimes while a day.
2. Yes, just cs files. I can't reproduce it in other file types.
#### Visual Studio Feedback System on 6/22/2020, 06:53 PM:
We have directed your feedback to the appropriate engineering team for further evaluation. The team will review the feedback and notify you about the next steps.
---
### Original Solutions
(no solutions)",1,visual studio crashes when i type couple of cyrillic letters in cs file this issue has been moved from after upgrade to visual studio crashes when i try to type cyrillic letters in cs file it is some codepage problem related to source files it happens in files with only english letters and only if the option is set environment documents save documents as unicode when data cannot be saved in codepage if it is not set everythin is ok in all scenarios original comments visual studio feedback system on am we have directed your feedback to the appropriate engineering team for further evaluation the team will review the feedback and notify you about the next steps rebecca peng on am hi customer thanks for your feedback in order for us to investigate this further could you please provide following information did you meet this issue only once or always this issue only occurs on cs file or all type of files we are looking forward to hearing from you soon thanks scazy on am it appears sometimes while a day yes just cs files i can t reproduce it in other file types visual studio feedback system on pm we have directed your feedback to the appropriate engineering team for further evaluation the team will review the feedback and notify you about the next steps original solutions no solutions ,1
2361,24930188029.0,IssuesEvent,2022-10-31 10:56:44,jasp-stats/jasp-issues,https://api.github.com/repos/jasp-stats/jasp-issues,closed,[Bug]: reliability error message with outdated version of JASP,Module: jaspReliability,"### Description
_No response_
### Purpose
_No response_
### Use-case
_No response_
### Is your feature request related to a problem?
I've received an error message whilst trying to conduct a confirmatory analysis with Chrons Alpha. It says this: The following problem(s) occurred while running the analysis: The variance-covariance matrix of the supplied data is not positive-definite. Please check if variables have many missings observations or are collinear The covariance matrix of the data is not invertible
### Describe the solution you would like
I need some guidance on how to remove this error and carry on with my analysis!
### Describe alternatives that you have considered
I've restarted JASP a few times, tried bringing each variable over individually
### Additional context
_No response_",True,"[Bug]: reliability error message with outdated version of JASP - ### Description
_No response_
### Purpose
_No response_
### Use-case
_No response_
### Is your feature request related to a problem?
I've received an error message whilst trying to conduct a confirmatory analysis with Chrons Alpha. It says this: The following problem(s) occurred while running the analysis: The variance-covariance matrix of the supplied data is not positive-definite. Please check if variables have many missings observations or are collinear The covariance matrix of the data is not invertible
### Describe the solution you would like
I need some guidance on how to remove this error and carry on with my analysis!
### Describe alternatives that you have considered
I've restarted JASP a few times, tried bringing each variable over individually
### Additional context
_No response_",1, reliability error message with outdated version of jasp description no response purpose no response use case no response is your feature request related to a problem i ve received an error message whilst trying to conduct a confirmatory analysis with chrons alpha it says this the following problem s occurred while running the analysis the variance covariance matrix of the supplied data is not positive definite please check if variables have many missings observations or are collinear the covariance matrix of the data is not invertible describe the solution you would like i need some guidance on how to remove this error and carry on with my analysis describe alternatives that you have considered i ve restarted jasp a few times tried bringing each variable over individually additional context no response ,1
346,6828208448.0,IssuesEvent,2017-11-08 19:37:20,m3db/m3db,https://api.github.com/repos/m3db/m3db,closed,Investigate long bootstrap times for the commit log bootstrapper,C: Filesystem P: High T: Perf T: Reliability,"This may be simply a performance issue or could be an edge case with the commit log iterator, but it seems when bootstrapping from multiple large commit log files the commit log bootstrapper takes an exceptionally long time - much longer than previously benchmarked on a single file.",True,"Investigate long bootstrap times for the commit log bootstrapper - This may be simply a performance issue or could be an edge case with the commit log iterator, but it seems when bootstrapping from multiple large commit log files the commit log bootstrapper takes an exceptionally long time - much longer than previously benchmarked on a single file.",1,investigate long bootstrap times for the commit log bootstrapper this may be simply a performance issue or could be an edge case with the commit log iterator but it seems when bootstrapping from multiple large commit log files the commit log bootstrapper takes an exceptionally long time much longer than previously benchmarked on a single file ,1
1691,18718587355.0,IssuesEvent,2021-11-03 09:09:37,beattosetto/beattosetto,https://api.github.com/repos/beattosetto/beattosetto,closed,Add case in beatmap infobox when some field is not have an information,frontend type:ui-ux area:beatmap fix from discussion type:reliability,"Add case when some field can be blank like tag or source.

",True,"Add case in beatmap infobox when some field is not have an information - Add case when some field can be blank like tag or source.

",1,add case in beatmap infobox when some field is not have an information add case when some field can be blank like tag or source ,1
1301,14708060280.0,IssuesEvent,2021-01-04 22:48:34,dotnet/roslyn,https://api.github.com/repos/dotnet/roslyn,closed,VS 2019 IDE crash on file open,Area-Compilers Language-VB Tenet-Reliability,"See also [AB#960718](https://devdiv.visualstudio.com/0bdbc590-a062-4c3f-b0f6-9383f67865ee/_workitems/edit/960718)
_This issue has been moved from [a ticket on Developer Community](https://developercommunity2.visualstudio.com/t/VS-2019-IDE-crash-on-file-open/676387)._
---
Reproducing steps:
1. Extract project from attached zip
2. Open NpType1.vbproj in IDE
3. Open file NpMessage.vb in IDE
IDE is hanging up in several seconds ...
Probably some enums or procedures is too long to VS 2019 IDE ...
---
### Original Comments
#### Fiona Niu[MSFT] on 8/5/2019, 01:55 AM:
Thank you for taking the time to log this issue! Could you please provide more information via the Visual Studio Feedback Tool(Help -> Send Feedback -> Report A Problem)so that we can conduct further research? The feedback tool will ensure that we collect the needed information for you without worrying about what to provide (recording, dump file or ETL trace).
Since this issue is now marked as Need More Info, that workflow is enabled in the Feedback Tool:
• Open Visual Studio Feedback tool.
• Click the banner letting you know that you have problems requesting your attention.
• Click this problem from the list
• Click "View their request and respond" from the problem details banner
• Add a comment, in the Attachments/Record: click Start Recording
• When the Steps Recorder tool appears, perform the steps that reproduce the problem.
• When you're done, choose the Stop Record button.
• Wait a few minutes for Visual Studio to collect and package the information that you recorded.
• Submit. You will be able to see the comment on Developer Community. For security reasons, your files come directly to us and don't appear on Developer Community.
#### design on 8/5/2019, 02:09 AM:
Dumps recording ...
#### design on 8/5/2019, 02:16 AM:
Trying dumps again ...
#### Fiona Niu[MSFT] on 8/5/2019, 07:42 PM:
Thanks a lot for providing the information. We have directed your feedback to the appropriate engineering team for further evaluation. The team will review the feedback and notify you about the next steps.
#### Feedback Bot on 3/27/2020, 08:41 AM:
This issue is currently being investigated. Our team will get back to you if either more information is needed, a workaround is available, or the issue is resolved.
---
### Original Solutions
(no solutions)",True,"VS 2019 IDE crash on file open - See also [AB#960718](https://devdiv.visualstudio.com/0bdbc590-a062-4c3f-b0f6-9383f67865ee/_workitems/edit/960718)
_This issue has been moved from [a ticket on Developer Community](https://developercommunity2.visualstudio.com/t/VS-2019-IDE-crash-on-file-open/676387)._
---
Reproducing steps:
1. Extract project from attached zip
2. Open NpType1.vbproj in IDE
3. Open file NpMessage.vb in IDE
IDE is hanging up in several seconds ...
Probably some enums or procedures is too long to VS 2019 IDE ...
---
### Original Comments
#### Fiona Niu[MSFT] on 8/5/2019, 01:55 AM:
Thank you for taking the time to log this issue! Could you please provide more information via the Visual Studio Feedback Tool(Help -> Send Feedback -> Report A Problem)so that we can conduct further research? The feedback tool will ensure that we collect the needed information for you without worrying about what to provide (recording, dump file or ETL trace).
Since this issue is now marked as Need More Info, that workflow is enabled in the Feedback Tool:
• Open Visual Studio Feedback tool.
• Click the banner letting you know that you have problems requesting your attention.
• Click this problem from the list
• Click "View their request and respond" from the problem details banner
• Add a comment, in the Attachments/Record: click Start Recording
• When the Steps Recorder tool appears, perform the steps that reproduce the problem.
• When you're done, choose the Stop Record button.
• Wait a few minutes for Visual Studio to collect and package the information that you recorded.
• Submit. You will be able to see the comment on Developer Community. For security reasons, your files come directly to us and don't appear on Developer Community.
#### design on 8/5/2019, 02:09 AM:
Dumps recording ...
#### design on 8/5/2019, 02:16 AM:
Trying dumps again ...
#### Fiona Niu[MSFT] on 8/5/2019, 07:42 PM:
Thanks a lot for providing the information. We have directed your feedback to the appropriate engineering team for further evaluation. The team will review the feedback and notify you about the next steps.
#### Feedback Bot on 3/27/2020, 08:41 AM:
This issue is currently being investigated. Our team will get back to you if either more information is needed, a workaround is available, or the issue is resolved.
---
### Original Solutions
(no solutions)",1,vs ide crash on file open see also this issue has been moved from reproducing steps extract project from attached zip open vbproj in ide open file npmessage vb in ide ide is hanging up in several seconds probably some enums or procedures is too long to vs ide original comments fiona niu on am thank you for taking the time to log this issue could you please provide more information via the visual studio feedback tool help gt send feedback gt report a problem so that we can conduct further research the feedback tool will ensure that we collect the needed information for you without worrying about what to provide recording dump file or etl trace since this issue is now marked as need more info that workflow is enabled in the feedback tool • open visual studio feedback tool • click the banner letting you know that you have problems requesting your attention • click this problem from the list • click quot view their request and respond quot from the problem details banner • add a comment in the attachments record click start recording • when the steps recorder tool appears perform the steps that reproduce the problem • when you re done choose the stop record button • wait a few minutes for visual studio to collect and package the information that you recorded • submit you will be able to see the comment on developer community for security reasons your files come directly to us and don t appear on developer community for the full instructions please see a target blank href for information about what data is collected see a target blank href we look forward to hearing from you design on am dumps recording design on am trying dumps again fiona niu on pm thanks a lot for providing the information we have directed your feedback to the appropriate engineering team for further evaluation the team will review the feedback and notify you about the next steps feedback bot on am this issue is currently being investigated our team will get back to you if either 
more information is needed a workaround is available or the issue is resolved original solutions no solutions ,1
357169,25176336655.0,IssuesEvent,2022-11-11 09:35:35,malcolmang/pe,https://api.github.com/repos/malcolmang/pe,opened,Manual Testing section lacking ,severity.Low type.DocumentationBug,"The `Manual Testing` section does not provide any additional information to aid in explaining how to go about testing the program. It only has references to other sections (like Getting Started or the User Guide), but doesn't provide any information of its own.
Example:

",1.0,"Manual Testing section lacking - The `Manual Testing` section does not provide any additional information to aid in explaining how to go about testing the program. It only has references to other sections (like Getting Started or the User Guide), but doesn't provide any information of its own.
Example:

",0,manual testing section lacking the manual testing section does not provide any additional information to aid in explaining how to go about testing the program it only has references to other sections like getting started or the user guide but doesn t provide any information of its own example ,0
336,6716123268.0,IssuesEvent,2017-10-14 03:00:17,dotnet/roslyn,https://api.github.com/repos/dotnet/roslyn,closed,Unbounded SQLite instances/connections contributing to OOM failures,Area-IDE Bug Resolution-Fixed Tenet-Reliability Urgency-Now,"**Version Used**: 15.3
Currently we fail to bound the number of instances of the following types which are created at runtime:
* `SQLitePCL.sqlite3`
* `Microsoft.CodeAnalysis.SQLite.Interop.SqlConnection`
Associated with these types is a pair of allocations in the native heap. One is 64,000 bytes, and the other is 425,600 bytes. Ordinarily, this would not be a problem. However, it appears that it is possible for the number of connections to grow over time, resulting in overwhelming memory pressure stemming from the (mis-)use of SQLite. The following image shows one such case:

After fixing this for 15.5, we should port the fix to 15.4 servicing.",True,"Unbounded SQLite instances/connections contributing to OOM failures - **Version Used**: 15.3
Currently we fail to bound the number of instances of the following types which are created at runtime:
* `SQLitePCL.sqlite3`
* `Microsoft.CodeAnalysis.SQLite.Interop.SqlConnection`
Associated with these types is a pair of allocations in the native heap. One is 64,000 bytes, and the other is 425,600 bytes. Ordinarily, this would not be a problem. However, it appears that it is possible for the number of connections to grow over time, resulting in overwhelming memory pressure stemming from the (mis-)use of SQLite. The following image shows one such case:

After fixing this for 15.5, we should port the fix to 15.4 servicing.",1,unbounded sqlite instances connections contributing to oom failures version used currently we fail to bound the number of instances of the following types which are created at runtime sqlitepcl microsoft codeanalysis sqlite interop sqlconnection associated with these types is a pair of allocations in the native heap one is bytes and the other is bytes ordinarily this would not be a problem however it appears that it is possible for the number of connections to grow over time resulting in overwhelming memory pressure stemming from the mis use of sqlite the following image shows one such case after fixing this for we should port the fix to servicing ,1
24297,3960228556.0,IssuesEvent,2016-05-02 04:29:43,voxpupuli/puppet-network,https://api.github.com/repos/voxpupuli/puppet-network,closed,Add validation for type values,Defect,"
Pending:
Puppet::Type::Network_config when validating the attribute options should be a descendant of the KeyValue property
# on conversion to specific type
# ./spec/unit/type/network_config_spec.rb:46
Puppet::Type::Network_config when validating the attribute value ipaddress should fail if a malformed address is used
# implementation of IP address validation
# ./spec/unit/type/network_config_spec.rb:81
Puppet::Type::Network_config when validating the attribute value ipaddress using the inet family should fail when passed an IPv6 address
# implementation of IP address validation
# ./spec/unit/type/network_config_spec.rb:65
Puppet::Type::Network_config when validating the attribute value ipaddress using the inet6 family should fail when passed an IPv4 address
# implementation of IP address validation
# ./spec/unit/type/network_config_spec.rb:75
Puppet::Type::Network_config when validating the attribute value netmask should validate a CIDR netmask
# Not yet implemented
# ./spec/unit/type/network_config_spec.rb:88
Puppet::Type::Network_config when validating the attribute value netmask should fail if an invalid CIDR netmask is used
# implementation of IP address validation
# ./spec/unit/type/network_config_spec.rb:89
This should be not pending.",1.0,"Add validation for type values -
Pending:
Puppet::Type::Network_config when validating the attribute options should be a descendant of the KeyValue property
# on conversion to specific type
# ./spec/unit/type/network_config_spec.rb:46
Puppet::Type::Network_config when validating the attribute value ipaddress should fail if a malformed address is used
# implementation of IP address validation
# ./spec/unit/type/network_config_spec.rb:81
Puppet::Type::Network_config when validating the attribute value ipaddress using the inet family should fail when passed an IPv6 address
# implementation of IP address validation
# ./spec/unit/type/network_config_spec.rb:65
Puppet::Type::Network_config when validating the attribute value ipaddress using the inet6 family should fail when passed an IPv4 address
# implementation of IP address validation
# ./spec/unit/type/network_config_spec.rb:75
Puppet::Type::Network_config when validating the attribute value netmask should validate a CIDR netmask
# Not yet implemented
# ./spec/unit/type/network_config_spec.rb:88
Puppet::Type::Network_config when validating the attribute value netmask should fail if an invalid CIDR netmask is used
# implementation of IP address validation
# ./spec/unit/type/network_config_spec.rb:89
This should be not pending.",0,add validation for type values pending puppet type network config when validating the attribute options should be a descendant of the keyvalue property on conversion to specific type spec unit type network config spec rb puppet type network config when validating the attribute value ipaddress should fail if a malformed address is used implementation of ip address validation spec unit type network config spec rb puppet type network config when validating the attribute value ipaddress using the inet family should fail when passed an address implementation of ip address validation spec unit type network config spec rb puppet type network config when validating the attribute value ipaddress using the family should fail when passed an address implementation of ip address validation spec unit type network config spec rb puppet type network config when validating the attribute value netmask should validate a cidr netmask not yet implemented spec unit type network config spec rb puppet type network config when validating the attribute value netmask should fail if an invalid cidr netmask is used implementation of ip address validation spec unit type network config spec rb this should be not pending ,0
259726,22534356269.0,IssuesEvent,2022-06-25 02:11:56,jenkinsci/winstone,https://api.github.com/repos/jenkinsci/winstone,closed,Rework 5 second sleep in `SimpleAccessLoggerTest#testSimpleConnection`,test good first issue,"https://github.com/jenkinsci/winstone/pull/229 added a 5 second sleep to the test to stabilize the build
@olamy [suggests](https://github.com/jenkinsci/winstone/pull/229/files#r904428052) that we should check every 100ms or so rather than waiting 5 seconds:
> would be better to add a loop retrying every maybe 100ms then if keep failing after 5000ms
because this add unconditionally 5s to the build",1.0,"Rework 5 second sleep in `SimpleAccessLoggerTest#testSimpleConnection` - https://github.com/jenkinsci/winstone/pull/229 added a 5 second sleep to the test to stabilize the build
@olamy [suggests](https://github.com/jenkinsci/winstone/pull/229/files#r904428052) that we should check every 100ms or so rather than waiting 5 seconds:
> would be better to add a loop retrying every maybe 100ms then if keep failing after 5000ms
because this add unconditionally 5s to the build",0,rework second sleep in simpleaccessloggertest testsimpleconnection added a second sleep to the test to stabilize the build olamy that we should check every or so rather than waiting seconds would be better to add a loop retrying every maybe then if keep failing after because this add unconditionally to the build,0
276473,23993617392.0,IssuesEvent,2022-09-14 05:04:47,brave/qa-resources,https://api.github.com/repos/brave/qa-resources,closed,Add another test for ENS-based domains,testsheet,Add a testcase for the `vitalik.eth` IPFS-based domain to the IPFS run.,1.0,Add another test for ENS-based domains - Add a testcase for the `vitalik.eth` IPFS-based domain to the IPFS run.,0,add another test for ens based domains add a testcase for the vitalik eth ipfs based domain to the ipfs run ,0
383689,11361189330.0,IssuesEvent,2020-01-26 13:17:56,RobotLocomotion/drake,https://api.github.com/repos/RobotLocomotion/drake,closed,PlanarSceneGraphVisualizer sometimes displays a box incorrectly,priority: medium team: dynamics type: bug,"perhaps the order of points is getting scrambled from some convex hull calculation?
the symptom is visible in the acrobot example:

the green triangle at the bottom is the base link -- with a single geometry element which is of type box.
All of the pieces to reproduce this are available in drake, but there is not currently a drake example with the acrobot + scenegraph visualizer. Repro is here:
https://github.com/RussTedrake/underactuated/blob/master/src/acrobot/balancing_lqr.py
This shouldn't depend on anything non-drake... should be able to run it by running it as a bazel python binary/test in your local drake dir.",1.0,"PlanarSceneGraphVisualizer sometimes displays a box incorrectly - perhaps the order of points is getting scrambled from some convex hull calculation?
the symptom is visible in the acrobot example:

the green triangle at the bottom is the base link -- with a single geometry element which is of type box.
All of the pieces to reproduce this are available in drake, but there is not currently a drake example with the acrobot + scenegraph visualizer. Repro is here:
https://github.com/RussTedrake/underactuated/blob/master/src/acrobot/balancing_lqr.py
This shouldn't depend on anything non-drake... should be able to run it by running it as a bazel python binary/test in your local drake dir.",0,planarscenegraphvisualizer sometimes displays a box incorrectly perhaps the order of points is getting scrambled from some convex hull calculation the symptom is visible in the acrobot example the green triangle at the bottom is the base link with a single geometry element which is of type box all of the pieces to reproduce this are available in drake but there is not currently a drake example with the acrobot scenegraph visualizer repro is here this shouldn t depend on anything non drake should be able to run it by running it as a bazel python binary test in your local drake dir ,0
2992,30821139958.0,IssuesEvent,2023-08-01 16:25:13,ppy/osu,https://api.github.com/repos/ppy/osu,closed,Skin editor crashes when attempting to open in multiplayer spectator,type:reliability area:skin-editor,"```csharp
2023-06-10 19:44:01 [error]: An unhandled error has occurred.
2023-06-10 19:44:01 [error]: System.ArgumentException: The item HUD already exists in this Dropdown.
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.UserInterface.Dropdown`1.addDropdownItem(LocalisableString text, T value)
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.UserInterface.Dropdown`1.setItems(IEnumerable`1 items)
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.UserInterface.Dropdown`1.set_Items(IEnumerable`1 value)
2023-06-10 19:44:01 [error]: at osu.Game.Overlays.SkinEditor.SkinEditor.targetChanged(ValueChangedEvent`1 target) in /home/runner/work/osu-auth-client/osu-auth-client/osu/osu.Game/Overlays/SkinEditor/SkinEditor.cs:line 351
2023-06-10 19:44:01 [error]: at osu.Framework.Bindables.Bindable`1.TriggerValueChange(T previousValue, Bindable`1 source, Boolean propagateToBindings, Boolean bypassChecks)
2023-06-10 19:44:01 [error]: at osu.Framework.Bindables.Bindable`1.set_Value(T value)
2023-06-10 19:44:01 [error]: at osu.Framework.Bindables.Bindable`1.SetDefault()
2023-06-10 19:44:01 [error]: at osu.Game.Overlays.SkinEditor.SkinEditor.g__loadBlueprintContainer|62_0() in /home/runner/work/osu-auth-client/osu-auth-client/osu/osu.Game/Overlays/SkinEditor/SkinEditor.cs:line 323
2023-06-10 19:44:01 [error]: at osu.Framework.Threading.ScheduledDelegate.RunTaskInternal()
2023-06-10 19:44:01 [error]: at osu.Framework.Threading.Scheduler.Update()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Drawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Platform.GameHost.UpdateFrame()
2023-06-10 19:44:01 [error]: at osu.Framework.Threading.GameThread.processFrame()
```
### Discussed in https://github.com/ppy/osu/discussions/23869
Originally posted by **soopax** June 11, 2023

# Version
https://github.com/ppy/osu/releases/tag/2023.610.0
# Logs
[runtime.log](https://github.com/ppy/osu/files/11713191/runtime.log)
",True,"Skin editor crashes when attempting to open in multiplayer spectator - ```csharp
2023-06-10 19:44:01 [error]: An unhandled error has occurred.
2023-06-10 19:44:01 [error]: System.ArgumentException: The item HUD already exists in this Dropdown.
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.UserInterface.Dropdown`1.addDropdownItem(LocalisableString text, T value)
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.UserInterface.Dropdown`1.setItems(IEnumerable`1 items)
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.UserInterface.Dropdown`1.set_Items(IEnumerable`1 value)
2023-06-10 19:44:01 [error]: at osu.Game.Overlays.SkinEditor.SkinEditor.targetChanged(ValueChangedEvent`1 target) in /home/runner/work/osu-auth-client/osu-auth-client/osu/osu.Game/Overlays/SkinEditor/SkinEditor.cs:line 351
2023-06-10 19:44:01 [error]: at osu.Framework.Bindables.Bindable`1.TriggerValueChange(T previousValue, Bindable`1 source, Boolean propagateToBindings, Boolean bypassChecks)
2023-06-10 19:44:01 [error]: at osu.Framework.Bindables.Bindable`1.set_Value(T value)
2023-06-10 19:44:01 [error]: at osu.Framework.Bindables.Bindable`1.SetDefault()
2023-06-10 19:44:01 [error]: at osu.Game.Overlays.SkinEditor.SkinEditor.g__loadBlueprintContainer|62_0() in /home/runner/work/osu-auth-client/osu-auth-client/osu/osu.Game/Overlays/SkinEditor/SkinEditor.cs:line 323
2023-06-10 19:44:01 [error]: at osu.Framework.Threading.ScheduledDelegate.RunTaskInternal()
2023-06-10 19:44:01 [error]: at osu.Framework.Threading.Scheduler.Update()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Drawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Graphics.Containers.CompositeDrawable.UpdateSubTree()
2023-06-10 19:44:01 [error]: at osu.Framework.Platform.GameHost.UpdateFrame()
2023-06-10 19:44:01 [error]: at osu.Framework.Threading.GameThread.processFrame()
```
### Discussed in https://github.com/ppy/osu/discussions/23869
Originally posted by **soopax** June 11, 2023

# Version
https://github.com/ppy/osu/releases/tag/2023.610.0
# Logs
[runtime.log](https://github.com/ppy/osu/files/11713191/runtime.log)
",1,skin editor crashes when attempting to open in multiplayer spectator csharp an unhandled error has occurred system argumentexception the item hud already exists in this dropdown at osu framework graphics userinterface dropdown adddropdownitem localisablestring text t value at osu framework graphics userinterface dropdown setitems ienumerable items at osu framework graphics userinterface dropdown set items ienumerable value at osu game overlays skineditor skineditor targetchanged valuechangedevent target in home runner work osu auth client osu auth client osu osu game overlays skineditor skineditor cs line at osu framework bindables bindable triggervaluechange t previousvalue bindable source boolean propagatetobindings boolean bypasschecks at osu framework bindables bindable set value t value at osu framework bindables bindable setdefault at osu game overlays skineditor skineditor g loadblueprintcontainer in home runner work osu auth client osu auth client osu osu game overlays skineditor skineditor cs line at osu framework threading scheduleddelegate runtaskinternal at osu framework threading scheduler update at osu framework graphics drawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable 
updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework graphics containers compositedrawable updatesubtree at osu framework platform gamehost updateframe at osu framework threading gamethread processframe discussed in originally posted by soopax june version logs ,1
29795,8408498070.0,IssuesEvent,2018-10-12 02:00:07,supercollider/supercollider,https://api.github.com/repos/supercollider/supercollider,closed,Compiling in release with gcc 8.2 causes FP math issues,comp: build comp: sclang,"Environment
-----------
* Your SuperCollider version: 3.9.3 (installed from the comunity repositories of arch)
* Your operating system and version: Arch Linux
Steps to reproduce (for bugs)
-----------------------------
```supercollider
(0/0).isNaN
// Please paste SuperCollider code here.
// It really helps if you try to simplify your example as much as possible.
```
Expected Behavior
-----------------
returns true
Current Behavior
----------------
returns false
isNaN is implemented by testing < and > of 0 ... and these tests are already not behaving as expected.
",1.0,"Compiling in release with gcc 8.2 causes FP math issues - Environment
-----------
* Your SuperCollider version: 3.9.3 (installed from the comunity repositories of arch)
* Your operating system and version: Arch Linux
Steps to reproduce (for bugs)
-----------------------------
```supercollider
(0/0).isNaN
// Please paste SuperCollider code here.
// It really helps if you try to simplify your example as much as possible.
```
Expected Behavior
-----------------
returns true
Current Behavior
----------------
returns false
isNaN is implemented by testing < and > of 0 ... and these tests are already not behaving as expected.
",0,compiling in release with gcc causes fp math issues environment your supercollider version installed from the comunity repositories of arch your operating system and version arch linux steps to reproduce for bugs supercollider isnan please paste supercollider code here it really helps if you try to simplify your example as much as possible expected behavior returns true current behavior returns false isnan is implemented by testing of and these tests are already not behaving as expected ,0
3034,31780750208.0,IssuesEvent,2023-09-12 17:15:23,sarah-pbdemo/sg-pbdemo,https://api.github.com/repos/sarah-pbdemo/sg-pbdemo,opened,Make employees feel secure with healthcare,🙌 reliability,"## 🎒 Background
We must improve call reliability as a competitive differentiator against GoToTalking\. 🏆
## 💭 Problem
Support has been forwarding a lot of tickets from customers who are dissatisfied with the current state of audio calls\. Calls are getting dropped and users say it's even worse user experience than GoToTalking\.
## 🔎 Discovery
Let's analyze these tickets, reach out to some customers to understand how painful this is today, and run more platform tests to identify optimizations that could be made to improve reliability\.
## **🎨 Design**
- Is there a change in design required for this initiative?
- Are there existing designs that can be leveraged?
## 🙈 Roles
- What user roles does this initiative impact?
## 💵 Pricing
- This should be available on all pricing packages that have access to insights\.
## 🙅♂️Out of scope
- List of what is out of scope of this initiative
## ⛓ Dependencies
- None
## **✅ Definition of Done**
- Completion of all tasks and stories related to this initiative
## 👨🔬 T**esting**
- End\-to\-end testing
## 🎯Measuring Success
- 25% decrease in total
## 📊 Analytics
- The change will be tested via these events: The current analytics can be found in this dashboard\.
## 🛳 Release Strategy
- Straight to GA\.
",True,"Make employees feel secure with healthcare - ## 🎒 Background
We must improve call reliability as a competitive differentiator against GoToTalking\. 🏆
## 💭 Problem
Support has been forwarding a lot of tickets from customers who are dissatisfied with the current state of audio calls\. Calls are getting dropped and users say it's even worse user experience than GoToTalking\.
## 🔎 Discovery
Let's analyze these tickets, reach out to some customers to understand how painful this is today, and run more platform tests to identify optimizations that could be made to improve reliability\.
## **🎨 Design**
- Is there a change in design required for this initiative?
- Are there existing designs that can be leveraged?
## 🙈 Roles
- What user roles does this initiative impact?
## 💵 Pricing
- This should be available on all pricing packages that have access to insights\.
## 🙅♂️Out of scope
- List of what is out of scope of this initiative
## ⛓ Dependencies
- None
## **✅ Definition of Done**
- Completion of all tasks and stories related to this initiative
## 👨🔬 T**esting**
- End\-to\-end testing
## 🎯Measuring Success
- 25% decrease in total
## 📊 Analytics
- The change will be tested via these events: The current analytics can be found in this dashboard\.
## 🛳 Release Strategy
- Straight to GA\.
",1,make employees feel secure with healthcare 🎒 background we must improve call reliability as a competitive differentiator against gototalking 🏆 💭 problem support has been forwarding a lot of tickets from customers who are dissatisfied with the current state of audio calls calls are getting dropped and users say it s even worse user experience than gototalking 🔎 discovery let s analyze these tickets reach out to some customers to understand how painful this is today and run more platform tests to identify optimizations that could be made to improve reliability 🎨 design is there a change in design required for this initiative are there existing designs that can be leveraged 🙈 roles what user roles does this initiative impact 💵 pricing this should be available on all pricing packages that have access to insights 🙅♂️out of scope list of what is out of scope of this initiative ⛓ dependencies none ✅ definition of done completion of all tasks and stories related to this initiative 👨🔬 t esting end to end testing 🎯measuring success decrease in total 📊 analytics the change will be tested via these events the current analytics can be found in this dashboard 🛳 release strategy straight to ga ,1
53915,6774488802.0,IssuesEvent,2017-10-27 10:33:40,hacktoberfest17/programming,https://api.github.com/repos/hacktoberfest17/programming,closed,improve the ui of the gh-pages.,Design good first issue Hacktoberfest help wanted,https://hacktoberfest17.github.io/programming/ - is the current ui. The index.html file resides inside the gh-pages branch.,1.0,improve the ui of the gh-pages. - https://hacktoberfest17.github.io/programming/ - is the current ui. The index.html file resides inside the gh-pages branch.,0,improve the ui of the gh pages is the current ui the index html file resides inside the gh pages branch ,0
962,11802932984.0,IssuesEvent,2020-03-18 22:46:55,NuGet/Home,https://api.github.com/repos/NuGet/Home,reopened,"NuGet 4.9.2 fails to install packages from nuget.org with ""Canceled"" error",Area:Plugin Area:Reliability Type:Bug,"## Details about Problem
NuGet product used: NuGet.exe
NuGet version 4.9.2:
OS: Windows
## Detailed repro steps so we can see the same problem
Execute package restore like that:
```
C:\Teamcity\BuildAgent\tools\NuGet.CommandLine.4.9.2\tools\NuGet.exe restore Z:\work\solution.sln -NoCache -Verbosity detailed -Source https://feed/nuget/ -Source -Source https://api.nuget.org/v3/index.json
```
Restore command fails with errors like that:
```
WARNING: Unable to find version '4.0.1' of package 'Microsoft.CSharp'.
https://api.nuget.org/v3/index.json: Canceled
Unable to find version '4.0.1' of package 'Microsoft.CSharp'.
https://api.nuget.org/v3/index.json: Canceled
https://feed/nuget/: Package 'Microsoft.CSharp.4.0.1' is not found on source 'https://feed/nuget/'.
```
## Other suggested things
### Verbose Logs
Despite `-Verbosity detailed` was passed it does not bring any additional details about cause of `Canceled` task status. So how to investigate the cause of that?",True,"NuGet 4.9.2 fails to install packages from nuget.org with ""Canceled"" error - ## Details about Problem
NuGet product used: NuGet.exe
NuGet version 4.9.2:
OS: Windows
## Detailed repro steps so we can see the same problem
Execute package restore like that:
```
C:\Teamcity\BuildAgent\tools\NuGet.CommandLine.4.9.2\tools\NuGet.exe restore Z:\work\solution.sln -NoCache -Verbosity detailed -Source https://feed/nuget/ -Source -Source https://api.nuget.org/v3/index.json
```
Restore command fails with errors like that:
```
WARNING: Unable to find version '4.0.1' of package 'Microsoft.CSharp'.
https://api.nuget.org/v3/index.json: Canceled
Unable to find version '4.0.1' of package 'Microsoft.CSharp'.
https://api.nuget.org/v3/index.json: Canceled
https://feed/nuget/: Package 'Microsoft.CSharp.4.0.1' is not found on source 'https://feed/nuget/'.
```
## Other suggested things
### Verbose Logs
Despite `-Verbosity detailed` was passed it does not bring any additional details about cause of `Canceled` task status. So how to investigate the cause of that?",1,nuget fails to install packages from nuget org with canceled error details about problem nuget product used nuget exe nuget version os windows detailed repro steps so we can see the same problem execute package restore like that c teamcity buildagent tools nuget commandline tools nuget exe restore z work solution sln nocache verbosity detailed source source source restore command fails with errors like that warning unable to find version of package microsoft csharp canceled unable to find version of package microsoft csharp canceled package microsoft csharp is not found on source other suggested things verbose logs despite verbosity detailed was passed it does not bring any additional details about cause of canceled task status so how to investigate the cause of that ,1
83899,3644697160.0,IssuesEvent,2016-02-15 11:07:16,mantidproject/mantid,https://api.github.com/repos/mantidproject/mantid,closed,SliceViewerWindow should have a refresh/update method to change the workspace data,Component: GUI Misc: Archived Priority: Low,"This issue was originally [TRAC 10446](http://trac.mantidproject.org/mantid/ticket/10446)
Original Reporter: @FedeMPouzols
This method would make it possible to update the slice viewer window with a different workspace/underlying data.
The idea is that if one is using a single slice viewer window instance, it should not be necessary to close it and open a new one to visualize a different workspace.
I'd say that this is possible through `SliceViewerWindow::getSlicer()`, and `SliceViewer::setWorkspace()`, but it needs to be tested (potentially amended) and documented.
This is related to/motivated by ticket http://trac.mantidproject.org/mantid/ticket/8091.
Destroying a slice viewer window and creating a new one does not seem to take a significant amount of time, at least in the few tests that I did. So I'd say that ticket has a low benefit/effort ratio and consequently low priority, as it may require careful testing of proper initialization, etc.
This ticket is a brother of http://trac.mantidproject.org/mantid/ticket/10445 and http://trac.mantidproject.org/mantid/ticket/10447.
",1.0,"SliceViewerWindow should have a refresh/update method to change the workspace data - This issue was originally [TRAC 10446](http://trac.mantidproject.org/mantid/ticket/10446)
Original Reporter: @FedeMPouzols
This method would make it possible to update the slice viewer window with a different workspace/underlying data.
The idea is that if one is using a single slice viewer window instance, it should not be necessary to close it and open a new one to visualize a different workspace.
I'd say that this is possible through `SliceViewerWindow::getSlicer()`, and `SliceViewer::setWorkspace()`, but it needs to be tested (potentially amended) and documented.
This is related to/motivated by ticket http://trac.mantidproject.org/mantid/ticket/8091.
Destroying a slice viewer window and creating a new one does not seem to take a significant amount of time, at least in the few tests that I did. So I'd say that ticket has a low benefit/effort ratio and consequently low priority, as it may require careful testing of proper initialization, etc.
This ticket is a brother of http://trac.mantidproject.org/mantid/ticket/10445 and http://trac.mantidproject.org/mantid/ticket/10447.
",0,sliceviewerwindow should have a refresh update method to change the workspace data this issue was originally original reporter fedempouzols this method would make it possible to update the slice viewer window with a different workspace underlying data the idea is that if one is using a single slice viewer window instance it should not be necessary to close it and open a new one to visualize a different workspace i d say that this is possible through sliceviewerwindow getslicer and sliceviewer setworkspace but it needs to be tested potentially amended and documented this is related to motivated by ticket destroying a slice viewer window and creating a new one does not seem to take a significant amount of time at least in the few tests that i did so i d say that ticket has a low benefit effort ratio and consequently low priority as it may require careful testing of proper initialization etc this ticket is a brother of and ,0
124937,10330532879.0,IssuesEvent,2019-09-02 14:54:33,istio/istio,https://api.github.com/repos/istio/istio,closed,WorkloadLabels race,area/networking kind/test failure,"```
==================
WARNING: DATA RACE
Read at 0x00c0011123a0 by goroutine 807:
istio.io/istio/pilot/pkg/networking/core/v1alpha3.(*ConfigGeneratorImpl).buildSidecarOutboundHTTPRouteConfig()
/home/prow/go/src/istio.io/istio/pilot/pkg/networking/core/v1alpha3/httproute.go:162 +0x6d4
istio.io/istio/pilot/pkg/networking/core/v1alpha3.(*ConfigGeneratorImpl).BuildHTTPRoutes()
/home/prow/go/src/istio.io/istio/pilot/pkg/networking/core/v1alpha3/httproute.go:48 +0x14a
istio.io/istio/pilot/pkg/proxy/envoy/v2.(*DiscoveryServer).generateRawRoutes()
/home/prow/go/src/istio.io/istio/pilot/pkg/proxy/envoy/v2/rds.go:59 +0x23a
istio.io/istio/pilot/pkg/proxy/envoy/v2.(*DiscoveryServer).pushRoute()
/home/prow/go/src/istio.io/istio/pilot/pkg/proxy/envoy/v2/rds.go:30 +0x6a
istio.io/istio/pilot/pkg/proxy/envoy/v2.(*DiscoveryServer).StreamAggregatedResources()
/home/prow/go/src/istio.io/istio/pilot/pkg/proxy/envoy/v2/ads.go:374 +0x33df
github.com/envoyproxy/go-control-plane/envoy/service/discovery/v2._AggregatedDiscoveryService_StreamAggregatedResources_Handler()
/home/prow/go/pkg/mod/github.com/envoyproxy/go-control-plane@v0.8.6/envoy/service/discovery/v2/ads.pb.go:195 +0xcd
google.golang.org/grpc.(*Server).processStreamingRPC()
/home/prow/go/pkg/mod/google.golang.org/grpc@v1.23.0/server.go:1199 +0x1535
google.golang.org/grpc.(*Server).handleStream()
/home/prow/go/pkg/mod/google.golang.org/grpc@v1.23.0/server.go:1279 +0x12e5
google.golang.org/grpc.(*Server).serveStreams.func1.1()
/home/prow/go/pkg/mod/google.golang.org/grpc@v1.23.0/server.go:710 +0xac
Previous write at 0x00c0011123a0 by goroutine 606:
istio.io/istio/pilot/pkg/proxy/envoy/v2.(*DiscoveryServer).WorkloadUpdate()
/home/prow/go/src/istio.io/istio/pilot/pkg/proxy/envoy/v2/eds.go:469 +0x40f
istio.io/istio/pilot/pkg/proxy/envoy/v2_test.TestLDSWithWorkloadLabelUpdate.func1()
/home/prow/go/src/istio.io/istio/pilot/pkg/proxy/envoy/v2/mem.go:111 +0x530
testing.tRunner()
/usr/local/go/src/testing/testing.go:865 +0x163
Goroutine 807 (running) created at:
google.golang.org/grpc.(*Server).serveStreams.func1()
/home/prow/go/pkg/mod/google.golang.org/grpc@v1.23.0/server.go:708 +0xb8
google.golang.org/grpc/internal/transport.(*http2Server).operateHeaders()
/home/prow/go/pkg/mod/google.golang.org/grpc@v1.23.0/internal/transport/http2_server.go:429 +0x1705
google.golang.org/grpc/internal/transport.(*http2Server).HandleStreams()
/home/prow/go/pkg/mod/google.golang.org/grpc@v1.23.0/internal/transport/http2_server.go:470 +0x3b2
google.golang.org/grpc.(*Server).serveStreams()
/home/prow/go/pkg/mod/google.golang.org/grpc@v1.23.0/server.go:706 +0x170
google.golang.org/grpc.(*Server).handleRawConn.func1()
/home/prow/go/pkg/mod/google.golang.org/grpc@v1.23.0/server.go:668 +0x4c
Goroutine 606 (running) created at:
testing.(*T).Run()
/usr/local/go/src/testing/testing.go:916 +0x65a
istio.io/istio/pilot/pkg/proxy/envoy/v2_test.TestLDSWithWorkloadLabelUpdate()
/home/prow/go/src/istio.io/istio/pilot/pkg/proxy/envoy/v2/lds_test.go:551 +0x331
testing.tRunner()
/usr/local/go/src/testing/testing.go:865 +0x163
==================
```
From https://prow.k8s.io/view/gcs/istio-prow/logs/istio-racetest-master/4590
Seems broken by https://github.com/istio/istio/pull/16501",1.0,"WorkloadLabels race - ```
==================
WARNING: DATA RACE
Read at 0x00c0011123a0 by goroutine 807:
istio.io/istio/pilot/pkg/networking/core/v1alpha3.(*ConfigGeneratorImpl).buildSidecarOutboundHTTPRouteConfig()
/home/prow/go/src/istio.io/istio/pilot/pkg/networking/core/v1alpha3/httproute.go:162 +0x6d4
istio.io/istio/pilot/pkg/networking/core/v1alpha3.(*ConfigGeneratorImpl).BuildHTTPRoutes()
/home/prow/go/src/istio.io/istio/pilot/pkg/networking/core/v1alpha3/httproute.go:48 +0x14a
istio.io/istio/pilot/pkg/proxy/envoy/v2.(*DiscoveryServer).generateRawRoutes()
/home/prow/go/src/istio.io/istio/pilot/pkg/proxy/envoy/v2/rds.go:59 +0x23a
istio.io/istio/pilot/pkg/proxy/envoy/v2.(*DiscoveryServer).pushRoute()
/home/prow/go/src/istio.io/istio/pilot/pkg/proxy/envoy/v2/rds.go:30 +0x6a
istio.io/istio/pilot/pkg/proxy/envoy/v2.(*DiscoveryServer).StreamAggregatedResources()
/home/prow/go/src/istio.io/istio/pilot/pkg/proxy/envoy/v2/ads.go:374 +0x33df
github.com/envoyproxy/go-control-plane/envoy/service/discovery/v2._AggregatedDiscoveryService_StreamAggregatedResources_Handler()
/home/prow/go/pkg/mod/github.com/envoyproxy/go-control-plane@v0.8.6/envoy/service/discovery/v2/ads.pb.go:195 +0xcd
google.golang.org/grpc.(*Server).processStreamingRPC()
/home/prow/go/pkg/mod/google.golang.org/grpc@v1.23.0/server.go:1199 +0x1535
google.golang.org/grpc.(*Server).handleStream()
/home/prow/go/pkg/mod/google.golang.org/grpc@v1.23.0/server.go:1279 +0x12e5
google.golang.org/grpc.(*Server).serveStreams.func1.1()
/home/prow/go/pkg/mod/google.golang.org/grpc@v1.23.0/server.go:710 +0xac
Previous write at 0x00c0011123a0 by goroutine 606:
istio.io/istio/pilot/pkg/proxy/envoy/v2.(*DiscoveryServer).WorkloadUpdate()
/home/prow/go/src/istio.io/istio/pilot/pkg/proxy/envoy/v2/eds.go:469 +0x40f
istio.io/istio/pilot/pkg/proxy/envoy/v2_test.TestLDSWithWorkloadLabelUpdate.func1()
/home/prow/go/src/istio.io/istio/pilot/pkg/proxy/envoy/v2/mem.go:111 +0x530
testing.tRunner()
/usr/local/go/src/testing/testing.go:865 +0x163
Goroutine 807 (running) created at:
google.golang.org/grpc.(*Server).serveStreams.func1()
/home/prow/go/pkg/mod/google.golang.org/grpc@v1.23.0/server.go:708 +0xb8
google.golang.org/grpc/internal/transport.(*http2Server).operateHeaders()
/home/prow/go/pkg/mod/google.golang.org/grpc@v1.23.0/internal/transport/http2_server.go:429 +0x1705
google.golang.org/grpc/internal/transport.(*http2Server).HandleStreams()
/home/prow/go/pkg/mod/google.golang.org/grpc@v1.23.0/internal/transport/http2_server.go:470 +0x3b2
google.golang.org/grpc.(*Server).serveStreams()
/home/prow/go/pkg/mod/google.golang.org/grpc@v1.23.0/server.go:706 +0x170
google.golang.org/grpc.(*Server).handleRawConn.func1()
/home/prow/go/pkg/mod/google.golang.org/grpc@v1.23.0/server.go:668 +0x4c
Goroutine 606 (running) created at:
testing.(*T).Run()
/usr/local/go/src/testing/testing.go:916 +0x65a
istio.io/istio/pilot/pkg/proxy/envoy/v2_test.TestLDSWithWorkloadLabelUpdate()
/home/prow/go/src/istio.io/istio/pilot/pkg/proxy/envoy/v2/lds_test.go:551 +0x331
testing.tRunner()
/usr/local/go/src/testing/testing.go:865 +0x163
==================
```
From https://prow.k8s.io/view/gcs/istio-prow/logs/istio-racetest-master/4590
Seems broken by https://github.com/istio/istio/pull/16501",0,workloadlabels race warning data race read at by goroutine istio io istio pilot pkg networking core configgeneratorimpl buildsidecaroutboundhttprouteconfig home prow go src istio io istio pilot pkg networking core httproute go istio io istio pilot pkg networking core configgeneratorimpl buildhttproutes home prow go src istio io istio pilot pkg networking core httproute go istio io istio pilot pkg proxy envoy discoveryserver generaterawroutes home prow go src istio io istio pilot pkg proxy envoy rds go istio io istio pilot pkg proxy envoy discoveryserver pushroute home prow go src istio io istio pilot pkg proxy envoy rds go istio io istio pilot pkg proxy envoy discoveryserver streamaggregatedresources home prow go src istio io istio pilot pkg proxy envoy ads go github com envoyproxy go control plane envoy service discovery aggregateddiscoveryservice streamaggregatedresources handler home prow go pkg mod github com envoyproxy go control plane envoy service discovery ads pb go google golang org grpc server processstreamingrpc home prow go pkg mod google golang org grpc server go google golang org grpc server handlestream home prow go pkg mod google golang org grpc server go google golang org grpc server servestreams home prow go pkg mod google golang org grpc server go previous write at by goroutine istio io istio pilot pkg proxy envoy discoveryserver workloadupdate home prow go src istio io istio pilot pkg proxy envoy eds go istio io istio pilot pkg proxy envoy test testldswithworkloadlabelupdate home prow go src istio io istio pilot pkg proxy envoy mem go testing trunner usr local go src testing testing go goroutine running created at google golang org grpc server servestreams home prow go pkg mod google golang org grpc server go google golang org grpc internal transport operateheaders home prow go pkg mod google golang org grpc internal transport server go google golang org grpc internal transport 
handlestreams home prow go pkg mod google golang org grpc internal transport server go google golang org grpc server servestreams home prow go pkg mod google golang org grpc server go google golang org grpc server handlerawconn home prow go pkg mod google golang org grpc server go goroutine running created at testing t run usr local go src testing testing go istio io istio pilot pkg proxy envoy test testldswithworkloadlabelupdate home prow go src istio io istio pilot pkg proxy envoy lds test go testing trunner usr local go src testing testing go from seems broken by ,0
2932,30316765236.0,IssuesEvent,2023-07-10 16:05:20,camunda/zeebe,https://api.github.com/repos/camunda/zeebe,closed,Cancel on-going remote stream registration on stream removal,kind/bug area/performance area/reliability component/transport,"**Describe the bug**
There is currently a potential race condition which would result in a remote stream existing server side, even though the client stream has gone away.
Since we register remote streams asynchronously, a remove request may be submitted client side, which will immediately remove it there. Then asynchronous removal requests are sent to the server. However, this can be interleaved with the asynchronous registration, resulting in a stream existing server side.
The impact is additional latency during a push, or possible unnecessary job activation if it was the last stream for this type. However, the stream will eventually get removed appropriately.
**Expected behavior**
Registration/removal of remote streams is sequenced, such that a removal request would cancel registration attempts, and queue the removal after whatever in-flight requests were sent are finished.
There is still a slight edge case around time outs, of course, but I think this is acceptable for now. The other option would be introducing even more coordination in the protocol, and I'd rather avoid this.",True,"Cancel on-going remote stream registration on stream removal - **Describe the bug**
There is currently a potential race condition which would result in a remote stream existing server side, even though the client stream has gone away.
Since we register remote streams asynchronously, a remove request may be submitted client side, which will immediately remove it there. Then asynchronous removal requests are sent to the server. However, this can be interleaved with the asynchronous registration, resulting in a stream existing server side.
The impact is additional latency during a push, or possible unnecessary job activation if it was the last stream for this type. However, the stream will eventually get removed appropriately.
**Expected behavior**
Registration/removal of remote streams is sequenced, such that a removal request would cancel registration attempts, and queue the removal after whatever in-flight requests were sent are finished.
There is still a slight edge case around time outs, of course, but I think this is acceptable for now. The other option would be introducing even more coordination in the protocol, and I'd rather avoid this.",1,cancel on going remote stream registration on stream removal describe the bug there is currently a potential race condition which would result in a remote stream existing server side even though the client stream has gone away since we register remote streams asynchronously a remove request may be submitted client side which will immediately remove it there then asynchronous removal requests are sent to the server however this can be interleaved with the asynchronous registration resulting in a stream existing server side the impact is additional latency during a push or possible unnecessary job activation if it was the last stream for this type however the stream will eventually get removed appropriately expected behavior registration removal of remote streams is sequenced such that a removal request would cancel registration attempts and queue the removal after whatever in flight requests were sent are finished there is still a slight edge case around time outs of course but i think this is acceptable for now the other option would be introducing even more coordination in the protocol and i d rather avoid this ,1
222481,24708974482.0,IssuesEvent,2022-10-19 21:54:13,lukebrogan-mend/railsgoat,https://api.github.com/repos/lukebrogan-mend/railsgoat,closed,CVE-2020-26247 (Medium) detected in nokogiri-1.10.10.gem - autoclosed,security vulnerability,"## CVE-2020-26247 - Medium Severity Vulnerability
Vulnerable Library - nokogiri-1.10.10.gem
Nokogiri (鋸) is an HTML, XML, SAX, and Reader parser. Among
Nokogiri's many features is the ability to search documents via XPath
or CSS3 selectors.
Nokogiri is a Rubygem providing HTML, XML, SAX, and Reader parsers with XPath and CSS selector support. In Nokogiri before version 1.11.0.rc4 there is an XXE vulnerability. XML Schemas parsed by Nokogiri::XML::Schema are trusted by default, allowing external resources to be accessed over the network, potentially enabling XXE or SSRF attacks. This behavior is counter to the security policy followed by Nokogiri maintainers, which is to treat all input as untrusted by default whenever possible. This is fixed in Nokogiri version 1.11.0.rc4.
Nokogiri is a Rubygem providing HTML, XML, SAX, and Reader parsers with XPath and CSS selector support. In Nokogiri before version 1.11.0.rc4 there is an XXE vulnerability. XML Schemas parsed by Nokogiri::XML::Schema are trusted by default, allowing external resources to be accessed over the network, potentially enabling XXE or SSRF attacks. This behavior is counter to the security policy followed by Nokogiri maintainers, which is to treat all input as untrusted by default whenever possible. This is fixed in Nokogiri version 1.11.0.rc4.
For more information on CVSS3 Scores, click here.
Suggested Fix
Type: Upgrade version
Release Date: 2020-12-30
Fix Resolution: 1.11.0.rc4
",0,cve medium detected in nokogiri gem autoclosed cve medium severity vulnerability vulnerable library nokogiri gem nokogiri 鋸 is an html xml sax and reader parser among nokogiri s many features is the ability to search documents via xpath or selectors library home page a href dependency hierarchy sassc rails gem root library railties gem actionpack gem rails dom testing gem x nokogiri gem vulnerable library found in head commit a href found in base branch master vulnerability details nokogiri is a rubygem providing html xml sax and reader parsers with xpath and css selector support in nokogiri before version there is an xxe vulnerability xml schemas parsed by nokogiri xml schema are trusted by default allowing external resources to be accessed over the network potentially enabling xxe or ssrf attacks this behavior is counter to the security policy followed by nokogiri maintainers which is to treat all input as untrusted by default whenever possible this is fixed in nokogiri version publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required low user interaction none scope unchanged impact metrics confidentiality impact low integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version release date fix resolution ,0
323302,23940962742.0,IssuesEvent,2022-09-11 22:14:21,zzap/WordPress-Advanced-administration-handbook,https://api.github.com/repos/zzap/WordPress-Advanced-administration-handbook,closed,Page: Editing Files,documentation,"**wordpress/edit-files.md**
https://github.com/zzap/WordPress-Advanced-administration-handbook/blob/main/wordpress/edit-files.md
- [x] Add into a Category
- [x] Page creation
- [x] Copy the original content
- [x] Format the content
- [x] Create a PR
",1.0,"Page: Editing Files - **wordpress/edit-files.md**
https://github.com/zzap/WordPress-Advanced-administration-handbook/blob/main/wordpress/edit-files.md
- [x] Add into a Category
- [x] Page creation
- [x] Copy the original content
- [x] Format the content
- [x] Create a PR
",0,page editing files wordpress edit files md add into a category page creation copy the original content format the content create a pr ,0
91897,18737234364.0,IssuesEvent,2021-11-04 09:17:31,cosmos/ibc-go,https://api.github.com/repos/cosmos/ibc-go,closed,Nitpicks: ICA Audit (TrySendTxFlow),good first issue code-hygiene 27-interchain-accounts audit-ica,"
## Summary of Issue
The following nits arose as part of the audit.
- [x] TrySendTx (relay.go) channel not found should return for which port id
- [ ] ~~Return nil instead of []byte{} in all returns for keeper/relay.go~~
- [x] Require that we pass in an array of sdk.Msg instead of single sdk.Msg (keeper/keeper.go)
- [ ] ~~Add channel/port id in capability not found error (createOutgoingPacket - keeper/relay.go)~~
- [x] Add channel/port id in get next send sequence not found error (createOutgoingPacket - keeper/relay.go)
- [x] Potentially panic on errors that indicate bugs in code? (keeper/keeper.go)
- [x] ErrUnkownPacketData - ErrUnknownDataType (errors.go)
- [x] relay.go: AuthenticateTx: Get Interchain address first then loop through signers and return an error if the expected signer is not ICA address
- [x] relay.go: ExecuteTx: reduce code by returning early when an error occurs in executeMsg. Add a comment for how the cache context is functioning (atomic execution)
- [ ] ~~Fix error type and wrapping in module.go AcknowledgePacket~~
____
#### For Admin Use
- [ ] Not duplicate issue
- [ ] Appropriate labels applied
- [ ] Appropriate contributors tagged/assigned
",1.0,"Nitpicks: ICA Audit (TrySendTxFlow) -
## Summary of Issue
The following nits arose as part of the audit.
- [x] TrySendTx (relay.go) channel not found should return for which port id
- [ ] ~~Return nil instead of []byte{} in all returns for keeper/relay.go~~
- [x] Require that we pass in an array of sdk.Msg instead of single sdk.Msg (keeper/keeper.go)
- [ ] ~~Add channel/port id in capability not found error (createOutgoingPacket - keeper/relay.go)~~
- [x] Add channel/port id in get next send sequence not found error (createOutgoingPacket - keeper/relay.go)
- [x] Potentially panic on errors that indicate bugs in code? (keeper/keeper.go)
- [x] ErrUnkownPacketData - ErrUnknownDataType (errors.go)
- [x] relay.go: AuthenticateTx: Get Interchain address first then loop through signers and return an error if the expected signer is not ICA address
- [x] relay.go: ExecuteTx: reduce code by returning early when an error occurs in executeMsg. Add a comment for how the cache context is functioning (atomic execution)
- [ ] ~~Fix error type and wrapping in module.go AcknowledgePacket~~
____
#### For Admin Use
- [ ] Not duplicate issue
- [ ] Appropriate labels applied
- [ ] Appropriate contributors tagged/assigned
",0,nitpicks ica audit trysendtxflow ☺ v ✰ thanks for opening an issue ✰ v before smashing the submit button please review the template v please also ensure that this is not a duplicate issue ☺ summary of issue the following nits arose as part of the audit trysendtx relay go channel not found should return for which port id return nil instead of byte in all returns for keeper relay go require that we pass in an array of sdk msg instead of single sdk msg keeper keeper go add channel port id in capability not found error createoutgoingpacket keeper relay go add channel port id in get next send sequence not found error createoutgoingpacket keeper relay go potentially panic on errors that indicate bugs in code keeper keeper go errunkownpacketdata errunknowndatatype errors go relay go authenticatetx get interchain address first then loop through signers and return an error if the expected signer is not ica address relay go execturetx reduce code by returning when error occurs on executemsg add comment for how cache context is functioning atomic execution fix error type and wrapping in module go acknowledgepacket for admin use not duplicate issue appropriate labels applied appropriate contributors tagged assigned ,0
75,3437903691.0,IssuesEvent,2015-12-13 16:00:55,ewxrjk/rsbackup,https://api.github.com/repos/ewxrjk/rsbackup,opened,Host availability check isn't good enough,reliability,"The current way to test whether a host is up is to see if SSHing to it succeeds.
However a host may be up but misconfigured in some way that means that SSH fails. Other possible approaches:
* ping it
* connect to some port
This should be configurable.",True,"Host availability check isn't good enough - The current way to test whether a host is up is to see if SSHing to it succeeds.
However a host may be up but misconfigured in some way that means that SSH fails. Other possible approaches:
* ping it
* connect to some port
This should be configurable.",1,host availability check isn t good enough the current way to test whether a host is up is to see if sshing to it succeeds however a host may be up but misconfigured in some way that means that ssh fails other possible approaches ping it connect to some port this should be configurable ,1
186413,15057182722.0,IssuesEvent,2021-02-03 21:17:17,forcecreators/apex-logs,https://api.github.com/repos/forcecreators/apex-logs,opened,Split out log explorer and apex logs into separate extensions,bug documentation,"Some users have reported issues preventing them from using the performance profiler due to a problem with SFDX not responding during the extension initialization.
When working correctly, the extension will phone into salesforce and check for any active traceflags and get metrics on the org's current log usage. For most users this operation appears to work without an issue. Occasionally, SFDX will hang, which delays the extension's ability to start up. This blocks both the log explorer and the profiler from starting.
Though these checks are necessary for the logging component of Apex Logs, they are not necessary for profiling, which relies solely on processing the log file itself. In order to increase the reliability of the performance profiler, we will be splitting out the log explorer into its own extension, which will remain in preview until we can define a better strategy for detecting hangs in SFDX.
Rollout:
Apex Logs v0.2.4 (2/4/2021) - Notify users of the upcoming change. This release will also allow users to disable log explorer in order to immediately resolve their block on using the profiler.
Log Explorer v0.0.1 (2/6/2021) - First version of log explorer released.
Apex Logs v0.3.0 (2/12/2021) - Log Explorer will be removed from the Apex Logs package. Users will be prompted to install Log Explorer.
We understand that this may pose a burden on those whose extensions are working as expected, but in the interest of stability and in the spirit of ""The Separation of Concerns"", we feel this is the right decision for the future improvement of both features.",1.0,"Split out log explorer and apex logs into separate extensions - Some users have reported issues preventing them from using the performance profiler due to a problem with SFDX not responding during the extension initialization.
When working correctly, the extension will phone into salesforce and check for any active traceflags and get metrics on the org's current log usage. For most users this operation appears to work without an issue. Occasionally, SFDX will hang, which delays the extension's ability to start up. This blocks both the log explorer and the profiler from starting.
Though these checks are necessary for the logging component of Apex Logs, they are not necessary for profiling, which relies solely on processing the log file itself. In order to increase the reliability of the performance profiler, we will be splitting out the log explorer into its own extension, which will remain in preview until we can define a better strategy for detecting hangs in SFDX.
Rollout:
Apex Logs v0.2.4 (2/4/2021) - Notify users of the upcoming change. This release will also allow users to disable log explorer in order to immediately resolve their block on using the profiler.
Log Explorer v0.0.1 (2/6/2021) - First version of log explorer released.
Apex Logs v0.3.0 (2/12/2021) - Log Explorer will be removed from the Apex Logs package. Users will be prompted to install Log Explorer.
We understand that this may pose a burden on those who's extensions are working as expected, but in the interest of stability and in the spirit of ""The Separation of Concerns"", we feel this is the right decision for the future improvement of both features.",0,split out log explorer and apex logs into separate extensions some users have reported issues preventing them from using the performance profiler due to a problem with sfdx not responding during the extension initialization when working correctly the extension will phone into salesforce and check for any active traceflags and get metrics on the orgs current log usage for most users this operation appears to work without an issue occasionally sfdx will hang which delay s the extensions ability to start up this blocks both the log explorer and the profiler from starting though these checks are necessary for the logging component of apex logs they are not necessary for profiling which relies solely on processing the log file itself in order to increase the reliability of the performance profiler we will be splitting out the log explorer into its own extension which will remain in preview until we can define a better strategy for detecting hangs in sfdx rollout apex logs notify users of the upcoming change this release will also allow users to disable log explorer in order to immediate resolve their block on using the profiler log explorer first version of log explorer released apex logs log explorer will be removed from the apex logs package users will be prompted to install log explorer we understand that this may pose a burden on those who s extensions are working as expected but in the interest of stability and in the spirit of the separation of concerns we feel this is the right decision for the future improvement of both features ,0
132128,18266136992.0,IssuesEvent,2021-10-04 08:40:41,artsking/linux-3.0.35_CVE-2020-15436_withPatch,https://api.github.com/repos/artsking/linux-3.0.35_CVE-2020-15436_withPatch,closed,CVE-2015-3331 (Medium) detected in linux-stable-rtv3.8.6 - autoclosed,security vulnerability,"## CVE-2015-3331 - Medium Severity Vulnerability
Vulnerable Library - linux-stable-rtv3.8.6
The __driver_rfc4106_decrypt function in arch/x86/crypto/aesni-intel_glue.c in the Linux kernel before 3.19.3 does not properly determine the memory locations used for encrypted data, which allows context-dependent attackers to cause a denial of service (buffer overflow and system crash) or possibly execute arbitrary code by triggering a crypto API call, as demonstrated by use of a libkcapi test program with an AF_ALG(aead) socket.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2015-3331 (Medium) detected in linux-stable-rtv3.8.6 - autoclosed - ## CVE-2015-3331 - Medium Severity Vulnerability
Vulnerable Library - linux-stable-rtv3.8.6
The __driver_rfc4106_decrypt function in arch/x86/crypto/aesni-intel_glue.c in the Linux kernel before 3.19.3 does not properly determine the memory locations used for encrypted data, which allows context-dependent attackers to cause a denial of service (buffer overflow and system crash) or possibly execute arbitrary code by triggering a crypto API call, as demonstrated by use of a libkcapi test program with an AF_ALG(aead) socket.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in linux stable autoclosed cve medium severity vulnerability vulnerable library linux stable julia cartwright s fork of linux stable rt git library home page a href found in head commit a href found in base branch master vulnerable source files arch crypto aesni intel glue c arch crypto aesni intel glue c vulnerability details the driver decrypt function in arch crypto aesni intel glue c in the linux kernel before does not properly determine the memory locations used for encrypted data which allows context dependent attackers to cause a denial of service buffer overflow and system crash or possibly execute arbitrary code by triggering a crypto api call as demonstrated by use of a libkcapi test program with an af alg aead socket publish date url a href cvss score details base score metrics exploitability metrics attack vector n a attack complexity n a privileges required n a user interaction n a scope n a impact metrics confidentiality impact n a integrity impact n a availability impact n a for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource ,0
1360,3918683899.0,IssuesEvent,2016-04-21 13:26:30,bazelbuild/bazel,https://api.github.com/repos/bazelbuild/bazel,closed,April release,category: release / binary P1 type: process,"I'll try to create the candidate today, from the release candidate in Google:
mainline: 759bbfedbd8acd1324211d68b69e302478428e32
cherry-picks:
- 1250fdac4c7769cfa200af8b4f9b061024356fea
- ba8700ee63efe26c1a09d288129ced18a265ff89
- Rollback of https://bazel-review.googlesource.com/#/c/3220/",1.0,"April release - I'll try to create the candidate today, from the release candidate in Google:
mainline: 759bbfedbd8acd1324211d68b69e302478428e32
cherry-picks:
- 1250fdac4c7769cfa200af8b4f9b061024356fea
- ba8700ee63efe26c1a09d288129ced18a265ff89
- Rollback of https://bazel-review.googlesource.com/#/c/3220/",0,april release i ll try to create the candidate today from the release candidate in google mainline cherry picks rollback of ,0
213326,23984712199.0,IssuesEvent,2022-09-13 17:59:56,ghc-dev/7938212_1972,https://api.github.com/repos/ghc-dev/7938212_1972,closed,ejs-locals-1.0.2.tgz: 1 vulnerabilities (highest severity is: 9.8) - autoclosed,security vulnerability," Vulnerable Library - ejs-locals-1.0.2.tgz
Path to dependency file: /package.json
Path to vulnerable library: /node_modules/ejs-locals/node_modules/ejs/package.json
",0,ejs locals tgz vulnerabilities highest severity is autoclosed vulnerable library ejs locals tgz path to dependency file package json path to vulnerable library node modules ejs locals node modules ejs package json found in head commit a href vulnerabilities cve severity cvss dependency type fixed in remediation available high ejs tgz transitive n a details cve vulnerable library ejs tgz embedded javascript templates library home page a href path to dependency file package json path to vulnerable library node modules ejs locals node modules ejs package json dependency hierarchy ejs locals tgz root library x ejs tgz vulnerable library found in head commit a href found in base branch main vulnerability details nodejs ejs versions older than is vulnerable to remote code execution due to weak input validation in ejs renderfile function publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution ,0
20373,13883444483.0,IssuesEvent,2020-10-18 12:01:40,sunpy/sunpy,https://api.github.com/repos/sunpy/sunpy,closed,codecov not picking up circleCI coverage reports,Infrastructure,"e.g. see https://codecov.io/gh/sunpy/sunpy/commit/acc0a0e751418a0bee841500f8dc9d4bb68ddee9/build. circleCI reports that a coverage report is uploaded fine, but codecov doesn't seem to be picking it up and counting the coverage. This is particularly obvious in `timeseries`, where codecov thinks there is no coverage of the `peek()` methods, but these should definitely be covered by the figure tests on circleCI.",1.0,"codecov not picking up circleCI coverage reports - e.g. see https://codecov.io/gh/sunpy/sunpy/commit/acc0a0e751418a0bee841500f8dc9d4bb68ddee9/build. circleCI reports that a coverage report is uploaded fine, but codecov doesn't seem to be picking it up and counting the coverage. This is particularly obvious in `timeseries`, where codecov thinks there is no coverage of the `peek()` methods, but these should definitely be covered by the figure tests on circleCI.",0,codecov not picking up circleci coverage reports e g see circleci reports that a coverage report is uploaded fine but codecov doesn t seem to be picking it up and counting the coverage this is particularly obvious in timeseries where codecov thinks there is no coverage of the peek methods but these should definitely be covered by the figure tests on circleci ,0
2510,25938333688.0,IssuesEvent,2022-12-16 15:59:23,kubernetes/kubernetes,https://api.github.com/repos/kubernetes/kubernetes,closed,Kube-apiserver 1.24 CPU is pretty high compared to 1.23 and not stable,kind/support lifecycle/rotten wg/api-expression needs-triage wg/reliability sig/k8s-infra,"### What happened?
1) kube-apiserver takes almost 20 minutes to start
2) kube-apiserver uses very high CPU compared to kubernetes 1.23
3) kube-apiserver prints many errors when processing incoming requests
```
E0713 02:42:47.862757 1 wrap.go:53] timeout or abort while handling: method=GET URI=""/api/v1/namespaces/kube-system/pods?limit=500&resourceVersion=0"" audit-ID=""642da75a-2c2b-4fb2-a744-56064c151da3""
E0713 02:42:47.862839 1 wrap.go:53] timeout or abort while handling: method=GET URI=""/api/v1/namespaces/kube-system/services?limit=500&resourceVersion=0"" audit-ID=""5a4b2eca-fe36-4391-80df-714081520bad""
E0713 02:42:47.862888 1 wrap.go:53] timeout or abort while handling: method=GET URI=""/api/v1/namespaces/kube-system/endpoints?limit=500&resourceVersion=0"" audit-ID=""cecf5bbe-2957-4a53-9d38-dd33dc13a90d""
E0713 02:42:47.862923 1 wrap.go:53] timeout or abort while handling: method=GET URI=""/api/v1/namespaces/core/endpoints?limit=500&resourceVersion=0"" audit-ID=""59f4be57-1c39-4119-80f4-ac4d6089cab5""
E0713 02:42:47.862956 1 wrap.go:53] timeout or abort while handling: method=GET URI=""/api/v1/namespaces/core/pods?limit=500&resourceVersion=0"" audit-ID=""f441c778-2419-4c12-9a6c-7e030500acbc""
E0713 02:42:47.863771 1 writers.go:118] apiserver was unable to write a JSON response: http: Handler timeout
E0713 02:42:47.863893 1 writers.go:118] apiserver was unable to write a JSON response: http2: stream closed
E0713 02:42:47.863941 1 wrap.go:53] timeout or abort while handling: method=GET URI=""/api/v1/namespaces/dca/pods?limit=500&resourceVersion=0"" audit-ID=""b105e20f-b08f-4570-a11c-b6cc68a1eddc""
E0713 02:42:47.877597 1 timeout.go:141] post-timeout activity - time-elapsed: 14.675542ms, GET ""/api/v1/namespaces/kube-system/endpoints"" result:
E0713 02:42:47.877645 1 timeout.go:141] post-timeout activity - time-elapsed: 14.781207ms, GET ""/api/v1/namespaces/kube-system/services"" result:
E0713 02:42:47.877680 1 timeout.go:141] post-timeout activity - time-elapsed: 14.741713ms, GET ""/api/v1/namespaces/core/endpoints"" result:
E0713 02:42:47.893808 1 wrap.go:53] timeout or abort while handling: method=GET URI=""/api/v1/namespaces/core/pods?limit=500&resourceVersion=0"" audit-ID=""2d3a6ac8-465a-41e5-a6bd-947e1f2aaae5""
E0713 02:42:47.897656 1 writers.go:118] apiserver was unable to write a JSON response: http2: stream closed
E0713 02:42:47.924599 1 timeout.go:141] post-timeout activity - time-elapsed: 6.327µs, GET ""/api/v1/namespaces/kube-system/pods"" result:
E0713 02:42:47.926963 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:""http: Handler timeout""}: http: Handler timeout
E0713 02:42:47.933950 1 writers.go:131] apiserver was unable to write a fallback JSON response: http: Handler timeout
```
**CPU usage in 1.23 and 1.24.2**
policy-manager is a business pod which doesn't call kube-apiserver

### What did you expect to happen?
1) kube-apiserver should start as quickly as it does in 1.23.
2) kube-apiserver should process requests as before, with no timeout errors printed.
### How can we reproduce it (as minimally and precisely as possible)?
1) install kube-apiserver 1.24.2
2) start some pods which use high CPU but do not call the kubernetes apiserver
### Anything else we need to know?
when kube-apiserver error happens, etcd works fine
**etcd logs are below**
```
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2022-07-13 02:40:39.467729 I | etcdmain: etcd Version: 3.4.16
2022-07-13 02:40:39.467844 I | etcdmain: Git SHA: d19fbe541
2022-07-13 02:40:39.467853 I | etcdmain: Go Version: go1.12.17
2022-07-13 02:40:39.467861 I | etcdmain: Go OS/Arch: linux/amd64
2022-07-13 02:40:39.467870 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4
2022-07-13 02:40:39.467957 N | etcdmain: the server is already initialized as member before, starting as etcd member...
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2022-07-13 02:40:39.468011 I | embed: peerTLS: cert = /etc/etcd/ssl/etcd-server.crt, key = /etc/etcd/ssl/etcd-server.key, trusted-ca = /etc/etcd/ssl/ca.crt, client-cert-auth = true, crl-file =
2022-07-13 02:40:39.469236 I | embed: name = blue110.blue.qa.opsware.com
2022-07-13 02:40:39.469256 I | embed: data dir = /var/etcd/data
2022-07-13 02:40:39.469266 I | embed: member dir = /var/etcd/data/member
2022-07-13 02:40:39.469275 I | embed: heartbeat = 100ms
2022-07-13 02:40:39.469286 I | embed: election = 1000ms
2022-07-13 02:40:39.469295 I | embed: snapshot count = 100000
2022-07-13 02:40:39.469310 I | embed: advertise client URLs = https://blue110.blue.qa.opsware.com:4001
2022-07-13 02:40:39.469321 I | embed: initial advertise peer URLs = https://blue110.blue.qa.opsware.com:2380
2022-07-13 02:40:39.469333 I | embed: initial cluster =
2022-07-13 02:40:41.539279 I | etcdserver: recovered store from snapshot at index 1200012
2022-07-13 02:40:41.539973 I | mvcc: restore compact to 1147344
2022-07-13 02:40:42.153077 I | etcdserver: restarting member 4cce519c2c414788 in cluster 84b2ac18d71aa314 at commit index 1250116
raft2022/07/13 02:40:42 INFO: 4cce519c2c414788 switched to configuration voters=(5534450723284141960)
raft2022/07/13 02:40:42 INFO: 4cce519c2c414788 became follower at term 5
raft2022/07/13 02:40:42 INFO: newRaft 4cce519c2c414788 [peers: [4cce519c2c414788], term: 5, commit: 1250116, applied: 1200012, lastindex: 1250116, lastterm: 5]
2022-07-13 02:40:42.155576 I | etcdserver/api: enabled capabilities for version 3.4
2022-07-13 02:40:42.155608 I | etcdserver/membership: added member 4cce519c2c414788 [https://blue110.blue.qa.opsware.com:2380] to cluster 84b2ac18d71aa314 from store
2022-07-13 02:40:42.155624 I | etcdserver/membership: set the cluster version to 3.4 from store
2022-07-13 02:40:42.156393 W | auth: simple token is not cryptographically signed
2022-07-13 02:40:42.157122 I | mvcc: restore compact to 1147344
2022-07-13 02:40:42.171033 I | etcdserver: starting server... [version: 3.4.16, cluster version: 3.4]
2022-07-13 02:40:42.171652 I | etcdserver: 4cce519c2c414788 as single-node; fast-forwarding 9 ticks (election ticks 10)
2022-07-13 02:40:42.175260 I | embed: ClientTLS: cert = /etc/etcd/ssl/etcd-server.crt, key = /etc/etcd/ssl/etcd-server.key, trusted-ca = /etc/etcd/ssl/ca.crt, client-cert-auth = true, crl-file =
2022-07-13 02:40:42.175488 I | embed: listening for peers on [::]:2380
raft2022/07/13 02:40:42 INFO: 4cce519c2c414788 is starting a new election at term 5
raft2022/07/13 02:40:42 INFO: 4cce519c2c414788 became candidate at term 6
raft2022/07/13 02:40:42 INFO: 4cce519c2c414788 received MsgVoteResp from 4cce519c2c414788 at term 6
raft2022/07/13 02:40:42 INFO: 4cce519c2c414788 became leader at term 6
raft2022/07/13 02:40:42 INFO: raft.node: 4cce519c2c414788 elected leader 4cce519c2c414788 at term 6
2022-07-13 02:40:42.758779 I | etcdserver: published {Name:blue110.blue.qa.opsware.com ClientURLs:[https://blue110.blue.qa.opsware.com:4001]} to cluster 84b2ac18d71aa314
2022-07-13 02:40:42.758829 I | embed: ready to serve client requests
2022-07-13 02:40:42.761378 I | embed: serving client requests on [::]:4001
2022-07-13 02:50:42.793514 I | mvcc: store.index: compact 1149345
2022-07-13 02:50:42.877389 I | mvcc: finished scheduled compaction at 1149345 (took 81.743788ms)
2022-07-13 02:55:42.802281 I | mvcc: store.index: compact 1149901
2022-07-13 02:55:42.828604 I | mvcc: finished scheduled compaction at 1149901 (took 24.536359ms)
2022-07-13 03:00:42.809280 I | mvcc: store.index: compact 1150456
2022-07-13 03:00:42.836157 I | mvcc: finished scheduled compaction at 1150456 (took 24.868489ms)
2022-07-13 03:05:42.817620 I | mvcc: store.index: compact 1151014
2022-07-13 03:05:42.865690 I | mvcc: finished scheduled compaction at 1151014 (took 46.255025ms)
2022-07-13 03:10:42.823480 I | mvcc: store.index: compact 1151570
2022-07-13 03:10:42.856603 I | mvcc: finished scheduled compaction at 1151570 (took 31.01357ms)
2022-07-13 03:15:42.830422 I | mvcc: store.index: compact 1152126
2022-07-13 03:15:42.855781 I | mvcc: finished scheduled compaction at 1152126 (took 23.56263ms)
2022-07-13 03:20:42.886659 I | mvcc: store.index: compact 1152681
2022-07-13 03:20:42.913195 I | mvcc: finished scheduled compaction at 1152681 (took 23.60644ms)
2022-07-13 03:25:42.895883 I | mvcc: store.index: compact 1153235
2022-07-13 03:25:42.921169 I | mvcc: finished scheduled compaction at 1153235 (took 23.460817ms)
^C
```
### Kubernetes version
```console
kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:""1"", Minor:""24"", GitVersion:""v1.24.2"", GitCommit:""f66044f4361b9f1f96f0053dd46cb7dce5e990a8"", GitTreeState:""clean"", BuildDate:""2022-06-15T14:22:29Z"", GoVersion:""go1.18.3"", Compiler:""gc"", Platform:""linux/amd64""}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:""1"", Minor:""24"", GitVersion:""v1.24.2"", GitCommit:""f66044f4361b9f1f96f0053dd46cb7dce5e990a8"", GitTreeState:""clean"", BuildDate:""2022-06-15T14:15:38Z"", GoVersion:""go1.18.3"", Compiler:""gc"", Platform:""linux/amd64""}
```
### Cloud provider
N/A, manually installed k8s
### OS version
```console
# On Linux:
$ cat /etc/os-release
Red Hat Enterprise Linux release 8.2 (Ootpa)
$ cat /etc/os-release
NAME=""Red Hat Enterprise Linux""
VERSION=""8.2 (Ootpa)""
ID=""rhel""
ID_LIKE=""fedora""
VERSION_ID=""8.2""
PLATFORM_ID=""platform:el8""
PRETTY_NAME=""Red Hat Enterprise Linux 8.2 (Ootpa)""
ANSI_COLOR=""0;31""
CPE_NAME=""cpe:/o:redhat:enterprise_linux:8.2:GA""
HOME_URL=""https://www.redhat.com/""
BUG_REPORT_URL=""https://bugzilla.redhat.com/""
REDHAT_BUGZILLA_PRODUCT=""Red Hat Enterprise Linux 8""
REDHAT_BUGZILLA_PRODUCT_VERSION=8.2
REDHAT_SUPPORT_PRODUCT=""Red Hat Enterprise Linux""
REDHAT_SUPPORT_PRODUCT_VERSION=""8.2""
$ uname -a
Linux 4.18.0-193.el8.x86_64 #1 SMP Fri Mar 27 14:35:58 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
```
### Install tools
installed manually
### Container runtime (CRI) and version (if applicable)
containerd github.com/containerd/containerd v1.5.5 72cec4be58a9eb6b2910f5d10f1c01ca47d231c0
### Related plugins (CNI, CSI, ...) and versions (if applicable)
",True,"Kube-apiserver 1.24 CPU is pretty high compared to 1.23 and not stable - ### What happened?
1) kube-apiserver takes almost 20 minutes to start
2) kube-apiserver takes very high CPU compared to Kubernetes 1.23
3) kube-apiserver prints many errors when processing incoming requests
```
E0713 02:42:47.862757 1 wrap.go:53] timeout or abort while handling: method=GET URI=""/api/v1/namespaces/kube-system/pods?limit=500&resourceVersion=0"" audit-ID=""642da75a-2c2b-4fb2-a744-56064c151da3""
E0713 02:42:47.862839 1 wrap.go:53] timeout or abort while handling: method=GET URI=""/api/v1/namespaces/kube-system/services?limit=500&resourceVersion=0"" audit-ID=""5a4b2eca-fe36-4391-80df-714081520bad""
E0713 02:42:47.862888 1 wrap.go:53] timeout or abort while handling: method=GET URI=""/api/v1/namespaces/kube-system/endpoints?limit=500&resourceVersion=0"" audit-ID=""cecf5bbe-2957-4a53-9d38-dd33dc13a90d""
E0713 02:42:47.862923 1 wrap.go:53] timeout or abort while handling: method=GET URI=""/api/v1/namespaces/core/endpoints?limit=500&resourceVersion=0"" audit-ID=""59f4be57-1c39-4119-80f4-ac4d6089cab5""
E0713 02:42:47.862956 1 wrap.go:53] timeout or abort while handling: method=GET URI=""/api/v1/namespaces/core/pods?limit=500&resourceVersion=0"" audit-ID=""f441c778-2419-4c12-9a6c-7e030500acbc""
E0713 02:42:47.863771 1 writers.go:118] apiserver was unable to write a JSON response: http: Handler timeout
E0713 02:42:47.863893 1 writers.go:118] apiserver was unable to write a JSON response: http2: stream closed
E0713 02:42:47.863941 1 wrap.go:53] timeout or abort while handling: method=GET URI=""/api/v1/namespaces/dca/pods?limit=500&resourceVersion=0"" audit-ID=""b105e20f-b08f-4570-a11c-b6cc68a1eddc""
E0713 02:42:47.877597 1 timeout.go:141] post-timeout activity - time-elapsed: 14.675542ms, GET ""/api/v1/namespaces/kube-system/endpoints"" result:
E0713 02:42:47.877645 1 timeout.go:141] post-timeout activity - time-elapsed: 14.781207ms, GET ""/api/v1/namespaces/kube-system/services"" result:
E0713 02:42:47.877680 1 timeout.go:141] post-timeout activity - time-elapsed: 14.741713ms, GET ""/api/v1/namespaces/core/endpoints"" result:
E0713 02:42:47.893808 1 wrap.go:53] timeout or abort while handling: method=GET URI=""/api/v1/namespaces/core/pods?limit=500&resourceVersion=0"" audit-ID=""2d3a6ac8-465a-41e5-a6bd-947e1f2aaae5""
E0713 02:42:47.897656 1 writers.go:118] apiserver was unable to write a JSON response: http2: stream closed
E0713 02:42:47.924599 1 timeout.go:141] post-timeout activity - time-elapsed: 6.327µs, GET ""/api/v1/namespaces/kube-system/pods"" result:
E0713 02:42:47.926963 1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:""http: Handler timeout""}: http: Handler timeout
E0713 02:42:47.933950 1 writers.go:131] apiserver was unable to write a fallback JSON response: http: Handler timeout
```
**CPU usage in 1.23 and 1.24.2**
policy-manager is a business pod which doesn't call the kube-apiserver

### What did you expect to happen?
1) kube-apiserver should start as quickly as it did in 1.23.
2) kube-apiserver should process requests as before, with no timeout errors printed.
### How can we reproduce it (as minimally and precisely as possible)?
1) install kube-apiserver 1.24.2
2) start some pods which consume high CPU but do not call the Kubernetes apiserver
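For triage, the `wrap.go` timeout lines in the log excerpt above can be tallied per endpoint with a short script (a minimal sketch; the regular expression only assumes the line format shown in the logs above):

```python
import re
from collections import Counter

# Matches the wrap.go "timeout or abort while handling" lines, capturing the
# HTTP method and the request path so timeouts can be grouped per endpoint.
TIMEOUT_RE = re.compile(r'timeout or abort while handling: method=(\w+) URI=\W*(/[\w/.-]+)')

def tally_timeouts(log_lines):
    # Count handler timeouts per (method, path) from apiserver log lines.
    counts = Counter()
    for line in log_lines:
        m = TIMEOUT_RE.search(line)
        if m:
            counts[(m.group(1), m.group(2))] += 1
    return counts

sample = [
    'E0713 02:42:47.862757 1 wrap.go:53] timeout or abort while handling: method=GET URI=""/api/v1/namespaces/kube-system/pods?limit=500&resourceVersion=0""',
    'E0713 02:42:47.862888 1 wrap.go:53] timeout or abort while handling: method=GET URI=""/api/v1/namespaces/kube-system/endpoints?limit=500&resourceVersion=0""',
]
print(tally_timeouts(sample))
```

Running it over the full apiserver log would show whether the timeouts are concentrated on a few list endpoints or spread evenly.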
### Anything else we need to know?
when the kube-apiserver errors happen, etcd works fine
**etcd logs are below**
```
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2022-07-13 02:40:39.467729 I | etcdmain: etcd Version: 3.4.16
2022-07-13 02:40:39.467844 I | etcdmain: Git SHA: d19fbe541
2022-07-13 02:40:39.467853 I | etcdmain: Go Version: go1.12.17
2022-07-13 02:40:39.467861 I | etcdmain: Go OS/Arch: linux/amd64
2022-07-13 02:40:39.467870 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4
2022-07-13 02:40:39.467957 N | etcdmain: the server is already initialized as member before, starting as etcd member...
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2022-07-13 02:40:39.468011 I | embed: peerTLS: cert = /etc/etcd/ssl/etcd-server.crt, key = /etc/etcd/ssl/etcd-server.key, trusted-ca = /etc/etcd/ssl/ca.crt, client-cert-auth = true, crl-file =
2022-07-13 02:40:39.469236 I | embed: name = blue110.blue.qa.opsware.com
2022-07-13 02:40:39.469256 I | embed: data dir = /var/etcd/data
2022-07-13 02:40:39.469266 I | embed: member dir = /var/etcd/data/member
2022-07-13 02:40:39.469275 I | embed: heartbeat = 100ms
2022-07-13 02:40:39.469286 I | embed: election = 1000ms
2022-07-13 02:40:39.469295 I | embed: snapshot count = 100000
2022-07-13 02:40:39.469310 I | embed: advertise client URLs = https://blue110.blue.qa.opsware.com:4001
2022-07-13 02:40:39.469321 I | embed: initial advertise peer URLs = https://blue110.blue.qa.opsware.com:2380
2022-07-13 02:40:39.469333 I | embed: initial cluster =
2022-07-13 02:40:41.539279 I | etcdserver: recovered store from snapshot at index 1200012
2022-07-13 02:40:41.539973 I | mvcc: restore compact to 1147344
2022-07-13 02:40:42.153077 I | etcdserver: restarting member 4cce519c2c414788 in cluster 84b2ac18d71aa314 at commit index 1250116
raft2022/07/13 02:40:42 INFO: 4cce519c2c414788 switched to configuration voters=(5534450723284141960)
raft2022/07/13 02:40:42 INFO: 4cce519c2c414788 became follower at term 5
raft2022/07/13 02:40:42 INFO: newRaft 4cce519c2c414788 [peers: [4cce519c2c414788], term: 5, commit: 1250116, applied: 1200012, lastindex: 1250116, lastterm: 5]
2022-07-13 02:40:42.155576 I | etcdserver/api: enabled capabilities for version 3.4
2022-07-13 02:40:42.155608 I | etcdserver/membership: added member 4cce519c2c414788 [https://blue110.blue.qa.opsware.com:2380] to cluster 84b2ac18d71aa314 from store
2022-07-13 02:40:42.155624 I | etcdserver/membership: set the cluster version to 3.4 from store
2022-07-13 02:40:42.156393 W | auth: simple token is not cryptographically signed
2022-07-13 02:40:42.157122 I | mvcc: restore compact to 1147344
2022-07-13 02:40:42.171033 I | etcdserver: starting server... [version: 3.4.16, cluster version: 3.4]
2022-07-13 02:40:42.171652 I | etcdserver: 4cce519c2c414788 as single-node; fast-forwarding 9 ticks (election ticks 10)
2022-07-13 02:40:42.175260 I | embed: ClientTLS: cert = /etc/etcd/ssl/etcd-server.crt, key = /etc/etcd/ssl/etcd-server.key, trusted-ca = /etc/etcd/ssl/ca.crt, client-cert-auth = true, crl-file =
2022-07-13 02:40:42.175488 I | embed: listening for peers on [::]:2380
raft2022/07/13 02:40:42 INFO: 4cce519c2c414788 is starting a new election at term 5
raft2022/07/13 02:40:42 INFO: 4cce519c2c414788 became candidate at term 6
raft2022/07/13 02:40:42 INFO: 4cce519c2c414788 received MsgVoteResp from 4cce519c2c414788 at term 6
raft2022/07/13 02:40:42 INFO: 4cce519c2c414788 became leader at term 6
raft2022/07/13 02:40:42 INFO: raft.node: 4cce519c2c414788 elected leader 4cce519c2c414788 at term 6
2022-07-13 02:40:42.758779 I | etcdserver: published {Name:blue110.blue.qa.opsware.com ClientURLs:[https://blue110.blue.qa.opsware.com:4001]} to cluster 84b2ac18d71aa314
2022-07-13 02:40:42.758829 I | embed: ready to serve client requests
2022-07-13 02:40:42.761378 I | embed: serving client requests on [::]:4001
2022-07-13 02:50:42.793514 I | mvcc: store.index: compact 1149345
2022-07-13 02:50:42.877389 I | mvcc: finished scheduled compaction at 1149345 (took 81.743788ms)
2022-07-13 02:55:42.802281 I | mvcc: store.index: compact 1149901
2022-07-13 02:55:42.828604 I | mvcc: finished scheduled compaction at 1149901 (took 24.536359ms)
2022-07-13 03:00:42.809280 I | mvcc: store.index: compact 1150456
2022-07-13 03:00:42.836157 I | mvcc: finished scheduled compaction at 1150456 (took 24.868489ms)
2022-07-13 03:05:42.817620 I | mvcc: store.index: compact 1151014
2022-07-13 03:05:42.865690 I | mvcc: finished scheduled compaction at 1151014 (took 46.255025ms)
2022-07-13 03:10:42.823480 I | mvcc: store.index: compact 1151570
2022-07-13 03:10:42.856603 I | mvcc: finished scheduled compaction at 1151570 (took 31.01357ms)
2022-07-13 03:15:42.830422 I | mvcc: store.index: compact 1152126
2022-07-13 03:15:42.855781 I | mvcc: finished scheduled compaction at 1152126 (took 23.56263ms)
2022-07-13 03:20:42.886659 I | mvcc: store.index: compact 1152681
2022-07-13 03:20:42.913195 I | mvcc: finished scheduled compaction at 1152681 (took 23.60644ms)
2022-07-13 03:25:42.895883 I | mvcc: store.index: compact 1153235
2022-07-13 03:25:42.921169 I | mvcc: finished scheduled compaction at 1153235 (took 23.460817ms)
^C
```
### Kubernetes version
```console
kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:""1"", Minor:""24"", GitVersion:""v1.24.2"", GitCommit:""f66044f4361b9f1f96f0053dd46cb7dce5e990a8"", GitTreeState:""clean"", BuildDate:""2022-06-15T14:22:29Z"", GoVersion:""go1.18.3"", Compiler:""gc"", Platform:""linux/amd64""}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:""1"", Minor:""24"", GitVersion:""v1.24.2"", GitCommit:""f66044f4361b9f1f96f0053dd46cb7dce5e990a8"", GitTreeState:""clean"", BuildDate:""2022-06-15T14:15:38Z"", GoVersion:""go1.18.3"", Compiler:""gc"", Platform:""linux/amd64""}
```
### Cloud provider
N/A, manually installed k8s
### OS version
```console
# On Linux:
$ cat /etc/os-release
Red Hat Enterprise Linux release 8.2 (Ootpa)
$ cat /etc/os-release
NAME=""Red Hat Enterprise Linux""
VERSION=""8.2 (Ootpa)""
ID=""rhel""
ID_LIKE=""fedora""
VERSION_ID=""8.2""
PLATFORM_ID=""platform:el8""
PRETTY_NAME=""Red Hat Enterprise Linux 8.2 (Ootpa)""
ANSI_COLOR=""0;31""
CPE_NAME=""cpe:/o:redhat:enterprise_linux:8.2:GA""
HOME_URL=""https://www.redhat.com/""
BUG_REPORT_URL=""https://bugzilla.redhat.com/""
REDHAT_BUGZILLA_PRODUCT=""Red Hat Enterprise Linux 8""
REDHAT_BUGZILLA_PRODUCT_VERSION=8.2
REDHAT_SUPPORT_PRODUCT=""Red Hat Enterprise Linux""
REDHAT_SUPPORT_PRODUCT_VERSION=""8.2""
$ uname -a
Linux 4.18.0-193.el8.x86_64 #1 SMP Fri Mar 27 14:35:58 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
```
### Install tools
installed manually
### Container runtime (CRI) and version (if applicable)
containerd github.com/containerd/containerd v1.5.5 72cec4be58a9eb6b2910f5d10f1c01ca47d231c0
### Related plugins (CNI, CSI, ...) and versions (if applicable)
",1,kube apiserver cpu is pretty high compared to and not stable what happened kube apiserver take almost minute to start kube apiserver take very high cpu compared to kubernetes kube apiserver print many error when processing incoming requests wrap go timeout or abort while handling method get uri api namespaces kube system pods limit resourceversion audit id wrap go timeout or abort while handling method get uri api namespaces kube system services limit resourceversion audit id wrap go timeout or abort while handling method get uri api namespaces kube system endpoints limit resourceversion audit id wrap go timeout or abort while handling method get uri api namespaces core endpoints limit resourceversion audit id wrap go timeout or abort while handling method get uri api namespaces core pods limit resourceversion audit id writers go apiserver was unable to write a json response http handler timeout writers go apiserver was unable to write a json response stream closed wrap go timeout or abort while handling method get uri api namespaces dca pods limit resourceversion audit id timeout go post timeout activity time elapsed get api namespaces kube system endpoints result timeout go post timeout activity time elapsed get api namespaces kube system services result timeout go post timeout activity time elapsed get api namespaces core endpoints result wrap go timeout or abort while handling method get uri api namespaces core pods limit resourceversion audit id writers go apiserver was unable to write a json response stream closed timeout go post timeout activity time elapsed get api namespaces kube system pods result status go apiserver received an error that is not an status errors errorstring s http handler timeout http handler timeout writers go apiserver was unable to write a fallback json response http handler timeout cpu usage in and policy manager is business pod which don t call kube apiserver what did you expect to happen kube apiserver should not take too much 
time to start as kube apiserver should process request as before no timeout errors printed how can we reproduce it as minimally and precisely as possible install kube apiserver start some pods which take high cpu but this pod would not call the kubernete apiserver anything else we need to know when kube apiserver error happens etcd works fine etcd logs are below deprecated logger capnslog flag is set use logger zap flag instead i etcdmain etcd version i etcdmain git sha i etcdmain go version i etcdmain go os arch linux i etcdmain setting maximum number of cpus to total number of available cpus is n etcdmain the server is already initialized as member before starting as etcd member deprecated logger capnslog flag is set use logger zap flag instead i embed peertls cert etc etcd ssl etcd server crt key etc etcd ssl etcd server key trusted ca etc etcd ssl ca crt client cert auth true crl file i embed name blue qa opsware com i embed data dir var etcd data i embed member dir var etcd data member i embed heartbeat i embed election i embed snapshot count i embed advertise client urls i embed initial advertise peer urls i embed initial cluster i etcdserver recovered store from snapshot at index i mvcc restore compact to i etcdserver restarting member in cluster at commit index info switched to configuration voters info became follower at term info newraft term commit applied lastindex lastterm i etcdserver api enabled capabilities for version i etcdserver membership added member to cluster from store i etcdserver membership set the cluster version to from store w auth simple token is not cryptographically signed i mvcc restore compact to i etcdserver starting server i etcdserver as single node fast forwarding ticks election ticks i embed clienttls cert etc etcd ssl etcd server crt key etc etcd ssl etcd server key trusted ca etc etcd ssl ca crt client cert auth true crl file i embed listening for peers on info is starting a new election at term info became candidate at term 
info received msgvoteresp from at term info became leader at term info raft node elected leader at term i etcdserver published name blue qa opsware com clienturls to cluster i embed ready to serve client requests i embed serving client requests on i mvcc store index compact i mvcc finished scheduled compaction at took i mvcc store index compact i mvcc finished scheduled compaction at took i mvcc store index compact i mvcc finished scheduled compaction at took i mvcc store index compact i mvcc finished scheduled compaction at took i mvcc store index compact i mvcc finished scheduled compaction at took i mvcc store index compact i mvcc finished scheduled compaction at took i mvcc store index compact i mvcc finished scheduled compaction at took i mvcc store index compact i mvcc finished scheduled compaction at took c kubernetes version console kubectl version warning this version information is deprecated and will be replaced with the output from kubectl version short use output yaml json to get the full version client version version info major minor gitversion gitcommit gittreestate clean builddate goversion compiler gc platform linux kustomize version server version version info major minor gitversion gitcommit gittreestate clean builddate goversion compiler gc platform linux cloud provider na manual installed os version console on linux cat etc os release red hat enterprise linux release ootpa cat etc os release name red hat enterprise linux version ootpa id rhel id like fedora version id platform id platform pretty name red hat enterprise linux ootpa ansi color cpe name cpe o redhat enterprise linux ga home url bug report url redhat bugzilla product red hat enterprise linux redhat bugzilla product version redhat support product red hat enterprise linux redhat support product version uname a linux smp fri mar utc gnu linux install tools installed manually container runtime cri and version if applicable containerd github com containerd containerd related plugins 
cni csi and versions if applicable ,1
247,5715946429.0,IssuesEvent,2017-04-19 14:11:54,LeastAuthority/leastauthority.com,https://api.github.com/repos/LeastAuthority/leastauthority.com,closed,The Provisioning of New EC2s takes too long.,deployment reliability signup,"Even if we have a streamlined payment verification and debiting process, users are still forced to wait for a new EC2 to be properly set up.
There are some simple optimizations that can significantly shorten this wait.
(1) Get a recent AMI. This will make upgrades quicker and remove the need for a post-upgrade reboot (usually).
",True,"The Provisioning of New EC2s takes too long. - Even if we have a streamlined payment verification and debiting process, users are still forced to wait for a new EC2 to be properly set up.
There are some simple optimizations that can significantly shorten this wait.
(1) Get a recent AMI. This will make upgrades quicker and remove the need for a post-upgrade reboot (usually).
",1,the provisioning of new takes too long even if we have a streamlined payment verification and debiting process users are still forced to wait for a new to be properly set up there are some simple optimizations that can significantly shorten this wait get a recent ami this will make upgrade quicker and remove the need for a post upgrade reboot usually ,1
162,4755423530.0,IssuesEvent,2016-10-24 10:42:50,dotnet/roslyn,https://api.github.com/repos/dotnet/roslyn,closed,add DisasterRecoveryManager support before we crash.,Area-IDE Feature Request Tenet-Reliability,"right now, the disaster recovery manager doesn't expose an API for others to consume its functionality in VS.
we should make it expose that API and consume it in some cases where we crash VS. For some exceptions we might not be able to, but there will be cases where we can (e.g., a low-memory situation)
cc: @bertanaygun Bertan is the expert when we decide to implement it.",True,"add DisasterRecoveryManager support before we crash. - right now, the disaster recovery manager doesn't expose an API for others to consume its functionality in VS.
we should make it expose that API and consume it in some cases where we crash VS. For some exceptions we might not be able to, but there will be cases where we can (e.g., a low-memory situation)
cc: @bertanaygun Bertan is the expert when we decide to implement it.",1,add disasterrecoverymanager support before we crash right now disaster recovery manager doesnt expose api for others to consume its functionality in vs we should make it to expose that and consume its api in some cases where we crash vs in some exception we might not be able to but there will be cases where we can ex low mem situation cc bertanaygun bertan is the expert when we decide to implement it ,1
42,2873483725.0,IssuesEvent,2015-06-08 17:17:18,GoogleCloudPlatform/kubernetes,https://api.github.com/repos/GoogleCloudPlatform/kubernetes,opened,"Rate limiter in APIServer needs to take into account watch, exec, proxy, and logs",area/performance area/reliability,"The rate limiter in the apiserver is impacted by #8337 and also by the addition of exec/proxy/logs. Endpoints that hold long-running connections need to be independently rate limited. We probably need three buckets:
1. watch - required for intraserver components
2. exec/proxy/logs - used by clients and long running
3. everything else
https://github.com/GoogleCloudPlatform/kubernetes/issues/8337#issuecomment-109781954",True,"Rate limiter in APIServer needs to take into account watch, exec, proxy, and logs - The rate limiter in the apiserver is impacted by #8337 and also by the addition of exec/proxy/logs. Endpoints which hold long running connections need to be independently rate limited. We probably need three buckets:
1. watch - required for intraserver components
2. exec/proxy/logs - used by clients and long running
3. everything else
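The three independent buckets above could be sketched as separate token buckets, one per request class (an illustrative Python sketch, not the actual apiserver implementation; `RateBucket` and the capacities are made up here):

```python
import time

class RateBucket:
    # A simple token bucket: holds up to `capacity` tokens, refilled at
    # `rate` tokens per second; allow() spends one token per request.
    def __init__(self, capacity, rate):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One independent bucket per request class, so a flood of watches cannot
# starve exec/proxy/logs or ordinary short requests (capacities here are
# arbitrary illustration values).
buckets = {
    'watch': RateBucket(capacity=100, rate=50),
    'long-running': RateBucket(capacity=20, rate=5),   # exec/proxy/logs
    'default': RateBucket(capacity=400, rate=200),
}

def admit(request_class):
    return buckets.get(request_class, buckets['default']).allow()
```

Because each class has its own bucket, exhausting one (e.g. watches during a controller restart) leaves the other two unaffected.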
https://github.com/GoogleCloudPlatform/kubernetes/issues/8337#issuecomment-109781954",1,rate limiter in apiserver needs to take into account watch exec proxy and logs the rate limiter in the apiserver is impacted by and also by the addition of exec proxy logs endpoints which hold long running connections need to be independently rate limited we probably need three buckets watch required for intraserver components exec proxy logs used by clients and long running everything else ,1
60904,17023554266.0,IssuesEvent,2021-07-03 02:37:16,tomhughes/trac-tickets,https://api.github.com/repos/tomhughes/trac-tickets,closed,Nominatim does not remove POI that have their tags removed,Component: nominatim Priority: major Resolution: fixed Type: defect,"**[Submitted to the original trac issue database at 9.55am, Wednesday, 17th February 2010]**
A search in Germany for ""Allensbach"" on 2010-02-17 with Data: 2010/02/15 shows the following results:
Outdated node 257089879 and way 46034494 which had the tag ""name=Allensbach"" only until 2009-12-14.
It does not show the actual node 598664903, which has existed since 2009-12-23 with the tag ""name=Allensbach"".",1.0,"Nominatim does not remove POI that have their tags removed - **[Submitted to the original trac issue database at 9.55am, Wednesday, 17th February 2010]**
A search in Germany for ""Allensbach"" on 2010-02-17 with Data: 2010/02/15 shows the following results:
Outdated node 257089879 and way 46034494 which had the tag ""name=Allensbach"" only until 2009-12-14.
It does not show the actual node 598664903 which exist from 2009-12-23 with the tag ""name=Allensbach"".",0,nominatim does not remove poi that have their tags removed a search in germany for allensbach on with data shows following results outdated node and way which had the tag name allensbach only until it does not show the actual node which exist from with the tag name allensbach ,0
215557,16608786467.0,IssuesEvent,2021-06-02 08:56:32,DatalogiForAlle/MarketSim,https://api.github.com/repos/DatalogiForAlle/MarketSim,opened,Documentation for deployment process,deployment documentation,Write some simple documentation for how our site is deployed.,1.0,Documentation for deployment process - Write some simple documentation for how our site is deployed.,0,documentation for deployment process write some simple documentation for how our site is deployed ,0
174565,27663430038.0,IssuesEvent,2023-03-12 19:37:04,APPSCHOOL1-REPO/finalproject-gitspace,https://api.github.com/repos/APPSCHOOL1-REPO/finalproject-gitspace,closed,[Design] Change the color values in the GSTextEditor design system,🎨 Design 🛠️ Fix,"### 📝 Purpose
- Reflect the design confirmation results in the TextEditor colors, and change the ColorScheme condition to use the Asset's Darkmode colors
---
### 🛠️ Tasks
* [x] Change the magnifier icon color
* [x] Change the TextEditor background color
* [x] Remove the ColorScheme ternary operator
",1.0,"[Design] Change the color values in the GSTextEditor design system - ### 📝 Purpose
- Reflect the design confirmation results in the TextEditor colors, and change the ColorScheme condition to use the Asset's Darkmode colors
---
### 🛠️ Tasks
* [x] Change the magnifier icon color
* [x] Change the TextEditor background color
* [x] Remove the ColorScheme ternary operator
",0, gstexteditor 디자인 시스템의 색상 값을 변경합니다 📝 작업 목적 디자인 컨펌 결과를 texteditor 색상에 반영하고 colorscheme 조건을 asset의 darkmode 색상으로 변경합니다 🛠️ tasks 돋보기 아이콘 색상 변경 texteditor 백그라운드 색상 변경 colorscheme 삼항 연산자 제거 ,0
675641,23100704323.0,IssuesEvent,2022-07-27 02:14:16,yugabyte/yugabyte-db,https://api.github.com/repos/yugabyte/yugabyte-db,closed,sockets_to_add_.empty() check failed during test run,kind/bug area/docdb priority/medium community/request,"Jira Link: [DB-2229](https://yugabyte.atlassian.net/browse/DB-2229)
I was running
./yb_build.sh release --ctest
There were 44 test failures. e.g.
```
220 - client_ql-dml-ttl-test (Failed)
221 - client_ql-list-test (Failed)
222 - client_ql-tablet-test (Failed)
223 - client_ql-transaction-test (Failed)
224 - client_ql-stress-test (Failed)
225 - client_seal-txn-test (Failed)
226 - client_snapshot-txn-test (Failed)
227 - client_serializable-txn-test (Failed)
```
Looking at a few of the failure details files, such as:
build/release-clang-dynamic/yb-test-logs/tests-pgwrapper__pg_mini-test/PgMiniTest_CreateDatabase.fatal_failure_details.2020-08-09T09_41_11.pid77420.txt
```
F20200809 09:41:11 /Users/zhihongyu/yugabyte-db/src/yb/rpc/acceptor.cc:121] Check failed: sockets_to_add_.empty()
@ 0x114a73cf0 google::LogDestination::LogToSinks()
@ 0x114a72d1a google::LogMessage::SendToLog()
@ 0x114a736c5 google::LogMessage::Flush()
@ 0x114a7858f google::LogMessageFatal::~LogMessageFatal()
@ 0x114a74699 google::LogMessageFatal::~LogMessageFatal()
@ 0x113f7a867 yb::rpc::Acceptor::Shutdown()
@ 0x113f7a4b3 yb::rpc::Acceptor::~Acceptor()
@ 0x113f95b37 yb::rpc::Messenger::ShutdownAcceptor()
@ 0x112b56ad0 yb::server::RpcServer::Shutdown()
```
",1.0,"sockets_to_add_.empty() check failed during test run - Jira Link: [DB-2229](https://yugabyte.atlassian.net/browse/DB-2229)
I was running
./yb_build.sh release --ctest
There were 44 test failures. e.g.
```
220 - client_ql-dml-ttl-test (Failed)
221 - client_ql-list-test (Failed)
222 - client_ql-tablet-test (Failed)
223 - client_ql-transaction-test (Failed)
224 - client_ql-stress-test (Failed)
225 - client_seal-txn-test (Failed)
226 - client_snapshot-txn-test (Failed)
227 - client_serializable-txn-test (Failed)
```
Looking at a few of the failure details files, such as:
build/release-clang-dynamic/yb-test-logs/tests-pgwrapper__pg_mini-test/PgMiniTest_CreateDatabase.fatal_failure_details.2020-08-09T09_41_11.pid77420.txt
```
F20200809 09:41:11 /Users/zhihongyu/yugabyte-db/src/yb/rpc/acceptor.cc:121] Check failed: sockets_to_add_.empty()
@ 0x114a73cf0 google::LogDestination::LogToSinks()
@ 0x114a72d1a google::LogMessage::SendToLog()
@ 0x114a736c5 google::LogMessage::Flush()
@ 0x114a7858f google::LogMessageFatal::~LogMessageFatal()
@ 0x114a74699 google::LogMessageFatal::~LogMessageFatal()
@ 0x113f7a867 yb::rpc::Acceptor::Shutdown()
@ 0x113f7a4b3 yb::rpc::Acceptor::~Acceptor()
@ 0x113f95b37 yb::rpc::Messenger::ShutdownAcceptor()
@ 0x112b56ad0 yb::server::RpcServer::Shutdown()
```
",0,sockets to add empty check failed during test run jira link i was running yb build sh release ctest there were test failures e g client ql dml ttl test failed client ql list test failed client ql tablet test failed client ql transaction test failed client ql stress test failed client seal txn test failed client snapshot txn test failed client serializable txn test failed looking at few failure details files such as build release clang dynamic yb test logs tests pgwrapper pg mini test pgminitest createdatabase fatal failure details txt users zhihongyu yugabyte db src yb rpc acceptor cc check failed sockets to add empty google logdestination logtosinks google logmessage sendtolog google logmessage flush google logmessagefatal logmessagefatal google logmessagefatal logmessagefatal yb rpc acceptor shutdown yb rpc acceptor acceptor yb rpc messenger shutdownacceptor yb server rpcserver shutdown ,0
85902,16759684493.0,IssuesEvent,2021-06-13 14:27:42,joomla/joomla-cms,https://api.github.com/repos/joomla/joomla-cms,closed,"[RFC] Add parameter ""Inform Super Users"" if privacy consent of other users expired",No Code Attached Yet RFC,"### Is your feature request related to a problem? Please describe.
- From German forum: A site with x00 users.
- Whenever a privacy consent of one of these users has expired, all Super Users with the setting ""Receive System Emails"" receive a private message ""Privacy consent has expired for %1$s.""
- And they receive an email when ""mail_on_new"" is activated in com_messages.
### Describe the solution you'd like
- A switch in plugin plg_system_privacyconsent to disable/enable this private message behavior.
### Additional context
- A user will be informed about expired consents when trying to login. Isn't that sufficient?",1.0,"[RFC] Add parameter ""Inform Super Users"" if privacy consent of other users expired - ### Is your feature request related to a problem? Please describe.
- From German forum: A site with x00 users.
- Whenever a privacy consent of one of these users has expired, all Super Users with the setting ""Receive System Emails"" receive a private message ""Privacy consent has expired for %1$s.""
- And they receive an email when ""mail_on_new"" is activated in com_messages.
### Describe the solution you'd like
- A switch in plugin plg_system_privacyconsent to disable/enable this private message behavior.
### Additional context
- A user will be informed about expired consents when trying to login. Isn't that sufficient?",0, add parameter inform super users if privacy consent of other users expired is your feature request related to a problem please describe from german forum a site with users whenever a privacy consent of one of these users has expired all super users with setting receive system emails are receiving a private message privacy consent has expired for s and are receiving an email when mail on new is activated in com messages describe the solution you d like a switch in plugin plg system privacyconsent to disable enable this private message behavior additional context a user will be informed about expired consents when trying to login isn t that sufficient ,0
468217,13463030587.0,IssuesEvent,2020-09-09 16:56:40,googleapis/releasetool,https://api.github.com/repos/googleapis/releasetool,closed,"Magic proxy is currently failing, resulting in labels not being removed post publication",priority: p2 type: bug,"#### Steps to reproduce
1. Merge a release PR, resulting in autorelease tagging a release.
2. It will enqueue a job which kicks off publication on kokoro.
3. When this job finishes, publication will succeed, but labels will not be removed; the following error will appear in the logs:
```bash
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://magic-github-proxy.endpoints.devrel-prod.cloud.goog/repos/googleapis/java-document-ai/issues/133/comments?
```
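One mitigation while the proxy is flaky would be to retry the label-removal call with exponential backoff (a generic sketch, not releasetool's actual code path; `with_retries` is a made-up helper):

```python
import time

def with_retries(fn, attempts=3, base_delay=0.5, retry_on=(Exception,)):
    # Call fn(), retrying with exponential backoff on the given exception
    # types; re-raises the last error once attempts are exhausted.
    for attempt in range(attempts):
        try:
            return fn()
        except retry_on:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

With something like this around the label cleanup, a transient proxy failure would not silently leave the labels in place.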
",1.0,"Magic proxy is currently failing, resulting in labels not being removed post publication - #### Steps to reproduce
1. Merge a release PR, resulting in autorelease tagging a release.
2. It will enqueue a job which kicks off publication on kokoro.
3. When this job finishes, publication will succeed, but labels will not be removed; the following error will appear in the logs:
```bash
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://magic-github-proxy.endpoints.devrel-prod.cloud.goog/repos/googleapis/java-document-ai/issues/133/comments?
```
",0,magic proxy is currently failing resulting in labels not being removed post publication steps to reproduce merge a release pr resulting in autorelease tagging a release it will enqueue a job which kicks off publication on kokoro when this job finishes publication will succeed but labels will not be released the following error will be in logs bash requests exceptions httperror client error bad request for url ,0
991,12013753828.0,IssuesEvent,2020-04-10 09:37:30,dotnet/runtime,https://api.github.com/repos/dotnet/runtime,closed,Infinite spin lock in Encoding.GetEncoding,area-System.Threading tenet-reliability untriaged,"We've encountered weird deadlock/live lock on .NET Core 3.1 and captured a memory dump of it. The dump may contain sensitive information and hence I am not comfortable sharing it publicly but I will be happy to share it privately or dump specific information from it.
Three threads are racing on the same spin lock which is marked as locked but there doesn't seem to be any thread actually locking it.
Stack traces for the thread look like this:
Thread 18060:
```
System.Private.CoreLib.dll!System.Threading.ReaderWriterLockSlim.SpinLock.EnterSpin(System.Threading.ReaderWriterLockSlim.EnterSpinLockReason reason) Line 1606 C#
System.Private.CoreLib.dll!System.Threading.ReaderWriterLockSlim.TryEnterUpgradeableReadLockCore(System.Threading.ReaderWriterLockSlim.TimeoutTracker timeout) Line 681 C#
System.Text.Encoding.CodePages.dll!System.Text.EncodingTable.GetCodePageFromName(string name) Line 26 C#
System.Text.Encoding.CodePages.dll!System.Text.CodePagesEncodingProvider.GetEncoding(string name) Line 87 C#
System.Private.CoreLib.dll!System.Text.EncodingProvider.GetEncodingFromProvider(string encodingName) Line 94 C#
System.Private.CoreLib.dll!System.Text.Encoding.GetEncoding(string name) Line 315 C#
MailClient.Mail.dll!MailClient.Mail.MailHeaderDictionary.MailHeaderDictionary(System.IO.Stream stream) Unknown
```
looping with `spinIndex = 556840848`.
Thread 22080:
```
> System.Private.CoreLib.dll!System.Threading.ReaderWriterLockSlim.SpinLock.EnterSpin(System.Threading.ReaderWriterLockSlim.EnterSpinLockReason reason) Line 1608 C#
System.Private.CoreLib.dll!System.Threading.ReaderWriterLockSlim.TryEnterUpgradeableReadLockCore(System.Threading.ReaderWriterLockSlim.TimeoutTracker timeout) Line 681 C#
System.Text.Encoding.CodePages.dll!System.Text.EncodingTable.GetCodePageFromName(string name) Line 26 C#
System.Text.Encoding.CodePages.dll!System.Text.CodePagesEncodingProvider.GetEncoding(string name) Line 87 C#
System.Private.CoreLib.dll!System.Text.EncodingProvider.GetEncodingFromProvider(string encodingName) Line 94 C#
System.Net.Mail.dll!System.Net.Mail.MailAddress.MailAddress(string address, string displayName, System.Text.Encoding displayNameEncoding) Line 76 C#
System.Net.Mail.dll!System.Net.Mail.MailAddress.MailAddress(string address) Line 46 C#
```
looping with `spinIndex = 0`.
Thread 23232:
```
System.Private.CoreLib.dll!System.Threading.ReaderWriterLockSlim.SpinLock.EnterSpin(System.Threading.ReaderWriterLockSlim.EnterSpinLockReason reason) Line 1606 C#
System.Private.CoreLib.dll!System.Threading.ReaderWriterLockSlim.TryEnterUpgradeableReadLockCore(System.Threading.ReaderWriterLockSlim.TimeoutTracker timeout) Line 681 C#
System.Text.Encoding.CodePages.dll!System.Text.EncodingTable.GetCodePageFromName(string name) Line 26 C#
System.Text.Encoding.CodePages.dll!System.Text.CodePagesEncodingProvider.GetEncoding(string name) Line 87 C#
System.Private.CoreLib.dll!System.Text.EncodingProvider.GetEncodingFromProvider(string encodingName) Line 94 C#
> System.Private.CoreLib.dll!System.Text.Encoding.GetEncoding(string name) Line 315 C#
System.Net.Http.dll!System.Net.Http.HttpContent.ReadBufferAsString(System.ArraySegment buffer, System.Net.Http.Headers.HttpContentHeaders headers) Unknown
System.Net.Http.dll!System.Net.Http.HttpContent.ReadBufferedContentAsString() Unknown
System.Net.Http.dll!System.Net.Http.HttpContent.ReadAsStringAsync.AnonymousMethod__36_0(System.Net.Http.HttpContent s) Unknown
System.Net.Http.dll!System.Net.Http.HttpContent.WaitAndReturnAsync(System.Threading.Tasks.Task waitTask, System.Net.Http.HttpContent state, System.Func returnFunc) Unknown
System.Net.Http.dll!System.Net.Http.HttpContent.ReadAsStringAsync() Unknown
```
looping with `spinIndex = 552403664`.
`EncodingTable.s_cacheLock` object looks like this:
| Name | Value | Type
-- | -- | -- | --
| CurrentReadCount | 0 | int
| HasNoWaiters | true | bool
| IsReadLockHeld | false | bool
| IsUpgradeableReadLockHeld | false | bool
| IsWriteLockHeld | false | bool
| RecursionPolicy | NoRecursion | System.Threading.LockRecursionPolicy
| RecursiveReadCount | 0 | int
| RecursiveUpgradeCount | 0 | int
| RecursiveWriteCount | 0 | int
| WaitingReadCount | 0 | int
| WaitingUpgradeCount | 0 | int
| WaitingWriteCount | 0 | int
| _fDisposed | false | bool
| _fIsReentrant | false | bool
| _fUpgradeThreadHoldingRead | false | bool
| _lockID | 2 | long
| _numReadWaiters | 0 | uint
| _numUpgradeWaiters | 0 | uint
| _numWriteUpgradeWaiters | 0 | uint
| _numWriteWaiters | 0 | uint
| _owners | 0 | uint
▶ | _readEvent | null | System.Threading.EventWaitHandle
▶ | _spinLock | {System.Threading.ReaderWriterLockSlim.SpinLock} | System.Threading.ReaderWriterLockSlim.SpinLock
▶ | _upgradeEvent | null | System.Threading.EventWaitHandle
| _upgradeLockOwnerId | -1 | int
▶ | _waitUpgradeEvent | null | System.Threading.EventWaitHandle
| _waiterStates | NoWaiters | System.Threading.ReaderWriterLockSlim.WaiterStates
▶ | _writeEvent | null | System.Threading.EventWaitHandle
| _writeLockOwnerId | -1 | int
It seems that the lock object is not held by anything but it seems to spin three threads anyway.
For completeness, this is how the `SpinLock` object looks:
| Name | Value | Type
-- | -- | -- | --
| EnterForEnterAnyReadDeprioritizedCount | 0 | ushort
| EnterForEnterAnyWriteDeprioritizedCount | 0 | ushort
| _enterDeprioritizationState | 0 | int
| _isLocked | 1 | int
",True,"Infinite spin lock in Encoding.GetEncoding - We've encountered weird deadlock/live lock on .NET Core 3.1 and captured a memory dump of it. The dump may contain sensitive information and hence I am not comfortable sharing it publicly but I will be happy to share it privately or dump specific information from it.
Three threads are racing on the same spin lock which is marked as locked but there doesn't seem to be any thread actually locking it.
Stack traces for the thread look like this:
Thread 18060:
```
System.Private.CoreLib.dll!System.Threading.ReaderWriterLockSlim.SpinLock.EnterSpin(System.Threading.ReaderWriterLockSlim.EnterSpinLockReason reason) Line 1606 C#
System.Private.CoreLib.dll!System.Threading.ReaderWriterLockSlim.TryEnterUpgradeableReadLockCore(System.Threading.ReaderWriterLockSlim.TimeoutTracker timeout) Line 681 C#
System.Text.Encoding.CodePages.dll!System.Text.EncodingTable.GetCodePageFromName(string name) Line 26 C#
System.Text.Encoding.CodePages.dll!System.Text.CodePagesEncodingProvider.GetEncoding(string name) Line 87 C#
System.Private.CoreLib.dll!System.Text.EncodingProvider.GetEncodingFromProvider(string encodingName) Line 94 C#
System.Private.CoreLib.dll!System.Text.Encoding.GetEncoding(string name) Line 315 C#
MailClient.Mail.dll!MailClient.Mail.MailHeaderDictionary.MailHeaderDictionary(System.IO.Stream stream) Unknown
```
looping with `spinIndex = 556840848`.
Thread 22080:
```
> System.Private.CoreLib.dll!System.Threading.ReaderWriterLockSlim.SpinLock.EnterSpin(System.Threading.ReaderWriterLockSlim.EnterSpinLockReason reason) Line 1608 C#
System.Private.CoreLib.dll!System.Threading.ReaderWriterLockSlim.TryEnterUpgradeableReadLockCore(System.Threading.ReaderWriterLockSlim.TimeoutTracker timeout) Line 681 C#
System.Text.Encoding.CodePages.dll!System.Text.EncodingTable.GetCodePageFromName(string name) Line 26 C#
System.Text.Encoding.CodePages.dll!System.Text.CodePagesEncodingProvider.GetEncoding(string name) Line 87 C#
System.Private.CoreLib.dll!System.Text.EncodingProvider.GetEncodingFromProvider(string encodingName) Line 94 C#
System.Net.Mail.dll!System.Net.Mail.MailAddress.MailAddress(string address, string displayName, System.Text.Encoding displayNameEncoding) Line 76 C#
System.Net.Mail.dll!System.Net.Mail.MailAddress.MailAddress(string address) Line 46 C#
```
looping with `spinIndex = 0`.
Thread 23232:
```
System.Private.CoreLib.dll!System.Threading.ReaderWriterLockSlim.SpinLock.EnterSpin(System.Threading.ReaderWriterLockSlim.EnterSpinLockReason reason) Line 1606 C#
System.Private.CoreLib.dll!System.Threading.ReaderWriterLockSlim.TryEnterUpgradeableReadLockCore(System.Threading.ReaderWriterLockSlim.TimeoutTracker timeout) Line 681 C#
System.Text.Encoding.CodePages.dll!System.Text.EncodingTable.GetCodePageFromName(string name) Line 26 C#
System.Text.Encoding.CodePages.dll!System.Text.CodePagesEncodingProvider.GetEncoding(string name) Line 87 C#
System.Private.CoreLib.dll!System.Text.EncodingProvider.GetEncodingFromProvider(string encodingName) Line 94 C#
> System.Private.CoreLib.dll!System.Text.Encoding.GetEncoding(string name) Line 315 C#
System.Net.Http.dll!System.Net.Http.HttpContent.ReadBufferAsString(System.ArraySegment buffer, System.Net.Http.Headers.HttpContentHeaders headers) Unknown
System.Net.Http.dll!System.Net.Http.HttpContent.ReadBufferedContentAsString() Unknown
System.Net.Http.dll!System.Net.Http.HttpContent.ReadAsStringAsync.AnonymousMethod__36_0(System.Net.Http.HttpContent s) Unknown
System.Net.Http.dll!System.Net.Http.HttpContent.WaitAndReturnAsync(System.Threading.Tasks.Task waitTask, System.Net.Http.HttpContent state, System.Func returnFunc) Unknown
System.Net.Http.dll!System.Net.Http.HttpContent.ReadAsStringAsync() Unknown
```
looping with `spinIndex = 552403664`.
`EncodingTable.s_cacheLock` object looks like this:
| Name | Value | Type
-- | -- | -- | --
| CurrentReadCount | 0 | int
| HasNoWaiters | true | bool
| IsReadLockHeld | false | bool
| IsUpgradeableReadLockHeld | false | bool
| IsWriteLockHeld | false | bool
| RecursionPolicy | NoRecursion | System.Threading.LockRecursionPolicy
| RecursiveReadCount | 0 | int
| RecursiveUpgradeCount | 0 | int
| RecursiveWriteCount | 0 | int
| WaitingReadCount | 0 | int
| WaitingUpgradeCount | 0 | int
| WaitingWriteCount | 0 | int
| _fDisposed | false | bool
| _fIsReentrant | false | bool
| _fUpgradeThreadHoldingRead | false | bool
| _lockID | 2 | long
| _numReadWaiters | 0 | uint
| _numUpgradeWaiters | 0 | uint
| _numWriteUpgradeWaiters | 0 | uint
| _numWriteWaiters | 0 | uint
| _owners | 0 | uint
▶ | _readEvent | null | System.Threading.EventWaitHandle
▶ | _spinLock | {System.Threading.ReaderWriterLockSlim.SpinLock} | System.Threading.ReaderWriterLockSlim.SpinLock
▶ | _upgradeEvent | null | System.Threading.EventWaitHandle
| _upgradeLockOwnerId | -1 | int
▶ | _waitUpgradeEvent | null | System.Threading.EventWaitHandle
| _waiterStates | NoWaiters | System.Threading.ReaderWriterLockSlim.WaiterStates
▶ | _writeEvent | null | System.Threading.EventWaitHandle
| _writeLockOwnerId | -1 | int
It seems that the lock object is not held by anything but it seems to spin three threads anyway.
For completeness, this is how the `SpinLock` object looks:
| Name | Value | Type
-- | -- | -- | --
| EnterForEnterAnyReadDeprioritizedCount | 0 | ushort
| EnterForEnterAnyWriteDeprioritizedCount | 0 | ushort
| _enterDeprioritizationState | 0 | int
| _isLocked | 1 | int
",1,infinite spin lock in encoding getencoding we ve encountered weird deadlock live lock on net core and captured a memory dump of it the dump may contain sensitive information and hence i am not comfortable sharing it publicly but i will be happy to share it privately or dump specific information from it three threads are racing on the same spin lock which is marked as locked but there doesn t seem to be any thread actually locking it stack traces for the thread look like this thread system private corelib dll system threading readerwriterlockslim spinlock enterspin system threading readerwriterlockslim enterspinlockreason reason line c system private corelib dll system threading readerwriterlockslim tryenterupgradeablereadlockcore system threading readerwriterlockslim timeouttracker timeout line c system text encoding codepages dll system text encodingtable getcodepagefromname string name line c system text encoding codepages dll system text codepagesencodingprovider getencoding string name line c system private corelib dll system text encodingprovider getencodingfromprovider string encodingname line c system private corelib dll system text encoding getencoding string name line c mailclient mail dll mailclient mail mailheaderdictionary mailheaderdictionary system io stream stream unknown looping with spinindex thread system private corelib dll system threading readerwriterlockslim spinlock enterspin system threading readerwriterlockslim enterspinlockreason reason line c system private corelib dll system threading readerwriterlockslim tryenterupgradeablereadlockcore system threading readerwriterlockslim timeouttracker timeout line c system text encoding codepages dll system text encodingtable getcodepagefromname string name line c system text encoding codepages dll system text codepagesencodingprovider getencoding string name line c system private corelib dll system text encodingprovider getencodingfromprovider string encodingname line c system net mail dll 
system net mail mailaddress mailaddress string address string displayname system text encoding displaynameencoding line c system net mail dll system net mail mailaddress mailaddress string address line c looping with spinindex thread system private corelib dll system threading readerwriterlockslim spinlock enterspin system threading readerwriterlockslim enterspinlockreason reason line c system private corelib dll system threading readerwriterlockslim tryenterupgradeablereadlockcore system threading readerwriterlockslim timeouttracker timeout line c system text encoding codepages dll system text encodingtable getcodepagefromname string name line c system text encoding codepages dll system text codepagesencodingprovider getencoding string name line c system private corelib dll system text encodingprovider getencodingfromprovider string encodingname line c system private corelib dll system text encoding getencoding string name line c system net http dll system net http httpcontent readbufferasstring system arraysegment buffer system net http headers httpcontentheaders headers unknown system net http dll system net http httpcontent readbufferedcontentasstring unknown system net http dll system net http httpcontent readasstringasync anonymousmethod system net http httpcontent s unknown system net http dll system net http httpcontent waitandreturnasync system threading tasks task waittask system net http httpcontent state system func returnfunc unknown system net http dll system net http httpcontent readasstringasync unknown looping with spinindex encodingtable s cachelock object looks like this name value type currentreadcount int hasnowaiters true bool isreadlockheld false bool isupgradeablereadlockheld false bool iswritelockheld false bool recursionpolicy norecursion system threading lockrecursionpolicy recursivereadcount int recursiveupgradecount int recursivewritecount int waitingreadcount int waitingupgradecount int waitingwritecount int fdisposed false bool 
fisreentrant false bool fupgradethreadholdingread false bool lockid long numreadwaiters uint numupgradewaiters uint numwriteupgradewaiters uint numwritewaiters uint owners uint ▶ readevent null system threading eventwaithandle ▶ spinlock system threading readerwriterlockslim spinlock system threading readerwriterlockslim spinlock ▶ upgradeevent null system threading eventwaithandle upgradelockownerid int ▶ waitupgradeevent null system threading eventwaithandle waiterstates nowaiters system threading readerwriterlockslim waiterstates ▶ writeevent null system threading eventwaithandle writelockownerid int it seems that the lock object is not held by anything but it seems to spin three threads anyway for completeness this is how the spinlock object looks name value type enterforenteranyreaddeprioritizedcount ushort enterforenteranywritedeprioritizedcount ushort enterdeprioritizationstate int islocked int ,1
481875,13893422258.0,IssuesEvent,2020-10-19 13:30:47,kubeflow/manifests,https://api.github.com/repos/kubeflow/manifests,closed,Add a way to set networkRef and subnetworkRef to ContainerCluster (cnrm),kind/feature platform/gcp priority/p2,"Similar to the issue https://github.com/kubeflow/manifests/issues/1577, it would be great to add a way to set `networkRef` and `subnetworkRef` properties to the `ContainerCluster` manifest (cnrm):
https://github.com/kubeflow/manifests/blob/c728bc737cfadc2816ecd50c5c96ee59cd1d9b1a/gcp/v2/cnrm/cluster/cluster.yaml#L17-L62
Since `networkRef` and` subnetworkRef` have multiple properties (`external`, `name` and` namespace`), I'm not sure if this can be done using` kpt`. Anyway, if it is not possible to define using `kpt`, it would be good to include at least a manual way of doing this in the documentation.",1.0,"Add a way to set networkRef and subnetworkRef to ContainerCluster (cnrm) - Similar to the issue https://github.com/kubeflow/manifests/issues/1577, it would be great to add a way to set `networkRef` and `subnetworkRef` properties to the `ContainerCluster` manifest (cnrm):
https://github.com/kubeflow/manifests/blob/c728bc737cfadc2816ecd50c5c96ee59cd1d9b1a/gcp/v2/cnrm/cluster/cluster.yaml#L17-L62
Since `networkRef` and` subnetworkRef` have multiple properties (`external`, `name` and` namespace`), I'm not sure if this can be done using` kpt`. Anyway, if it is not possible to define using `kpt`, it would be good to include at least a manual way of doing this in the documentation.",0,add a way to set networkref and subnetworkref to containercluster cnrm similar to the issue it would be great to add a way to set networkref and subnetworkref properties to the containercluster manifest cnrm since networkref and subnetworkref have multiple properties external name and namespace i m not sure if this can be done using kpt anyway if it is not possible to define using kpt it would be good to include at least a manual way of doing this in the documentation ,0
2164,23865205035.0,IssuesEvent,2022-09-07 10:24:21,ppy/osu,https://api.github.com/repos/ppy/osu,closed,The target mod can't play and fails to calculate sr for certain beatmaps,area:mods type:reliability,"### Type
Game behaviour
### Bug description
I made [a local difficulty](https://drive.google.com/file/d/1RqwxLlSOf66mXAS6bFjYcJyjR8QWB5HL/view?usp=sharing) (osz in google drive) of [Galaxy Collapse](https://osu.ppy.sh/beatmapsets/396221#osu/862088) that only included the speedup from 2:35~2:56. Then, I selected the target mod and received several notifications saying that the game had failed to calculate the beatmap difficulty. I then tried to play the map and it behaved like a beatmap with no hit objects in it.
I've reproduced this bug on my mac (M1 Monterey) and my PC (Win 11) so it's probably platform independent.
### Screenshots or videos
https://user-images.githubusercontent.com/100527514/188699419-34f980db-cf02-4d43-831a-b14eae4be5cb.mp4
### Version
2022.902.1
### Logs
[database.log](https://github.com/ppy/osu/files/9499344/database.log)
[network.log](https://github.com/ppy/osu/files/9499345/network.log)
[performance.log](https://github.com/ppy/osu/files/9499346/performance.log)
[runtime.log](https://github.com/ppy/osu/files/9499347/runtime.log)
",True,"The target mod can't play and fails to calculate sr for certain beatmaps - ### Type
Game behaviour
### Bug description
I made [a local difficulty](https://drive.google.com/file/d/1RqwxLlSOf66mXAS6bFjYcJyjR8QWB5HL/view?usp=sharing) (osz in google drive) of [Galaxy Collapse](https://osu.ppy.sh/beatmapsets/396221#osu/862088) that only included the speedup from 2:35~2:56. Then, I selected the target mod and received several notifications saying that the game had failed to calculate the beatmap difficulty. I then tried to play the map and it behaved like a beatmap with no hit objects in it.
I've reproduced this bug on my mac (M1 Monterey) and my PC (Win 11) so it's probably platform independent.
### Screenshots or videos
https://user-images.githubusercontent.com/100527514/188699419-34f980db-cf02-4d43-831a-b14eae4be5cb.mp4
### Version
2022.902.1
### Logs
[database.log](https://github.com/ppy/osu/files/9499344/database.log)
[network.log](https://github.com/ppy/osu/files/9499345/network.log)
[performance.log](https://github.com/ppy/osu/files/9499346/performance.log)
[runtime.log](https://github.com/ppy/osu/files/9499347/runtime.log)
",1,the target mod can t play and fails to calculate sr for certain beatmaps type game behaviour bug description i made osz in google drive of that only included the speedup from then i selected the target mod and received several notifications saying that the game had failed to calculate the beatmap difficulty i then tried to play the map and it behaved like a beatmap with no hit objects in it i ve reproduced this bug on my mac monterey and my pc win so it s probably platform independent screenshots or videos img width alt screen shot at am src version logs ,1
1384,15725465012.0,IssuesEvent,2021-03-29 10:01:28,FoundationDB/fdb-kubernetes-operator,https://api.github.com/repos/FoundationDB/fdb-kubernetes-operator,opened,Add Prometheus metrics to the FDB sidecar,reliability,"We should add the Prometheus client to the sidecar to be able to expose metrics for the sidecar (and be able to define a scrape target). Besides the default metrics we might want to expose some additional information like current FDB version, Hash of the copied files (`fdb.cluster` and `monitor.conf`).",True,"Add Prometheus metrics to the FDB sidecar - We should add the Prometheus client to the sidecar to be able to expose metrics for the sidecar (and be able to define a scrape target). Besides the default metrics we might want to expose some additional information like current FDB version, Hash of the copied files (`fdb.cluster` and `monitor.conf`).",1,add prometheus metrics to the fdb sidecar we should add the prometheus client to the sidecar to be able to expose metrics for the sidecar and be able to define a scrape target besides the default metrics we might want to expose some additional information like current fdb version hash of the copied files fdb cluster and monitor conf ,1
29389,5664009102.0,IssuesEvent,2017-04-11 00:26:12,MDAnalysis/mdanalysis,https://api.github.com/repos/MDAnalysis/mdanalysis,closed,MDAnalysisTests raises exception when imported,defect testing,"### Expected behaviour
```python
import MDAnalysisTests
```
and
```python
import MDAnalysis.tests
```
should import (and make eg files available in `datafiles`.
### Actual behaviour
Both imports fail with
```
In [3]: import MDAnalysisTests
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
in ()
----> 1 import MDAnalysisTests
/Users/oliver/.virtualenvs/mda_clean/lib/python2.7/site-packages/MDAnalysisTests/__init__.py in ()
141 pass
142
--> 143 from MDAnalysisTests.util import (
144 block_import,
145 executable_not_found,
/Users/oliver/.virtualenvs/mda_clean/lib/python2.7/site-packages/MDAnalysisTests/util.py in ()
37 from functools import wraps
38 import importlib
---> 39 import mock
40 import os
41
/Users/oliver/.virtualenvs/mda_clean/lib/python2.7/site-packages/mock/__init__.py in ()
1 from __future__ import absolute_import
----> 2 import mock.mock as _mock
3 from mock.mock import *
4 __all__ = _mock.__all__
5 #import mock.mock as _mock
/Users/oliver/.virtualenvs/mda_clean/lib/python2.7/site-packages/mock/mock.py in ()
69 from pbr.version import VersionInfo
70
---> 71 _v = VersionInfo('mock').semantic_version()
72 __version__ = _v.release_string()
73 version_info = _v.version_tuple()
/Users/oliver/.virtualenvs/mda_clean/lib/python2.7/site-packages/pbr/version.pyc in semantic_version(self)
458 """"""Return the SemanticVersion object for this version.""""""
459 if self._semantic is None:
--> 460 self._semantic = self._get_version_from_pkg_resources()
461 return self._semantic
462
/Users/oliver/.virtualenvs/mda_clean/lib/python2.7/site-packages/pbr/version.pyc in _get_version_from_pkg_resources(self)
445 # installed into anything. Revert to setup-time logic.
446 from pbr import packaging
--> 447 result_string = packaging.get_version(self.package)
448 return SemanticVersion.from_pip_string(result_string)
449
/Users/oliver/.virtualenvs/mda_clean/lib/python2.7/site-packages/pbr/packaging.pyc in get_version(package_name, pre_version)
748 "" to pbr.version.VersionInfo. Project name {name} was""
749 "" given, but was not able to be found."".format(
--> 750 name=package_name))
751
752
Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. It's also possible that there is a mismatch between the package name in setup.cfg and the argument given to pbr.version.VersionInfo. Project name mock was given, but was not able to be found.
```
### Code to reproduce the behaviour
```python
import MDAnalysisTests
```
### Currently version of MDAnalysis:
(run `python -c ""import MDAnalysis as mda; print(mda.__version__)""`)
0.16.0 (pip upgraded in a virtualenv)",1.0,"MDAnalysisTests raises exception when imported - ### Expected behaviour
```python
import MDAnalysisTests
```
and
```python
import MDAnalysis.tests
```
should import (and make eg files available in `datafiles`.
### Actual behaviour
Both imports fail with
```
In [3]: import MDAnalysisTests
---------------------------------------------------------------------------
Exception Traceback (most recent call last)
in ()
----> 1 import MDAnalysisTests
/Users/oliver/.virtualenvs/mda_clean/lib/python2.7/site-packages/MDAnalysisTests/__init__.py in ()
141 pass
142
--> 143 from MDAnalysisTests.util import (
144 block_import,
145 executable_not_found,
/Users/oliver/.virtualenvs/mda_clean/lib/python2.7/site-packages/MDAnalysisTests/util.py in ()
37 from functools import wraps
38 import importlib
---> 39 import mock
40 import os
41
/Users/oliver/.virtualenvs/mda_clean/lib/python2.7/site-packages/mock/__init__.py in ()
1 from __future__ import absolute_import
----> 2 import mock.mock as _mock
3 from mock.mock import *
4 __all__ = _mock.__all__
5 #import mock.mock as _mock
/Users/oliver/.virtualenvs/mda_clean/lib/python2.7/site-packages/mock/mock.py in ()
69 from pbr.version import VersionInfo
70
---> 71 _v = VersionInfo('mock').semantic_version()
72 __version__ = _v.release_string()
73 version_info = _v.version_tuple()
/Users/oliver/.virtualenvs/mda_clean/lib/python2.7/site-packages/pbr/version.pyc in semantic_version(self)
458 """"""Return the SemanticVersion object for this version.""""""
459 if self._semantic is None:
--> 460 self._semantic = self._get_version_from_pkg_resources()
461 return self._semantic
462
/Users/oliver/.virtualenvs/mda_clean/lib/python2.7/site-packages/pbr/version.pyc in _get_version_from_pkg_resources(self)
445 # installed into anything. Revert to setup-time logic.
446 from pbr import packaging
--> 447 result_string = packaging.get_version(self.package)
448 return SemanticVersion.from_pip_string(result_string)
449
/Users/oliver/.virtualenvs/mda_clean/lib/python2.7/site-packages/pbr/packaging.pyc in get_version(package_name, pre_version)
748 "" to pbr.version.VersionInfo. Project name {name} was""
749 "" given, but was not able to be found."".format(
--> 750 name=package_name))
751
752
Exception: Versioning for this project requires either an sdist tarball, or access to an upstream git repository. It's also possible that there is a mismatch between the package name in setup.cfg and the argument given to pbr.version.VersionInfo. Project name mock was given, but was not able to be found.
```
### Code to reproduce the behaviour
```python
import MDAnalysisTests
```
### Currently version of MDAnalysis:
(run `python -c ""import MDAnalysis as mda; print(mda.__version__)""`)
0.16.0 (pip upgraded in a virtualenv)",0,mdanalysistests raises exception when imported expected behaviour python import mdanalysistests and python import mdanalysis tests should import and make eg files available in datafiles actual behaviour both imports fail with in import mdanalysistests exception traceback most recent call last in import mdanalysistests users oliver virtualenvs mda clean lib site packages mdanalysistests init py in pass from mdanalysistests util import block import executable not found users oliver virtualenvs mda clean lib site packages mdanalysistests util py in from functools import wraps import importlib import mock import os users oliver virtualenvs mda clean lib site packages mock init py in from future import absolute import import mock mock as mock from mock mock import all mock all import mock mock as mock users oliver virtualenvs mda clean lib site packages mock mock py in from pbr version import versioninfo v versioninfo mock semantic version version v release string version info v version tuple users oliver virtualenvs mda clean lib site packages pbr version pyc in semantic version self return the semanticversion object for this version if self semantic is none self semantic self get version from pkg resources return self semantic users oliver virtualenvs mda clean lib site packages pbr version pyc in get version from pkg resources self installed into anything revert to setup time logic from pbr import packaging result string packaging get version self package return semanticversion from pip string result string users oliver virtualenvs mda clean lib site packages pbr packaging pyc in get version package name pre version to pbr version versioninfo project name name was given but was not able to be found format name package name exception versioning for this project requires either an sdist tarball or access to an upstream git repository it s also possible that there is a mismatch between the package name in setup cfg and the 
argument given to pbr version versioninfo project name mock was given but was not able to be found code to reproduce the behaviour python import mdanalysistests currently version of mdanalysis run python c import mdanalysis as mda print mda version pip upgraded in a virtualenv ,0
1884,21427793070.0,IssuesEvent,2022-04-23 00:20:16,ewxrjk/rsbackup,https://api.github.com/repos/ewxrjk/rsbackup,closed,Commit removals in a more timely way,reliability maintenance,"```
static void commitRemovals(std::vector &removableBackups) {
for(;;) {
int retries = 0;
try {
globalConfig.getdb().begin();
for(auto &removable: removableBackups) {
if(removable.bulkRemover.getStatus() == 0) {
removable.backup->setStatus(PRUNED);
// TODO actually this value for pruned is a bit late.
removable.backup->pruned = Date::now();
removable.backup->update(globalConfig.getdb());
}
}
```
Ideally each removal would be committed when `rm` completes, not in bulk at the end. This doesn't matter much (if we fail first we'll `rm` something that doesn't exist and quickly succeed) but it is a bit untidy.",True,"Commit removals in a more timely way - ```
static void commitRemovals(std::vector &removableBackups) {
for(;;) {
int retries = 0;
try {
globalConfig.getdb().begin();
for(auto &removable: removableBackups) {
if(removable.bulkRemover.getStatus() == 0) {
removable.backup->setStatus(PRUNED);
// TODO actually this value for pruned is a bit late.
removable.backup->pruned = Date::now();
removable.backup->update(globalConfig.getdb());
}
}
```
Ideally each removal would be committed when `rm` completes, not in bulk at the end. This doesn't matter much (if we fail first we'll `rm` something that doesn't exist and quickly succeed) but it is a bit untidy.",1,commit removals in a more timely way static void commitremovals std vector removablebackups for int retries try globalconfig getdb begin for auto removable removablebackups if removable bulkremover getstatus removable backup setstatus pruned todo actually this value for pruned is a bit late removable backup pruned date now removable backup update globalconfig getdb ideally each removal would be committed when rm completes not in bulk at the end this doesn t matter much if we fail first we ll rm something that doesn t exist and quickly succeed but it is a bit untidy ,1
162266,13885295668.0,IssuesEvent,2020-10-18 19:25:01,data-describe/data-describe,https://api.github.com/repos/data-describe/data-describe,closed,CONTRIBUTING.md is hard to read,documentation enhancement,"Your request may already be reported!
Please search on the [issue tracker](../?q=label%3Aenhancement+is%3Aissue) before creating one.
**Desired Change**
Reorganize/update the markdown to make the file easier to read
**Additional Info**
",1.0,"CONTRIBUTING.md is hard to read - Your request may already be reported!
Please search on the [issue tracker](../?q=label%3Aenhancement+is%3Aissue) before creating one.
**Desired Change**
Reorganize/update the markdown to make the file easier to read
**Additional Info**
",0,contributing md is hard to read your request may already be reported please search on the q label is before creating one desired change reorganize update the markdown to make the file easier to read additional info ,0
2745,27392638877.0,IssuesEvent,2023-02-28 17:16:01,dotCMS/core,https://api.github.com/repos/dotCMS/core,closed,Update Starter Logic,Merged QA : Passed Internal dotCMS : Privacy Team : Falcon Type : Task OKR : Reliability Next Release,"We need to update our starter logic to include the correct asset directories
### Proposed Priority
Priority 2 - Important
https://docs.google.com/document/d/1UZO9md55XL5xAz5FReJMLy90SnteXZSYCDTI3OX0cP4/edit
",True,"Update Starter Logic - We need to update our starter logic to include the correct asset directories
### Proposed Priority
Priority 2 - Important
https://docs.google.com/document/d/1UZO9md55XL5xAz5FReJMLy90SnteXZSYCDTI3OX0cP4/edit
",1,update starter logic we need to update our starter logic to include the correct asset directories proposed priority priority important ,1
309,6499003375.0,IssuesEvent,2017-08-22 19:43:19,Storj/bridge,https://api.github.com/repos/Storj/bridge,closed,Monitor will stop pinging farmers,monitor reliability,"Last log messages were: Failed to get retrieval pointer, Unable to replicate shard, and Replicating shard.",True,"Monitor will stop pinging farmers - Last log messages were: Failed to get retrieval pointer, Unable to replicate shard, and Replicating shard.",1,monitor will stop pinging farmers last log messages where failed to get retrieval pointer unable to replicate shard and replicating shard ,1
2944,30507162365.0,IssuesEvent,2023-07-18 17:47:02,dotnet/runtime,https://api.github.com/repos/dotnet/runtime,closed,Frequent WebSocket Compression Segfault,bug area-System.Net tenet-reliability in-pr,"### Description
I run a service that heavily uses WebSockets, with anywhere from 3-8k concurrent WebSocket connections per server. These are long-lived connections; most last from 12 hours to a few days. I enabled websocket compression because size and latency are essential for my application, but I have been getting a super frequent crash due to it.
I'm running Ubuntu server 22.04 with the most up-to-date MS package repo versions of dotnet 7.0, aspnet, etc. My service uses the WebSocket class directly; I'm not using any middleware for handling the streams. I have around eight servers running, and I see about 1-3 crashes per server per day due to this bug.
Please let me know if I can provide any more helpful debugging information. I'm able to attach lldb and wait for a segfault, so I can gather any other info needed.
### Reproduction Steps
Set up an aspnet server that accepts websockets and enable web socket compression. Run the server as normal, allowing messages to be sent on the WebSocket and sockets to come and go. Eventually, the dotnet process will segfault, with this stack:
```
OS Thread Id: 0x4242a (212)
Child SP IP Call Site
00007F2BDD7F9038 00007f2c4d07f401 [InlinedCallFrame: 00007f2bdd7f9038] Interop+ZLib.Deflate(ZStream*, FlushCode)
00007F2BDD7F9038 00007f6c8109b576 [InlinedCallFrame: 00007f2bdd7f9038] Interop+ZLib.Deflate(ZStream*, FlushCode)
00007F2BDD7F9030 00007F6C8109B576 System.Net.WebSockets.Compression.WebSocketDeflater.Deflate(ZLibStreamHandle, FlushCode)
00007F2BDD7F90C0 00007F6C810F4458 System.Net.WebSockets.Compression.WebSocketDeflater.UnsafeFlush(System.Span`1, Boolean ByRef)
00007F2BDD7F90F0 00007F6C810F4268 System.Net.WebSockets.Compression.WebSocketDeflater.DeflatePrivate(System.ReadOnlySpan`1, System.Span`1, Boolean, Int32 ByRef, Int32 ByRef, Boolean ByRef)
00007F2BDD7F9140 00007F6C810F4012 System.Net.WebSockets.Compression.WebSocketDeflater.Deflate(System.ReadOnlySpan`1, Boolean)
00007F2BDD7F91C0 00007F6C80BCBD17 System.Net.WebSockets.ManagedWebSocket.WriteFrameToSendBuffer(MessageOpcode, Boolean, Boolean, System.ReadOnlySpan`1)
00007F2BDD7F9210 00007F6C80BCB3C8 System.Net.WebSockets.ManagedWebSocket+d__58.MoveNext()
00007F2BDD7F9310 00007F6C80BCB0C7 System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[[System.Net.WebSockets.ManagedWebSocket+d__58, System.Net.WebSockets]](d__58 ByRef)
00007F2BDD7F9350 00007F6C80BCAFE4 System.Net.WebSockets.ManagedWebSocket.SendFrameFallbackAsync(MessageOpcode, Boolean, Boolean, System.ReadOnlyMemory`1, System.Threading.Tasks.Task, System.Threading.CancellationToken)
00007F2BDD7F9400 00007F6C80BCAB52 System.Net.WebSockets.ManagedWebSocket.SendAsync(System.ReadOnlyMemory`1, System.Net.WebSockets.WebSocketMessageType, System.Net.WebSockets.WebSocketMessageFlags, System.Threading.CancellationToken)
...
```
### Expected behavior
The dotnet process shouldn't crash; at a minimum, it should throw a managed exception.
### Actual behavior
After some time of my service running, the process segfaults with the stack above.
### Regression?
This has been happening for a while, I'm not sure when it started, but it's been at least a few months.
### Known Workarounds
None. :(
### Configuration
Latest dotnet and aspnet packages from the Microsoft package repo for Ubuntu Jammy. I also have the most up-to-date system packages.
dotnet sdk 7.0.302
dotnet runtime 7.0.5
aspnet 7.0.5
Ubuntu 22.04.2 LTS (Jammy Jellyfish), x64.
### Other information
_No response_",True,"Frequent WebSocket Compression Segfault - ### Description
I run a service that heavily uses WebSockets, with anywhere from 3-8k concurrent WebSocket connections per server. These are long-lived connections; most last from 12 hours to a few days. I enabled websocket compression because size and latency are essential for my application, but I have been getting a super frequent crash due to it.
I'm running Ubuntu server 22.04 with the most up-to-date MS package repo versions of dotnet 7.0, aspnet, etc. My service uses the WebSocket class directly; I'm not using any middleware for handling the streams. I have around eight servers running, and I see about 1-3 crashes per server per day due to this bug.
Please let me know if I can provide any more helpful debugging information. I'm able to attach lldb and wait for a segfault, so I can gather any other info needed.
### Reproduction Steps
Set up an aspnet server that accepts websockets and enable web socket compression. Run the server as normal, allowing messages to be sent on the WebSocket and sockets to come and go. Eventually, the dotnet process will segfault, with this stack:
```
OS Thread Id: 0x4242a (212)
Child SP IP Call Site
00007F2BDD7F9038 00007f2c4d07f401 [InlinedCallFrame: 00007f2bdd7f9038] Interop+ZLib.Deflate(ZStream*, FlushCode)
00007F2BDD7F9038 00007f6c8109b576 [InlinedCallFrame: 00007f2bdd7f9038] Interop+ZLib.Deflate(ZStream*, FlushCode)
00007F2BDD7F9030 00007F6C8109B576 System.Net.WebSockets.Compression.WebSocketDeflater.Deflate(ZLibStreamHandle, FlushCode)
00007F2BDD7F90C0 00007F6C810F4458 System.Net.WebSockets.Compression.WebSocketDeflater.UnsafeFlush(System.Span`1, Boolean ByRef)
00007F2BDD7F90F0 00007F6C810F4268 System.Net.WebSockets.Compression.WebSocketDeflater.DeflatePrivate(System.ReadOnlySpan`1, System.Span`1, Boolean, Int32 ByRef, Int32 ByRef, Boolean ByRef)
00007F2BDD7F9140 00007F6C810F4012 System.Net.WebSockets.Compression.WebSocketDeflater.Deflate(System.ReadOnlySpan`1, Boolean)
00007F2BDD7F91C0 00007F6C80BCBD17 System.Net.WebSockets.ManagedWebSocket.WriteFrameToSendBuffer(MessageOpcode, Boolean, Boolean, System.ReadOnlySpan`1)
00007F2BDD7F9210 00007F6C80BCB3C8 System.Net.WebSockets.ManagedWebSocket+d__58.MoveNext()
00007F2BDD7F9310 00007F6C80BCB0C7 System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[[System.Net.WebSockets.ManagedWebSocket+d__58, System.Net.WebSockets]](d__58 ByRef)
00007F2BDD7F9350 00007F6C80BCAFE4 System.Net.WebSockets.ManagedWebSocket.SendFrameFallbackAsync(MessageOpcode, Boolean, Boolean, System.ReadOnlyMemory`1, System.Threading.Tasks.Task, System.Threading.CancellationToken)
00007F2BDD7F9400 00007F6C80BCAB52 System.Net.WebSockets.ManagedWebSocket.SendAsync(System.ReadOnlyMemory`1, System.Net.WebSockets.WebSocketMessageType, System.Net.WebSockets.WebSocketMessageFlags, System.Threading.CancellationToken)
...
```
### Expected behavior
The dotnet process shouldn't crash; at a minimum, it should throw a managed exception.
### Actual behavior
After some time of my service running, the process segfaults with the stack above.
### Regression?
This has been happening for a while, I'm not sure when it started, but it's been at least a few months.
### Known Workarounds
None. :(
### Configuration
Latest dotnet and aspnet packages from the Microsoft package repo for Ubuntu Jammy. I also have the most up-to-date system packages.
dotnet sdk 7.0.302
dotnet runtime 7.0.5
aspnet 7.0.5
Ubuntu 22.04.2 LTS (Jammy Jellyfish), x64.
### Other information
_No response_",1,frequent websocket compression segfault description i run a service that heavily uses websockets with anywhere from concurrent websockets connections per server these are long lived connections most are from hours to a few days i enabled websocket compression because size and latency is essential for my application but i have been getting a super frequent crash due to it i m running ubuntu server with the most up to date ms package repo versions of dotnet aspnet etc my service uses the websocket class directly i m not using any middleware for handling the streams i have around eight server running and i see about crashes per server per day due to this bug please let me know if i can provide any more helpful debugging information i m able to attach lldb and wait for a segfault so i can gather any other info needed reproduction steps set up an aspnet server that accepts websockets and enable web socket compression run the server as normal allowing messages to be sent on the websocket and sockets to come and go eventually the dotnet process will segfault with this stack os thread id child sp ip call site interop zlib deflate zstream flushcode interop zlib deflate zstream flushcode system net websockets compression websocketdeflater deflate zlibstreamhandle flushcode system net websockets compression websocketdeflater unsafeflush system span boolean byref system net websockets compression websocketdeflater deflateprivate system readonlyspan system span boolean byref byref boolean byref system net websockets compression websocketdeflater deflate system readonlyspan boolean system net websockets managedwebsocket writeframetosendbuffer messageopcode boolean boolean system readonlyspan system net websockets managedwebsocket d movenext system runtime compilerservices asyncmethodbuildercore start d byref system net websockets managedwebsocket sendframefallbackasync messageopcode boolean boolean system readonlymemory system threading tasks task system 
threading cancellationtoken system net websockets managedwebsocket sendasync system readonlymemory system net websockets websocketmessagetype system net websockets websocketmessageflags system threading cancellationtoken expected behavior the dotnet process shouldn t crash at a minimum it should throw a managed exception actual behavior after some time of my service running the process segfaults with the stack above regression this has been happening for a while i m not sure when it started but it s been at least a few months known workarounds none configuration latest dotnet and aspnet packages from the microsoft package repo for ubunut jammy i also have the most up to date system packages dotnet sdk dotnet runtime aspnet ubuntu lts jammy jellyfish other information no response ,1
2496,25837969424.0,IssuesEvent,2022-12-12 21:18:49,pulumi/pulumi-docker,https://api.github.com/repos/pulumi/pulumi-docker,opened,Explore automapping for resource marshaling,impact/reliability kind/engineering,"Resource marshalling is manually implemented. While there are unit tests, it would be great if we could instead automap resource inputs into Go structs, and there is a bit of tooling that could be leveraged.
Prior art: https://github.com/pulumi/pulumi-docker/pull/435
",True,"Explore automapping for resource marshaling - Resource marshalling is manually implemented. While there are unit tests, it would be great if we could instead automap resource inputs into Go structs, and there is a bit of tooling that could be leveraged.
Prior art: https://github.com/pulumi/pulumi-docker/pull/435
",1,explore automapping for resource marshaling resource marshalling is manually implemented while there s unit tests it would be great if we could instead automap resource inputs into go structs and there is a bit of tooling that could be leveraged prior art ,1
1249,14290208843.0,IssuesEvent,2020-11-23 20:30:58,argoproj/argo,https://api.github.com/repos/argoproj/argo,closed,Transient database errors with offload enabled cause workflow to fail,bug epic/reliability,"## Summary
If there is a transient database connection error with offload enabled, then active workflows are marked as failed. I would at least expect some kind of retry.
## Diagnostics
What Kubernetes provider are you using?
GKE
What version of Argo Workflows are you running?
2.11.0-rc1
```
Paste the logs from the workflow controller:
kubectl logs -n argo $(kubectl get pods -l app=workflow-controller -n argo -o name) | grep ${workflow}
```
```
time=""2020-11-02T13:32:42Z"" level=error msg=""hydration failed: dial tcp 10.X.X.X:5432: connect: connection refused"" namespace=default workflow=XXXX
time=""2020-11-02T13:32:42Z"" level=error msg=""Failed to archive workflow"" err=""dial tcp 10.X.X.X:5432: connect: connection refused"" namespace=default workflow=XXXX
```
Seems to be caused by controller.go:517, which marks any workflow as 'error' if hydration fails.
---
**Message from the maintainers**:
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
",True,"Transient database errors with offload enabled cause workflow to fail - ## Summary
If there is a transient database connection error with offload enabled, then active workflows are marked as failed. I would at least expect some kind of retry.
## Diagnostics
What Kubernetes provider are you using?
GKE
What version of Argo Workflows are you running?
2.11.0-rc1
```
Paste the logs from the workflow controller:
kubectl logs -n argo $(kubectl get pods -l app=workflow-controller -n argo -o name) | grep ${workflow}
```
```
time=""2020-11-02T13:32:42Z"" level=error msg=""hydration failed: dial tcp 10.X.X.X:5432: connect: connection refused"" namespace=default workflow=XXXX
time=""2020-11-02T13:32:42Z"" level=error msg=""Failed to archive workflow"" err=""dial tcp 10.X.X.X:5432: connect: connection refused"" namespace=default workflow=XXXX
```
Seems to be caused by controller.go:517, which marks any workflow as 'error' if hydration fails.
---
**Message from the maintainers**:
Impacted by this bug? Give it a 👍. We prioritise the issues with the most 👍.
",1,transient database errors with offload enabled cause workflow to fail summary if there is a transient database connection error with offload enabled then active workflows are marked as failed i would at least expect some kind of retry diagnostics what kubernetes provider are you using gke what version of argo workflows are you running paste the logs from the workflow controller kubectl logs n argo kubectl get pods l app workflow controller n argo o name grep workflow time level error msg hydration failed dial tcp x x x connect connection refused namespace default workflow xxxx time level error msg failed to archive workflow err dial tcp x x x connect connection refused namespace default workflow xxxx seems to be caused by controller go which marks any workflows as error if hydration fails message from the maintainers impacted by this bug give it a 👍 we prioritise the issues with the most 👍 ,1
2402,25163773524.0,IssuesEvent,2022-11-10 18:55:38,StormSurgeLive/asgs,https://api.github.com/repos/StormSurgeLive/asgs,opened,add error checking for ftp and filesystem combinations in `get_atcf.pl`,reliability ATCF,"The `get_atcf.pl` script automatically tries to get forecast data from an ftp site if `TRIGGER=ftp`, even if `FTPSITE=filesystem`. This is an incompatible combination that should be detected and flagged with an error message. ",True,"add error checking for ftp and filesystem combinations in `get_atcf.pl` - The `get_atcf.pl` script automatically tries to get forecast data from an ftp site if `TRIGGER=ftp`, even if `FTPSITE=filesystem`. This is an incompatible combination that should be detected and flagged with an error message. ",1,add error checking for ftp and filesystem combinations in get atcf pl the get atcf pl script automatically tries to get forecast data from an ftp site if trigger ftp even if ftpsite filesystem this is an incompatible combination that should be detected and flagged with an error message ,1
575,8679254875.0,IssuesEvent,2018-11-30 22:56:36,dotnet/corefx,https://api.github.com/repos/dotnet/corefx,closed,OnCompleted events should run after pipe is completed,area-System.IO.Pipelines tenet-reliability,So all blocks are returned at that point and it's safe to `Dispose` the pool.,True,OnCompleted events should run after pipe is completed - So all blocks are returned at that point and it's safe to `Dispose` the pool.,1,oncompleted events should run after pipe is completed so all blocks are returned at that point and it s safe to dispose the pool ,1
267495,28509064247.0,IssuesEvent,2023-04-19 01:32:11,dpteam/RK3188_TABLET,https://api.github.com/repos/dpteam/RK3188_TABLET,closed,"CVE-2011-4131 (Medium) detected in randomv3.0.66, linuxv3.0 - autoclosed",Mend: dependency security vulnerability,"## CVE-2011-4131 - Medium Severity Vulnerability
Vulnerable Libraries - randomv3.0.66, linuxv3.0
Vulnerability Details
The NFSv4 implementation in the Linux kernel before 3.2.2 does not properly handle bitmap sizes in GETACL replies, which allows remote NFS servers to cause a denial of service (OOPS) by sending an excessive number of bitmap words.
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2011-4131 (Medium) detected in randomv3.0.66, linuxv3.0 - autoclosed - ## CVE-2011-4131 - Medium Severity Vulnerability
Vulnerable Libraries - randomv3.0.66, linuxv3.0
Vulnerability Details
The NFSv4 implementation in the Linux kernel before 3.2.2 does not properly handle bitmap sizes in GETACL replies, which allows remote NFS servers to cause a denial of service (OOPS) by sending an excessive number of bitmap words.
***
Step up your Open Source Security Game with Mend [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve medium detected in autoclosed cve medium severity vulnerability vulnerable libraries vulnerability details the implementation in the linux kernel before does not properly handle bitmap sizes in getacl replies which allows remote nfs servers to cause a denial of service oops by sending an excessive number of bitmap words publish date url a href cvss score details base score metrics exploitability metrics attack vector adjacent attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact none integrity impact none availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with mend ,0
403205,27405077361.0,IssuesEvent,2023-03-01 05:51:14,busykoala/fastapi-opa,https://api.github.com/repos/busykoala/fastapi-opa,closed,Integrate a documentation linter to improve readability,documentation enhancement feature,"**Is your feature request related to a problem? Please describe.**
The documentation is crucial to the users of this package. It is important to continuously work on the documentation and make it as readable as possible.
**Describe the solution you'd like**
One way to improve upon the documentation is to follow some guidelines.
[Vale](https://vale.sh/) provides some great templates to integrate with widely used guidelines and check them for documentation files.
**Describe alternatives you've considered**
There are many other solutions on the market while Vale addresses and combines some of the most important topics (general guidelines, naive linting, readability, sexism, etc.).
",1.0,"Integrate a documentation linter to improve readability - **Is your feature request related to a problem? Please describe.**
The documentation is crucial to the users of this package. It is important to continuously work on the documentation and make it as readable as possible.
**Describe the solution you'd like**
One way to improve upon the documentation is to follow some guidelines.
[Vale](https://vale.sh/) provides some great templates to integrate with widely used guidelines and check them for documentation files.
**Describe alternatives you've considered**
There are many other solutions on the market while Vale addresses and combines some of the most important topics (general guidelines, naive linting, readability, sexism, etc.).
",0,integrate a documentation linter to improve readability is your feature request related to a problem please describe the documentation is crucial to the users of this package it is important to continuously work on the documentation and make it as readable as possible describe the solution you d like one way to improve upon the documentation is to follow some guidelines provides some great templates to integrate with widely used guidelines and check them for documentation files describe alternatives you ve considered there are many other solutions on the market while vale addresses and combines some of the most important topics general guidelines naive linting readability sexism etc ,0
231098,7623669061.0,IssuesEvent,2018-05-03 15:37:15,quipucords/quipucords,https://api.github.com/repos/quipucords/quipucords,closed,Investigate Process.terminate issue in docker container,bug priority - high,"## Specify type:
- Bug
### Priority:
- High
___
## Description:
This is a possible bug. QE logs have shown instances where the pause/cancel task doesn't stop processing even though the python Process.terminate() method has been called. This issue is to investigate whether the pause/cancel work differently when run in a docker container.
___
## Acceptance Criteria:
- [ ] Verify that pause/cancel work in a docker container
",1.0,"Investigate Process.terminate issue in docker container - ## Specify type:
- Bug
### Priority:
- High
___
## Description:
This is a possible bug. QE logs have shown instances where the pause/cancel task doesn't stop processing even though the python Process.terminate() method has been called. This issue is to investigate whether the pause/cancel work differently when run in a docker container.
___
## Acceptance Criteria:
- [ ] Verify that pause/cancel work in a docker container
",0,investigate process terminate issue in docker container specify type bug priority high description this is a possible bug qe logs have shown instances where the pause cancel task doesn t stop processing even though the python process terminate method has been called this issue is to investigate whether the pause cancel work differently when run in a docker container acceptance criteria verify that pause cancel work in a docker container ,0
3002,30924858020.0,IssuesEvent,2023-08-06 11:06:53,ppy/osu,https://api.github.com/repos/ppy/osu,opened,Rapidly changing changelog listings cause a crash while disconnected from the internet,area:overlay-changelog type:reliability,"### Discussed in https://github.com/ppy/osu/discussions/24474
Originally posted by **TheRealStevie** August 6, 2023
If you're disconnected from the internet (done manually at 0:16 in the video), then changing the changelog listings quickly enough can cause a crash.
https://github.com/ppy/osu/assets/75444413/12b4cfc0-154d-47c1-a948-ca22ce2ffb8c
[database.log](https://github.com/ppy/osu/files/12269105/database.log)
[input.log](https://github.com/ppy/osu/files/12269106/input.log)
[network.log](https://github.com/ppy/osu/files/12269107/network.log)
[performance.log](https://github.com/ppy/osu/files/12269108/performance.log)
[runtime.log](https://github.com/ppy/osu/files/12269109/runtime.log)
[updater.log](https://github.com/ppy/osu/files/12269110/updater.log)
Looking at the runtime logs, I thought there was something similar to the logs it pointed out at https://github.com/ppy/osu/pull/20504 that fixes [an issue I reported](https://github.com/ppy/osu/discussions/20448), so I'll reference it here just in case.
",True,"Rapidly changing changelog listings cause a crash while disconnected from the internet - ### Discussed in https://github.com/ppy/osu/discussions/24474
Originally posted by **TheRealStevie** August 6, 2023
If you're disconnected from the internet (done manually at 0:16 in the video), then changing the changelog listings quickly enough can cause a crash.
https://github.com/ppy/osu/assets/75444413/12b4cfc0-154d-47c1-a948-ca22ce2ffb8c
[database.log](https://github.com/ppy/osu/files/12269105/database.log)
[input.log](https://github.com/ppy/osu/files/12269106/input.log)
[network.log](https://github.com/ppy/osu/files/12269107/network.log)
[performance.log](https://github.com/ppy/osu/files/12269108/performance.log)
[runtime.log](https://github.com/ppy/osu/files/12269109/runtime.log)
[updater.log](https://github.com/ppy/osu/files/12269110/updater.log)
Looking at the runtime logs, I thought there was something similar to the logs it pointed out at https://github.com/ppy/osu/pull/20504 that fixes [an issue I reported](https://github.com/ppy/osu/discussions/20448), so I'll reference it here just in case.
",1,rapidly changing changelog listings cause a crash while disconnected from the internet discussed in originally posted by therealstevie august if you re disconnected from the internet done manually at in the video then by changing the changelog listings quick enough can cause a crash looking at the runtime logs i thought there was something similar to the logs it pointed out at that fixes so i ll reference it here just in case ,1
2157,23826113203.0,IssuesEvent,2022-09-05 14:59:53,adoptium/infrastructure,https://api.github.com/repos/adoptium/infrastructure,closed,Ansible request for avoiding 31/32-bit package installs on RHEL8 (s390x as a minimum),arch:s390x (zLinux) ansible reliability currency,"Please put the name of the software product (and affected platforms if relevant) in the title of this issue
Delete as appropriate from this list:
- Bug in ansible playbook
Details: Playbook fails on RHEL8/s390x - we should have no need for the 31-bit support on z so this can probably be removed.
```
TASK [Common : Install additional build tools for RHEL on s390x] ***************
failed: [test-marist-rhel8-s390x-1] (item=glibc.s390) => {""ansible_loop_var"": ""item"", ""changed"": false, ""failures"": [""No package glibc.s390 available.""], ""item"": ""glibc.s390"", ""msg"": ""Failed to install some of the specified packages"", ""rc"": 1, ""results"": []}
failed: [test-marist-sles15-s390x-2] (item=glibc.s390) => {""ansible_loop_var"": ""item"", ""changed"": false, ""failures"": [""No package glibc.s390 available.""], ""item"": ""glibc.s390"", ""msg"": ""Failed to install some of the specified packages"", ""rc"": 1, ""results"": []}
changed: [test-marist-rhel7-s390x-1] => (item=glibc.s390)
changed: [test-marist-rhel7-s390x-2] => (item=glibc.s390)
failed: [test-marist-rhel8-s390x-1] (item=glibc-devel.s390) => {""ansible_loop_var"": ""item"", ""changed"": false, ""failures"": [""No package glibc-devel.s390 available.""], ""item"": ""glibc-devel.s390"", ""msg"": ""Failed to install some of the specified packages"", ""rc"": 1, ""results"": []}
failed: [test-marist-sles15-s390x-2] (item=glibc-devel.s390) => {""ansible_loop_var"": ""item"", ""changed"": false, ""failures"": [""No package glibc-devel.s390 available.""], ""item"": ""glibc-devel.s390"", ""msg"": ""Failed to install some of the specified packages"", ""rc"": 1, ""results"": []}
changed: [test-marist-rhel7-s390x-1] => (item=glibc-devel.s390)
changed: [test-marist-rhel7-s390x-2] => (item=glibc-devel.s390)
failed: [test-marist-rhel8-s390x-1] (item=libstdc++.s390) => {""ansible_loop_var"": ""item"", ""changed"": false, ""failures"": [""No package libstdc++.s390 available.""], ""item"": ""libstdc++.s390"", ""msg"": ""Failed to install some of the specified packages"", ""rc"": 1, ""results"": []}
failed: [test-marist-sles15-s390x-2] (item=libstdc++.s390) => {""ansible_loop_var"": ""item"", ""changed"": false, ""failures"": [""No package libstdc++.s390 available.""], ""item"": ""libstdc++.s390"", ""msg"": ""Failed to install some of the specified packages"", ""rc"": 1, ""results"": []}
```
",True,"Ansible request for avoiding 31/32-bit package installs on RHEL8 (s390x as a minimum) - Please put the name of the software product (and affected platforms if relevant) in the title of this issue
Delete as appropriate from this list:
- Bug in ansible playbook
Details: Playbook fails on RHEL8/s390x - we should have no need for the 31-bit support on z so this can probably be removed.
```
TASK [Common : Install additional build tools for RHEL on s390x] ***************
failed: [test-marist-rhel8-s390x-1] (item=glibc.s390) => {""ansible_loop_var"": ""item"", ""changed"": false, ""failures"": [""No package glibc.s390 available.""], ""item"": ""glibc.s390"", ""msg"": ""Failed to install some of the specified packages"", ""rc"": 1, ""results"": []}
failed: [test-marist-sles15-s390x-2] (item=glibc.s390) => {""ansible_loop_var"": ""item"", ""changed"": false, ""failures"": [""No package glibc.s390 available.""], ""item"": ""glibc.s390"", ""msg"": ""Failed to install some of the specified packages"", ""rc"": 1, ""results"": []}
changed: [test-marist-rhel7-s390x-1] => (item=glibc.s390)
changed: [test-marist-rhel7-s390x-2] => (item=glibc.s390)
failed: [test-marist-rhel8-s390x-1] (item=glibc-devel.s390) => {""ansible_loop_var"": ""item"", ""changed"": false, ""failures"": [""No package glibc-devel.s390 available.""], ""item"": ""glibc-devel.s390"", ""msg"": ""Failed to install some of the specified packages"", ""rc"": 1, ""results"": []}
failed: [test-marist-sles15-s390x-2] (item=glibc-devel.s390) => {""ansible_loop_var"": ""item"", ""changed"": false, ""failures"": [""No package glibc-devel.s390 available.""], ""item"": ""glibc-devel.s390"", ""msg"": ""Failed to install some of the specified packages"", ""rc"": 1, ""results"": []}
changed: [test-marist-rhel7-s390x-1] => (item=glibc-devel.s390)
changed: [test-marist-rhel7-s390x-2] => (item=glibc-devel.s390)
failed: [test-marist-rhel8-s390x-1] (item=libstdc++.s390) => {""ansible_loop_var"": ""item"", ""changed"": false, ""failures"": [""No package libstdc++.s390 available.""], ""item"": ""libstdc++.s390"", ""msg"": ""Failed to install some of the specified packages"", ""rc"": 1, ""results"": []}
failed: [test-marist-sles15-s390x-2] (item=libstdc++.s390) => {""ansible_loop_var"": ""item"", ""changed"": false, ""failures"": [""No package libstdc++.s390 available.""], ""item"": ""libstdc++.s390"", ""msg"": ""Failed to install some of the specified packages"", ""rc"": 1, ""results"": []}
```
",1,ansible request for avoiding bit package installs on as a minimum please put the name of the software product and affected platforms if relevant in the title of this issue delete as appropriate from this list bug in ansible playbook details playbook fails on we should have no need for the bit support on z so this can probably be removed task failed item glibc ansible loop var item changed false failures item glibc msg failed to install some of the specified packages rc results failed item glibc ansible loop var item changed false failures item glibc msg failed to install some of the specified packages rc results changed item glibc changed item glibc failed item glibc devel ansible loop var item changed false failures item glibc devel msg failed to install some of the specified packages rc results failed item glibc devel ansible loop var item changed false failures item glibc devel msg failed to install some of the specified packages rc results changed item glibc devel changed item glibc devel failed item libstdc ansible loop var item changed false failures item libstdc msg failed to install some of the specified packages rc results failed item libstdc ansible loop var item changed false failures item libstdc msg failed to install some of the specified packages rc results ,1
1683,18669214659.0,IssuesEvent,2021-10-30 11:32:10,beattosetto/beattosetto,https://api.github.com/repos/beattosetto/beattosetto,opened,Fix display bug in beatmap card and beatmap collection page,frontend area:home area:collection fix from discussion type:reliability,"When a user writes a description in a collection that is too long, the beatmap card will change shape in a way we do not expect, and it will overflow into the header too.
- [ ] Fix text overflow in collection card
- [ ] Fix text and picture overflow in collection card",True,"Fix display bug in beatmap card and beatmap collection page - When a user writes a description in a collection that is too long, the beatmap card will change shape in a way we do not expect, and it will overflow into the header too.
- [ ] Fix text overflow in collection card
- [ ] Fix text and picture overflow in collection card",1,fix display bug in beatmap card and beatmap collection page when user wrie description in collection that is too long the beatmap card will change the shape that we are not expected and it will overflow in header too fix text over in collection card fix text and picture overflow in collection card,1
2581,26500988020.0,IssuesEvent,2023-01-18 10:15:22,camunda/zeebe,https://api.github.com/repos/camunda/zeebe,closed,Calling Rebalance API causes short disruption,kind/bug severity/low area/reliability,"**Describe the bug**
Based on our last hack week, we added new functionality (a cronjob) to our benchmarks setup, which triggers the rebalance API continuously https://github.com/zeebe-io/benchmark-helm/pull/7 so we can be sure that we always have a good leader distribution.
I have run a test benchmark for a while, and we can see that every API call triggers a disruption. While the average performance still looks OK, the processing execution latency is highly impacted. We should investigate this further, because it looks to me as if the leadership was already well distributed most of the time, so I would expect no impact.
**General**

**Raft**
We can see in the Raft metrics that the Raft requests go down for a short period of time. Furthermore, we see only two places where leader changes were actually happening. The heartbeat misses are likely caused by the drop of append requests.


**Latency**
The process execution latency goes up to 10s, potentially due to the append request drop.

**To Reproduce**
Run a benchmark with the most recent changes https://github.com/zeebe-io/benchmark-helm/pull/7
**Expected behavior**
If the leadership is already well distributed, we shouldn't see any impact.
**Environment:**
- OS:
- Zeebe Version: 8.1.x, 8.2
- Configuration:
",True,"Calling Rebalance API causes short disruption - **Describe the bug**
Based on our last hack week, we added new functionality (a cronjob) to our benchmarks setup, which triggers the rebalance API continuously https://github.com/zeebe-io/benchmark-helm/pull/7 so we can be sure that we always have a good leader distribution.
I have run a test benchmark for a while, and we can see that every API call triggers a disruption. While the average performance still looks OK, the processing execution latency is highly impacted. We should investigate this further, because it looks to me as if the leadership was already well distributed most of the time, so I would expect no impact.
**General**

**Raft**
We can see in the Raft metrics that the Raft requests go down for a short period of time. Furthermore, we see only two places where leader changes were actually happening. The heartbeat misses are likely caused by the drop of append requests.


**Latency**
The process execution latency goes up to 10s, potentially due to the append request drop.

**To Reproduce**
Run a benchmark with the most recent changes https://github.com/zeebe-io/benchmark-helm/pull/7
**Expected behavior**
If the leadership is already well distributed, we shouldn't see any impact.
**Environment:**
- OS:
- Zeebe Version: 8.1.x, 8.2
- Configuration:
",1,calling rebalance api causes short disruption describe the bug based on our last hack week we added new functionality cronjob to our benchmarks setup which triggers the rebalance api continuously so we are sure that we have always good leader distribution i have run a test benchmark for a while and we can see that every api call triggers disruption still the avg performance looks ok the processing execution latency is also highly impacted we should investigate this further because it looks for me that most of the time the leadership was already well distributed so i would expect no impact general raft we can see in the raft metrics that the raft request are going down for a short period of time furthermore we see only two places were really leader changes were happening the heartbeat misses are likely to caused by the drop of append requests latency the process execution latency goes up to potentially due to the append request drop to reproduce run a benchmark with the most recent changes steps to reproduce the behavior if possible add a minimal reproducer code sample when using the java client expected behavior if the leader distribution is already well distributed we shouldn t see any impact environment os zeebe version x configuration ,1
673206,22952773857.0,IssuesEvent,2022-07-19 08:54:01,AbsaOSS/enceladus,https://api.github.com/repos/AbsaOSS/enceladus,opened,Use Spark-Commons 0.3.0,refactoring feature Conformance Standardization priority: high,"## Background
_Spark-Commons 0.3.0_ has been released.
## Feature
Upgrade and utilize the new version of _Spark-Commons_
",1.0,"Use Spark-Commons 0.3.0 - ## Background
_Spark-Commons 0.3.0_ has been released.
## Feature
Upgrade and utilize the new version of _Spark-Commons_
",0,use spark commons background spark commons has been released feature upgrade and utilize the new version of spark commons ,0
1613,17534301512.0,IssuesEvent,2021-08-12 03:40:01,ppy/osu,https://api.github.com/repos/ppy/osu,closed,Crash in editor when making changes to hitobjects,missing details area:editor type:reliability,"May be related to autoplay generation or some other weirdness.
https://user-images.githubusercontent.com/191335/112435722-76046400-8d88-11eb-94ac-3daca293bdc1.mp4
```
[runtime] 2021-03-25 06:30:55 [error]: An unhandled error has occurred.
[runtime] 2021-03-25 06:30:55 [error]: System.InvalidOperationException: A DrawableSliderHead was hit before it became hittable!
[runtime] 2021-03-25 06:30:55 [error]: at osu.Game.Rulesets.Osu.UI.StartTimeOrderedHitPolicy.HandleHit(DrawableHitObject hitObject) in /Users/dean/Projects/osu/osu.Game.Rulesets.Osu/UI/StartTimeOrderedHitPolicy.cs:line 53
[runtime] 2021-03-25 06:30:55 [error]: at osu.Game.Rulesets.Osu.UI.OsuPlayfield.onNewResult(DrawableHitObject judgedObject, JudgementResult result) in /Users/dean/Projects/osu/osu.Game.Rulesets.Osu/UI/OsuPlayfield.cs:line 142
[runtime] 2021-03-25 06:30:55 [error]: at osu.Game.Rulesets.UI.Playfield.<.ctor>b__27_2(DrawableHitObject d, JudgementResult r) in /Users/dean/Projects/osu/osu.Game/Rulesets/UI/Playfield.cs:line 103
[runtime] 2021-03-25 06:30:55 [error]: at osu.Game.Rulesets.UI.HitObjectContainer.onNewResult(DrawableHitObject d, JudgementResult r) in /Users/dean/Projects/osu/osu.Game/Rulesets/UI/HitObjectContainer.cs:line 213
[runtime] 2021-03-25 06:30:55 [error]: at osu.Game.Rulesets.Objects.Drawables.DrawableHitObject.onNewResult(DrawableHitObject drawableHitObject, JudgementResult result) in /Users/dean/Projects/osu/osu.Game/Rulesets/Objects/Drawables/DrawableHitObject.cs:line 384
[runtime] 2021-03-25 06:30:55 [error]: at osu.Game.Rulesets.Objects.Drawables.DrawableHitObject.ApplyResult(Action`1 application) in /Users/dean/Projects/osu/osu.Game/Rulesets/Objects/Drawables/DrawableHitObject.cs:line 768
[runtime] 2021-03-25 06:30:55 [error]: at osu.Game.Rulesets.Osu.Objects.Drawables.DrawableHitCircle.CheckForResult(Boolean userTriggered, Double timeOffset) in /Users/dean/Projects/osu/osu.Game.Rulesets.Osu/Objects/Drawables/DrawableHitCircle.cs:line 134
[runtime] 2021-03-25 06:30:55 [error]: at osu.Game.Rulesets.Objects.Drawables.DrawableHitObject.UpdateResult(Boolean userTriggered) in /Users/dean/Projects/osu/osu.Game/Rulesets/Objects/Drawables/DrawableHitObject.cs:line 785
[runtime] 2021-03-25 06:30:55 [error]: at osu.Game.Rulesets.Osu.Objects.Drawables.DrawableHitCircle.b__20_4() in /Users/dean/Projects/osu/osu.Game.Rulesets.Osu/Objects/Drawables/DrawableHitCircle.cs:line 65
[runtime] 2021-03-25 06:30:55 [error]: at osu.Game.Rulesets.Osu.Objects.Drawables.DrawableHitCircle.HitReceptor.OnPressed(OsuAction action) in /Users/dean/Projects/osu/osu.Game.Rulesets.Osu/Objects/Drawables/DrawableHitCircle.cs:line 225
```
",True,"Crash in editor when making changes to hitobjects - May be related to autoplay generation or some other weirdness.
https://user-images.githubusercontent.com/191335/112435722-76046400-8d88-11eb-94ac-3daca293bdc1.mp4
```
[runtime] 2021-03-25 06:30:55 [error]: An unhandled error has occurred.
[runtime] 2021-03-25 06:30:55 [error]: System.InvalidOperationException: A DrawableSliderHead was hit before it became hittable!
[runtime] 2021-03-25 06:30:55 [error]: at osu.Game.Rulesets.Osu.UI.StartTimeOrderedHitPolicy.HandleHit(DrawableHitObject hitObject) in /Users/dean/Projects/osu/osu.Game.Rulesets.Osu/UI/StartTimeOrderedHitPolicy.cs:line 53
[runtime] 2021-03-25 06:30:55 [error]: at osu.Game.Rulesets.Osu.UI.OsuPlayfield.onNewResult(DrawableHitObject judgedObject, JudgementResult result) in /Users/dean/Projects/osu/osu.Game.Rulesets.Osu/UI/OsuPlayfield.cs:line 142
[runtime] 2021-03-25 06:30:55 [error]: at osu.Game.Rulesets.UI.Playfield.<.ctor>b__27_2(DrawableHitObject d, JudgementResult r) in /Users/dean/Projects/osu/osu.Game/Rulesets/UI/Playfield.cs:line 103
[runtime] 2021-03-25 06:30:55 [error]: at osu.Game.Rulesets.UI.HitObjectContainer.onNewResult(DrawableHitObject d, JudgementResult r) in /Users/dean/Projects/osu/osu.Game/Rulesets/UI/HitObjectContainer.cs:line 213
[runtime] 2021-03-25 06:30:55 [error]: at osu.Game.Rulesets.Objects.Drawables.DrawableHitObject.onNewResult(DrawableHitObject drawableHitObject, JudgementResult result) in /Users/dean/Projects/osu/osu.Game/Rulesets/Objects/Drawables/DrawableHitObject.cs:line 384
[runtime] 2021-03-25 06:30:55 [error]: at osu.Game.Rulesets.Objects.Drawables.DrawableHitObject.ApplyResult(Action`1 application) in /Users/dean/Projects/osu/osu.Game/Rulesets/Objects/Drawables/DrawableHitObject.cs:line 768
[runtime] 2021-03-25 06:30:55 [error]: at osu.Game.Rulesets.Osu.Objects.Drawables.DrawableHitCircle.CheckForResult(Boolean userTriggered, Double timeOffset) in /Users/dean/Projects/osu/osu.Game.Rulesets.Osu/Objects/Drawables/DrawableHitCircle.cs:line 134
[runtime] 2021-03-25 06:30:55 [error]: at osu.Game.Rulesets.Objects.Drawables.DrawableHitObject.UpdateResult(Boolean userTriggered) in /Users/dean/Projects/osu/osu.Game/Rulesets/Objects/Drawables/DrawableHitObject.cs:line 785
[runtime] 2021-03-25 06:30:55 [error]: at osu.Game.Rulesets.Osu.Objects.Drawables.DrawableHitCircle.b__20_4() in /Users/dean/Projects/osu/osu.Game.Rulesets.Osu/Objects/Drawables/DrawableHitCircle.cs:line 65
[runtime] 2021-03-25 06:30:55 [error]: at osu.Game.Rulesets.Osu.Objects.Drawables.DrawableHitCircle.HitReceptor.OnPressed(OsuAction action) in /Users/dean/Projects/osu/osu.Game.Rulesets.Osu/Objects/Drawables/DrawableHitCircle.cs:line 225
```
",1,crash in editor when making changes to hitobjects may be related to autoplay generation or some other weirdness an unhandled error has occurred system invalidoperationexception a drawablesliderhead was hit before it became hittable at osu game rulesets osu ui starttimeorderedhitpolicy handlehit drawablehitobject hitobject in users dean projects osu osu game rulesets osu ui starttimeorderedhitpolicy cs line at osu game rulesets osu ui osuplayfield onnewresult drawablehitobject judgedobject judgementresult result in users dean projects osu osu game rulesets osu ui osuplayfield cs line at osu game rulesets ui playfield b drawablehitobject d judgementresult r in users dean projects osu osu game rulesets ui playfield cs line at osu game rulesets ui hitobjectcontainer onnewresult drawablehitobject d judgementresult r in users dean projects osu osu game rulesets ui hitobjectcontainer cs line at osu game rulesets objects drawables drawablehitobject onnewresult drawablehitobject drawablehitobject judgementresult result in users dean projects osu osu game rulesets objects drawables drawablehitobject cs line at osu game rulesets objects drawables drawablehitobject applyresult action application in users dean projects osu osu game rulesets objects drawables drawablehitobject cs line at osu game rulesets osu objects drawables drawablehitcircle checkforresult boolean usertriggered double timeoffset in users dean projects osu osu game rulesets osu objects drawables drawablehitcircle cs line at osu game rulesets objects drawables drawablehitobject updateresult boolean usertriggered in users dean projects osu osu game rulesets objects drawables drawablehitobject cs line at osu game rulesets osu objects drawables drawablehitcircle b in users dean projects osu osu game rulesets osu objects drawables drawablehitcircle cs line at osu game rulesets osu objects drawables drawablehitcircle hitreceptor onpressed osuaction action in users dean projects osu osu game rulesets osu objects drawables drawablehitcircle cs line ,1
2462,25549869379.0,IssuesEvent,2022-11-29 22:24:24,NVIDIA/spark-rapids,https://api.github.com/repos/NVIDIA/spark-rapids,opened,[FEA] Port parts of ShuffleSuite to test the RapidsShuffleManager,feature request ? - Needs Triage shuffle reliability,"There are suites in Spark: https://github.com/apache/spark/blob/master/core/src/test/scala/org/apache/spark/ShuffleSuite.scala and https://github.com/apache/spark/blob/master/core/src/test/scala/org/apache/spark/SortShuffleSuite.scala that would be great to port (at least partially) to the plugin to test against `RapidsShuffleManager` to prevent issues like https://github.com/NVIDIA/spark-rapids/pull/7199 from going unnoticed.
I don't expect this to be a huge amount of work, but I think it's valuable.",True,"[FEA] Port parts of ShuffleSuite to test the RapidsShuffleManager - There are suites in Spark: https://github.com/apache/spark/blob/master/core/src/test/scala/org/apache/spark/ShuffleSuite.scala and https://github.com/apache/spark/blob/master/core/src/test/scala/org/apache/spark/SortShuffleSuite.scala that would be great to port (at least partially) to the plugin to test against `RapidsShuffleManager` to prevent issues like https://github.com/NVIDIA/spark-rapids/pull/7199 from going unnoticed.
I don't expect this to be a huge amount of work, but I think it's valuable.",1, port parts of shufflesuite to test the rapidsshufflemanager there are suites in spark and that would be great to port at least partially to the plugin to test against rapidsshufflemanager to prevent issues like from going unnoticed i don t expect this to be a huge amount of work but i think it s valuable ,1
2575,26464724346.0,IssuesEvent,2023-01-16 21:45:51,microsoft/azuredatastudio,https://api.github.com/repos/microsoft/azuredatastudio,closed,Database List Fails to Load,Bug Area - Object Explorer Impact: Reliability,"Issue Type: Bug
1. Connect to Server with 5000 databases (Maximum allowed in Azure).
2. Attempt to use Servers->Databases.
3. Error will show ""Object Explorer task didn't complete within 45 seconds""
I had already lengthened the timeout in the advanced properties of the connection to 120 seconds, but to no avail. It doesn't seem like listing the DBs should take any appreciable amount of time unless it's trying to run some metadata queries on each one too.
Azure Data Studio version: azuredatastudio 1.3.1-insider (bd53e685d0091f9fe09c2662c30e0e506c402561, 2018-11-09T19:34:35.108Z)
OS version: Windows_NT x64 10.0.17134
System Info
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz (8 x 2904)|
|GPU Status|2d_canvas: enabled checker_imaging: disabled_off flash_3d: enabled flash_stage3d: enabled flash_stage3d_baseline: enabled gpu_compositing: enabled multiple_raster_threads: enabled_on native_gpu_memory_buffers: disabled_software rasterization: enabled video_decode: enabled video_encode: enabled webgl: enabled webgl2: enabled|
|Memory (System)|31.85GB (12.49GB free)|
|Process Argv|C:\Program Files\Azure Data Studio\azuredatastudio.exe|
|Screen Reader|no|
|VM|0%|
Extensions: none
",True,"Database List Fails to Load - Issue Type: Bug
1. Connect to Server with 5000 databases (Maximum allowed in Azure).
2. Attempt to use Servers->Databases.
3. Error will show ""Object Explorer task didn't complete within 45 seconds""
I had already lengthened the timeout in the advanced properties of the connection to 120 seconds, but to no avail. It doesn't seem like listing the DBs should take any appreciable amount of time unless it's trying to run some metadata queries on each one too.
Azure Data Studio version: azuredatastudio 1.3.1-insider (bd53e685d0091f9fe09c2662c30e0e506c402561, 2018-11-09T19:34:35.108Z)
OS version: Windows_NT x64 10.0.17134
System Info
|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz (8 x 2904)|
|GPU Status|2d_canvas: enabled checker_imaging: disabled_off flash_3d: enabled flash_stage3d: enabled flash_stage3d_baseline: enabled gpu_compositing: enabled multiple_raster_threads: enabled_on native_gpu_memory_buffers: disabled_software rasterization: enabled video_decode: enabled video_encode: enabled webgl: enabled webgl2: enabled|
|Memory (System)|31.85GB (12.49GB free)|
|Process Argv|C:\Program Files\Azure Data Studio\azuredatastudio.exe|
|Screen Reader|no|
|VM|0%|
Extensions: none
",1,database list fails to load issue type bug connect to server with databases maximum allowed in azure attempt to use servers databases error will show object exporer task didn t complete within seconds i had already lengthened the timeout in the advance properties of the connection to seconds but to no avail it doesn t seem like listing the dbs should take any appreciable amount of time unless it s trying to run some metadata queries on each one too azure data studio version azuredatastudio insider os version windows nt system info item value cpus intel r core tm cpu x gpu status canvas enabled checker imaging disabled off flash enabled flash enabled flash baseline enabled gpu compositing enabled multiple raster threads enabled on native gpu memory buffers disabled software rasterization enabled video decode enabled video encode enabled webgl enabled enabled memory system free process argv c program files azure data studio azuredatastudio exe screen reader no vm extensions none ,1
46759,13055971615.0,IssuesEvent,2020-07-30 03:16:23,icecube-trac/tix2,https://api.github.com/repos/icecube-trac/tix2,opened,[Serialization] (boost)serialization of raw and shared_ptr is broken (Trac #1832),Incomplete Migration Migrated from Trac combo core defect,"Migrated from https://code.icecube.wisc.edu/ticket/1832
```json
{
""status"": ""closed"",
""changetime"": ""2019-02-13T14:12:54"",
""description"": ""the serialization of shared pointers through the icecube::serialization interface is not working.\n\nerror below.\n\ncweaver suspects it hanging on the 'enable_if' switch.\n\n{{{\n[ 95%] Building CXX object IceHiveZ/CMakeFiles/IceHiveZ.dir/private/IceHiveZ/internals/Relation.cxx.o\nIn file included from /data/user/mzoll/i3/meta-projects/icerec/trunk/src/IceHiveZ/private/IceHiveZ/internals/Relation.cxx:12:\nIn file included from /data/user/mzoll/i3/meta-projects/icerec/trunk/src/IceHiveZ/private/IceHiveZ/internals/Relation.h:20:\nIn file included from /data/user/mzoll/i3/meta-projects/icerec/trunk/src/icetray/public/icetray/serialization.h:32:\nIn file included from /data/user/mzoll/i3/meta-projects/icerec/trunk/src/icetray/public/icetray/portable_binary_archive.hpp:8:\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/serialization/shared_ptr_helper.hpp:181:30: error: no matching\n member function for call to 'insert'\n result = m_o_sp->insert(std::make_pair(oid, s));\n ~~~~~~~~^~~~~~\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/serialization/shared_ptr.hpp:171:7: note: in instantiation of\n function template specialization 'icecube::serialization::shared_ptr_helper::reset' requested here\n h.reset(t,r); \n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/serialization/split_free.hpp:58:9: note: in instantiation of\n function template specialization 'icecube::serialization::load' requested here\n load(ar, t, v);\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/serialization/split_free.hpp:74:12: note: in instantiation of member\n function 'icecube::serialization::free_loader >::invoke' requested here\n typex::invoke(ar, t, file_version);\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/serialization/shared_ptr.hpp:187:29: note: in instantiation of\n function template specialization 'icecube::serialization::split_free 
>' requested here\n icecube::serialization::split_free(ar, t, file_version);\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/serialization/serialization.hpp:128:9: note: in instantiation of\n function template specialization 'icecube::serialization::serialize' requested here\n serialize(ar, t, v);\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/archive/detail/iserializer.hpp:179:29: note: (skipping 20 contexts\n in backtrace; use -ftemplate-backtrace-limit=0 to see all)\n icecube::serialization::serialize_adl(\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/archive/detail/common_iarchive.hpp:66:18: note: in instantiation of\n function template specialization 'icecube::archive::load > >' requested here\n archive::load(* this->This(), t);\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/icetray/public/icetray/portable_binary_archive.hpp:144:10: note: in instantiation of\n function template specialization\n 'icecube::archive::detail::common_iarchive::load_override > >' requested here\n ::load_override(t, static_cast(version));\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/archive/detail/interface_iarchive.hpp:60:23: note: in instantiation\n of function template specialization 'icecube::archive::portable_binary_iarchive::load_override > >' requested here\n this->This()->load_override(t, 0);\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/archive/detail/interface_iarchive.hpp:67:32: note: in instantiation\n of function template specialization\n 'icecube::archive::detail::interface_iarchive::operator>> > >' requested here\n return *(this->This()) >> t;\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/IceHiveZ/private/IceHiveZ/internals/Relation.h:150:6: note: in instantiation of function\n template specialization 'icecube::archive::detail::interface_iarchive::operator& > >' requested here\n ar & 
icecube::serialization::make_nvp(\""hasher\"",hasher_);\n ^\n/cvmfs/icecube.opensciencegrid.org/py2-v3_early_access/RHEL_6_x86_64/bin/../include/c++/v1/map:1089:9: note: candidate function not viable:\n no known conversion from 'pair::type, shared_ptr>' to 'const pair>' for 1st argument\n insert(const value_type& __v) {return __tree_.__insert_unique(__v);}\n ^\n/cvmfs/icecube.opensciencegrid.org/py2-v3_early_access/RHEL_6_x86_64/bin/../include/c++/v1/map:1118:10: note: candidate function not viable:\n no known conversion from 'pair::type, typename __make_pair_return &>::type>' (aka 'pair >') to\n 'initializer_list' (aka 'initializer_list > >') for 1st argument\n void insert(initializer_list __il)\n ^\n/cvmfs/icecube.opensciencegrid.org/py2-v3_early_access/RHEL_6_x86_64/bin/../include/c++/v1/map:1074:42: note: candidate template ignored:\n disabled by 'enable_if' [with _Pp = std::__1::pair >]\n class = typename enable_if::value>::type>\n ^\n/cvmfs/icecube.opensciencegrid.org/py2-v3_early_access/RHEL_6_x86_64/bin/../include/c++/v1/map:1082:18: note: candidate function template\n not viable: requires 2 arguments, but 1 was provided\n iterator insert(const_iterator __pos, _Pp&& __p)\n ^\n/cvmfs/icecube.opensciencegrid.org/py2-v3_early_access/RHEL_6_x86_64/bin/../include/c++/v1/map:1109:14: note: candidate function template\n not viable: requires 2 arguments, but 1 was provided\n void insert(_InputIterator __f, _InputIterator __l)\n ^\n/cvmfs/icecube.opensciencegrid.org/py2-v3_early_access/RHEL_6_x86_64/bin/../include/c++/v1/map:1093:9: note: candidate function not viable:\n requires 2 arguments, but 1 was provided\n insert(const_iterator __p, const value_type& __v)\n}}}"",
""reporter"": ""mzoll"",
""cc"": ""olivas"",
""resolution"": ""invalid"",
""_ts"": ""1550067174476394"",
""component"": ""combo core"",
""summary"": ""[Serialization] (boost)serialization of raw and shared_ptr is broken"",
""priority"": ""blocker"",
""keywords"": ""serilaization shared_ptr"",
""time"": ""2016-08-19T19:46:49"",
""milestone"": """",
""owner"": ""cweaver"",
""type"": ""defect""
}
```
",1.0,"[Serialization] (boost)serialization of raw and shared_ptr is broken (Trac #1832) - Migrated from https://code.icecube.wisc.edu/ticket/1832
```json
{
""status"": ""closed"",
""changetime"": ""2019-02-13T14:12:54"",
""description"": ""the serialization of shared pointers through the icecube::serialization interface is not working.\n\nerror below.\n\ncweaver suspects it hanging on the 'enable_if' switch.\n\n{{{\n[ 95%] Building CXX object IceHiveZ/CMakeFiles/IceHiveZ.dir/private/IceHiveZ/internals/Relation.cxx.o\nIn file included from /data/user/mzoll/i3/meta-projects/icerec/trunk/src/IceHiveZ/private/IceHiveZ/internals/Relation.cxx:12:\nIn file included from /data/user/mzoll/i3/meta-projects/icerec/trunk/src/IceHiveZ/private/IceHiveZ/internals/Relation.h:20:\nIn file included from /data/user/mzoll/i3/meta-projects/icerec/trunk/src/icetray/public/icetray/serialization.h:32:\nIn file included from /data/user/mzoll/i3/meta-projects/icerec/trunk/src/icetray/public/icetray/portable_binary_archive.hpp:8:\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/serialization/shared_ptr_helper.hpp:181:30: error: no matching\n member function for call to 'insert'\n result = m_o_sp->insert(std::make_pair(oid, s));\n ~~~~~~~~^~~~~~\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/serialization/shared_ptr.hpp:171:7: note: in instantiation of\n function template specialization 'icecube::serialization::shared_ptr_helper::reset' requested here\n h.reset(t,r); \n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/serialization/split_free.hpp:58:9: note: in instantiation of\n function template specialization 'icecube::serialization::load' requested here\n load(ar, t, v);\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/serialization/split_free.hpp:74:12: note: in instantiation of member\n function 'icecube::serialization::free_loader >::invoke' requested here\n typex::invoke(ar, t, file_version);\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/serialization/shared_ptr.hpp:187:29: note: in instantiation of\n function template specialization 'icecube::serialization::split_free 
>' requested here\n icecube::serialization::split_free(ar, t, file_version);\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/serialization/serialization.hpp:128:9: note: in instantiation of\n function template specialization 'icecube::serialization::serialize' requested here\n serialize(ar, t, v);\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/archive/detail/iserializer.hpp:179:29: note: (skipping 20 contexts\n in backtrace; use -ftemplate-backtrace-limit=0 to see all)\n icecube::serialization::serialize_adl(\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/archive/detail/common_iarchive.hpp:66:18: note: in instantiation of\n function template specialization 'icecube::archive::load > >' requested here\n archive::load(* this->This(), t);\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/icetray/public/icetray/portable_binary_archive.hpp:144:10: note: in instantiation of\n function template specialization\n 'icecube::archive::detail::common_iarchive::load_override > >' requested here\n ::load_override(t, static_cast(version));\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/archive/detail/interface_iarchive.hpp:60:23: note: in instantiation\n of function template specialization 'icecube::archive::portable_binary_iarchive::load_override > >' requested here\n this->This()->load_override(t, 0);\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/serialization/public/archive/detail/interface_iarchive.hpp:67:32: note: in instantiation\n of function template specialization\n 'icecube::archive::detail::interface_iarchive::operator>> > >' requested here\n return *(this->This()) >> t;\n ^\n/data/user/mzoll/i3/meta-projects/icerec/trunk/src/IceHiveZ/private/IceHiveZ/internals/Relation.h:150:6: note: in instantiation of function\n template specialization 'icecube::archive::detail::interface_iarchive::operator& > >' requested here\n ar & 
icecube::serialization::make_nvp(\""hasher\"",hasher_);\n ^\n/cvmfs/icecube.opensciencegrid.org/py2-v3_early_access/RHEL_6_x86_64/bin/../include/c++/v1/map:1089:9: note: candidate function not viable:\n no known conversion from 'pair::type, shared_ptr>' to 'const pair>' for 1st argument\n insert(const value_type& __v) {return __tree_.__insert_unique(__v);}\n ^\n/cvmfs/icecube.opensciencegrid.org/py2-v3_early_access/RHEL_6_x86_64/bin/../include/c++/v1/map:1118:10: note: candidate function not viable:\n no known conversion from 'pair::type, typename __make_pair_return &>::type>' (aka 'pair >') to\n 'initializer_list' (aka 'initializer_list > >') for 1st argument\n void insert(initializer_list __il)\n ^\n/cvmfs/icecube.opensciencegrid.org/py2-v3_early_access/RHEL_6_x86_64/bin/../include/c++/v1/map:1074:42: note: candidate template ignored:\n disabled by 'enable_if' [with _Pp = std::__1::pair >]\n class = typename enable_if::value>::type>\n ^\n/cvmfs/icecube.opensciencegrid.org/py2-v3_early_access/RHEL_6_x86_64/bin/../include/c++/v1/map:1082:18: note: candidate function template\n not viable: requires 2 arguments, but 1 was provided\n iterator insert(const_iterator __pos, _Pp&& __p)\n ^\n/cvmfs/icecube.opensciencegrid.org/py2-v3_early_access/RHEL_6_x86_64/bin/../include/c++/v1/map:1109:14: note: candidate function template\n not viable: requires 2 arguments, but 1 was provided\n void insert(_InputIterator __f, _InputIterator __l)\n ^\n/cvmfs/icecube.opensciencegrid.org/py2-v3_early_access/RHEL_6_x86_64/bin/../include/c++/v1/map:1093:9: note: candidate function not viable:\n requires 2 arguments, but 1 was provided\n insert(const_iterator __p, const value_type& __v)\n}}}"",
""reporter"": ""mzoll"",
""cc"": ""olivas"",
""resolution"": ""invalid"",
""_ts"": ""1550067174476394"",
""component"": ""combo core"",
""summary"": ""[Serialization] (boost)serialization of raw and shared_ptr is broken"",
""priority"": ""blocker"",
""keywords"": ""serilaization shared_ptr"",
""time"": ""2016-08-19T19:46:49"",
""milestone"": """",
""owner"": ""cweaver"",
""type"": ""defect""
}
```
",0, boost serialization of raw and shared ptr is broken trac migrated from json status closed changetime description the serialization of shared pointers through the icecube serialization interface is not working n nerror below n ncweaver suspects it hanging on the enable if switch n n n building cxx object icehivez cmakefiles icehivez dir private icehivez internals relation cxx o nin file included from data user mzoll meta projects icerec trunk src icehivez private icehivez internals relation cxx nin file included from data user mzoll meta projects icerec trunk src icehivez private icehivez internals relation h nin file included from data user mzoll meta projects icerec trunk src icetray public icetray serialization h nin file included from data user mzoll meta projects icerec trunk src icetray public icetray portable binary archive hpp n data user mzoll meta projects icerec trunk src serialization public serialization shared ptr helper hpp error no matching n member function for call to insert n result m o sp insert std make pair oid s n n data user mzoll meta projects icerec trunk src serialization public serialization shared ptr hpp note in instantiation of n function template specialization icecube serialization shared ptr helper reset requested here n h reset t r n n data user mzoll meta projects icerec trunk src serialization public serialization split free hpp note in instantiation of n function template specialization icecube serialization load requested here n load ar t v n n data user mzoll meta projects icerec trunk src serialization public serialization split free hpp note in instantiation of member n function icecube serialization free loader invoke requested here n typex invoke ar t file version n n data user mzoll meta projects icerec trunk src serialization public serialization shared ptr hpp note in instantiation of n function template specialization icecube serialization split free requested here n icecube serialization split free ar t file 
version n n data user mzoll meta projects icerec trunk src serialization public serialization serialization hpp note in instantiation of n function template specialization icecube serialization serialize requested here n serialize ar t v n n data user mzoll meta projects icerec trunk src serialization public archive detail iserializer hpp note skipping contexts n in backtrace use ftemplate backtrace limit to see all n icecube serialization serialize adl n n data user mzoll meta projects icerec trunk src serialization public archive detail common iarchive hpp note in instantiation of n function template specialization icecube archive load requested here n archive load this this t n n data user mzoll meta projects icerec trunk src icetray public icetray portable binary archive hpp note in instantiation of n function template specialization n icecube archive detail common iarchive load override requested here n load override t static cast version n n data user mzoll meta projects icerec trunk src serialization public archive detail interface iarchive hpp note in instantiation n of function template specialization icecube archive portable binary iarchive load override requested here n this this load override t n n data user mzoll meta projects icerec trunk src serialization public archive detail interface iarchive hpp note in instantiation n of function template specialization n icecube archive detail interface iarchive operator requested here n return this this t n n data user mzoll meta projects icerec trunk src icehivez private icehivez internals relation h note in instantiation of function n template specialization icecube archive detail interface iarchive operator requested here n ar icecube serialization make nvp hasher hasher n n cvmfs icecube opensciencegrid org early access rhel bin include c map note candidate function not viable n no known conversion from pair type shared ptr to const pair for argument n insert const value type v return tree insert unique v 
n n cvmfs icecube opensciencegrid org early access rhel bin include c map note candidate function not viable n no known conversion from pair type typename make pair return type aka pair to n initializer list aka initializer list for argument n void insert initializer list il n n cvmfs icecube opensciencegrid org early access rhel bin include c map note candidate template ignored n disabled by enable if n class typename enable if value type n n cvmfs icecube opensciencegrid org early access rhel bin include c map note candidate function template n not viable requires arguments but was provided n iterator insert const iterator pos pp p n n cvmfs icecube opensciencegrid org early access rhel bin include c map note candidate function template n not viable requires arguments but was provided n void insert inputiterator f inputiterator l n n cvmfs icecube opensciencegrid org early access rhel bin include c map note candidate function not viable n requires arguments but was provided n insert const iterator p const value type v n reporter mzoll cc olivas resolution invalid ts component combo core summary boost serialization of raw and shared ptr is broken priority blocker keywords serilaization shared ptr time milestone owner cweaver type defect ,0
424340,12309624792.0,IssuesEvent,2020-05-12 09:15:35,our-city-app/mobicage-backend,https://api.github.com/repos/our-city-app/mobicage-backend,closed,Document feed_names in news.publish,priority_minor,"---
Migrated from https://github.com/rogerthat-platform/rogerthat-backend/issues/514
Originally created by @bart-at-mobicage on *Thu, 07 Jun 2018 12:28:12 GMT*
---
Document `feed_names` and other new properties in `news.publish`",1.0,"Document feed_names in news.publish - ---
Migrated from https://github.com/rogerthat-platform/rogerthat-backend/issues/514
Originally created by @bart-at-mobicage on *Thu, 07 Jun 2018 12:28:12 GMT*
---
Document `feed_names` and other new properties in `news.publish`",0,document feed names in news publish migrated from originally created by bart at mobicage on thu jun gmt document feed names and other new properties in news publish ,0
115515,4675404112.0,IssuesEvent,2016-10-07 07:40:01,CS2103AUG2016-W14-C3/main,https://api.github.com/repos/CS2103AUG2016-W14-C3/main,opened,"As a user, I can list out all supported commands and how to use them",priority.high type.story,... so that I know what I can do with the application,1.0,"As a user, I can list out all supported commands and how to use them - ... so that I know what I can do with the application",0,as a user i can list out all supported commands and how to use them so that i know what i can do with the application,0
38914,6714152529.0,IssuesEvent,2017-10-13 15:48:25,me-box/databox,https://api.github.com/repos/me-box/databox,closed,"SDK missing documentation, including smoke test guide",area/documentation area/sdk,"Using the SDK I don't know what components *should* work in the test or app view.
E.g. a simple set of instructions that would make a ""known good"" app with some indication of what ""success"" should look like, both test and on databox would be great.",1.0,"SDK missing documentation, including smoke test guide - Using the SDK I don't know what components *should* work in the test or app view.
E.g. a simple set of instructions that would make a ""known good"" app with some indication of what ""success"" should look like, both test and on databox would be great.",0,sdk missing documentation including smoke test guide using the sdk i don t know what components should work in the test or app view e g a simple set of instructions that would make a known good app with some indication of what success should look like both test and on databox would be great ,0
57525,11763544349.0,IssuesEvent,2020-03-14 07:50:22,nim-lang/Nim,https://api.github.com/repos/nim-lang/Nim,opened,Range types always uses signed integer as a base type,Codegen Range,"Range always uses a signed integer as the base type in the codegen.
Example:
```Nim
type
BaseUint* = SomeUnsignedInt or byte
Ct*[T: BaseUint] = distinct T
## Constant-Time wrapper
## Only constant-time operations in particular the ternary operator equivalent
## condition: if true: a else: b
## are allowed
CTBool*[T: Ct] = distinct range[T(0)..T(1)]
## Constant-Time boolean wrapper
var x: array[8, CTBool[Ct[uint32]]]
x[0] = (CTBool[Ct[uint32]])(1)
echo x.repr
```
C code
```C
// [...]
typedef NI tyArray__tvxsR2G9chmuUgp9afapiQkg[8];
// [...]
N_LIB_PRIVATE N_NIMCALL(void, NimMainModule)(void) {
{
tyArray__nHXaesL0DJZHyVS07ARPRA T1_;
nimfr_(""range_unsigned"", ""/home/beta/Programming/Nim/constantine/build/range_unsigned.nim"");
nimln_(13, ""/home/beta/Programming/Nim/constantine/build/range_unsigned.nim"");
x__C3MQJCEokeOV37kXufP37g[(((NI) 0))- 0] = ((NI) 1);
nimln_(14, ""/home/beta/Programming/Nim/constantine/build/range_unsigned.nim"");
nimZeroMem((void*)T1_, sizeof(tyArray__nHXaesL0DJZHyVS07ARPRA));
T1_[0] = reprAny(x__C3MQJCEokeOV37kXufP37g, (&NTI__tvxsR2G9chmuUgp9afapiQkg_));
echoBinSafe(T1_, 1);
popFrame();
}
}
// [...]
```
This causes the following issues; note that this type is used in a cryptographic library for BigInt:
- The CTBool should be the same size as the base word; if the base word is uint32 on an x86-64 CPU, it is not.
- This causes issues for inline assembly: with a 32-bit word we expect `testl` + `cmovl` to work,
but they only work on a 32-bit-wide operand.
```Nim
func mux*[T](ctl: CTBool[T], x, y: T): T {.inline.}=
## Multiplexer / selector
## Returns x if ctl is true
## else returns y
## So equivalent to ctl? x: y
#
# TODO verify assembly generated
# Alternatives:
# - https://cryptocoding.net/index.php/Coding_rules
# - https://www.cl.cam.ac.uk/~rja14/Papers/whatyouc.pdf
when defined(amd64) or defined(i386):
when sizeof(T) == 8:
var muxed = x
asm """"""
testq %[ctl], %[ctl]
cmovzq %[y], %[muxed]
: [muxed] ""+r"" (`muxed`)
: [ctl] ""r"" (`ctl`), [y] ""r"" (`y`)
: ""cc""
""""""
muxed
elif sizeof(T) == 4:
var muxed = x
asm """"""
testl %[ctl], %[ctl]
cmovzl %[y], %[muxed]
: [muxed] ""+r"" (`muxed`)
: [ctl] ""r"" (`ctl`), [y] ""r"" (`y`)
: ""cc""
""""""
muxed
else:
{.error: ""Unsupported word size"".}
else:
y xor (-T(ctl) and (x xor y))
```
- C compilers use the undefined behavior of signed int for optimization. As mentioned in
- https://www.cl.cam.ac.uk/~rja14/Papers/whatyouc.pdf
- https://cryptocoding.net/index.php/Coding_rules
Cryptography requires preventing several compiler optimizations as they would often expose information on potentially secret data. This requires careful usage and representation of conditionals.
While Nim using `int` instead of the specified `uint32` probably wouldn't lead to a secret data leak in this case, it would be far better if the requested base type were used.
- Functions that return or accept this CTBool will use 8 bytes instead of 4 bytes",1.0,"Range types always uses signed integer as a base type - Range always uses a signed integer as the base type in the codegen.
Example:
```Nim
type
BaseUint* = SomeUnsignedInt or byte
Ct*[T: BaseUint] = distinct T
## Constant-Time wrapper
## Only constant-time operations in particular the ternary operator equivalent
## condition: if true: a else: b
## are allowed
CTBool*[T: Ct] = distinct range[T(0)..T(1)]
## Constant-Time boolean wrapper
var x: array[8, CTBool[Ct[uint32]]]
x[0] = (CTBool[Ct[uint32]])(1)
echo x.repr
```
C code
```C
// [...]
typedef NI tyArray__tvxsR2G9chmuUgp9afapiQkg[8];
// [...]
N_LIB_PRIVATE N_NIMCALL(void, NimMainModule)(void) {
{
tyArray__nHXaesL0DJZHyVS07ARPRA T1_;
nimfr_(""range_unsigned"", ""/home/beta/Programming/Nim/constantine/build/range_unsigned.nim"");
nimln_(13, ""/home/beta/Programming/Nim/constantine/build/range_unsigned.nim"");
x__C3MQJCEokeOV37kXufP37g[(((NI) 0))- 0] = ((NI) 1);
nimln_(14, ""/home/beta/Programming/Nim/constantine/build/range_unsigned.nim"");
nimZeroMem((void*)T1_, sizeof(tyArray__nHXaesL0DJZHyVS07ARPRA));
T1_[0] = reprAny(x__C3MQJCEokeOV37kXufP37g, (&NTI__tvxsR2G9chmuUgp9afapiQkg_));
echoBinSafe(T1_, 1);
popFrame();
}
}
// [...]
```
This causes the following issues; note that this type is used in a cryptographic library for BigInt:
- The CTBool should be the same size as the base word; if the base word is uint32 on an x86-64 CPU, it is not.
- This causes issues for inline assembly: with a 32-bit word we expect `testl` + `cmovl` to work,
but they only work on a 32-bit-wide operand.
```Nim
func mux*[T](ctl: CTBool[T], x, y: T): T {.inline.}=
## Multiplexer / selector
## Returns x if ctl is true
## else returns y
## So equivalent to ctl? x: y
#
# TODO verify assembly generated
# Alternatives:
# - https://cryptocoding.net/index.php/Coding_rules
# - https://www.cl.cam.ac.uk/~rja14/Papers/whatyouc.pdf
when defined(amd64) or defined(i386):
when sizeof(T) == 8:
var muxed = x
asm """"""
testq %[ctl], %[ctl]
cmovzq %[y], %[muxed]
: [muxed] ""+r"" (`muxed`)
: [ctl] ""r"" (`ctl`), [y] ""r"" (`y`)
: ""cc""
""""""
muxed
elif sizeof(T) == 4:
var muxed = x
asm """"""
testl %[ctl], %[ctl]
cmovzl %[y], %[muxed]
: [muxed] ""+r"" (`muxed`)
: [ctl] ""r"" (`ctl`), [y] ""r"" (`y`)
: ""cc""
""""""
muxed
else:
{.error: ""Unsupported word size"".}
else:
y xor (-T(ctl) and (x xor y))
```
- C compilers use the undefined behavior of signed int for optimization. As mentioned in
- https://www.cl.cam.ac.uk/~rja14/Papers/whatyouc.pdf
- https://cryptocoding.net/index.php/Coding_rules
Cryptography requires preventing several compiler optimizations as they would often expose information on potentially secret data. This requires careful usage and representation of conditionals.
While Nim using `int` instead of the specified `uint32` probably wouldn't lead to a secret data leak in this case, it would be far better if the requested base type were used.
- Functions that return or accept this CTBool will use 8 bytes instead of 4 bytes",0,range types always uses signed integer as a base type range always use signed integer as the base type in the codegen example nim type baseuint someunsignedint or byte ct distinct t constant time wrapper only constant time operations in particular the ternary operator equivalent condition if true a else b are allowed ctbool distinct range constant time boolean wrapper var x array x ctbool echo x repr c code c typedef ni tyarray n lib private n nimcall void nimmainmodule void tyarray nimfr range unsigned home beta programming nim constantine build range unsigned nim nimln home beta programming nim constantine build range unsigned nim x ni nimln home beta programming nim constantine build range unsigned nim nimzeromem void sizeof tyarray reprany x nti echobinsafe popframe this causes the following issues note that this is used in a cryptographic library for bigint the ctbool should be the same size as the base word if the base word is on a cpu they are not this causes issues for inline assembly as with bit word we expect testl cmovl to work but they only work on bit width operand nim func mux ctl ctbool x y t t inline multiplexer selector returns x if ctl is true else returns y so equivalent to ctl x y todo verify assembly generated alternatives when defined or defined when sizeof t var muxed x asm testq cmovzq r muxed r ctl r y cc muxed elif sizeof t var muxed x asm testl cmovzl r muxed r ctl r y cc muxed else error unsupported word size else y xor t ctl and x xor y c compilers use the undefined behavior of signed int for optimization as mentioned in cryptography requires preventing several compiler optimizations as they would often expose information on potentially secret data this requires careful usage and representation of conditionals while nim int instead of the specified probably wouldn t lead to secret data leak in this case it would be far better if the requested base type 
was used functions that return or accept this ctbool will use bytes instead of bytes,0
2534,26121415525.0,IssuesEvent,2022-12-28 13:05:40,ppy/osu,https://api.github.com/repos/ppy/osu,closed,Various failures in solo statistics watcher's initial fetch,type:online type:reliability,"One is below. Another one I'll link in a second to the same issue to save the bureaucracy since they look potentially related.
---
Sentry Issue: [OSU-AAC](https://sentry.ppy.sh/organizations/ppy/issues/11737/?referrer=github_integration)
```
System.NullReferenceException: Object reference not set to an instance of an object.
?, in void SoloStatisticsWatcher.onUserChanged(APIUser localUser)+() => { }
?, in void ScheduledDelegate.RunTaskInternal()
?, in int Scheduler.Update()
?, in bool Drawable.UpdateSubTree()
?, in bool CompositeDrawable.UpdateSubTree() x 5
...
(2 additional frame(s) were not displayed)
An unhandled error has occurred.
```",True,"Various failures in solo statistics watcher's initial fetch - One is below. Another one I'll link in a second to the same issue to save the bureaucracy since they look potentially related.
---
Sentry Issue: [OSU-AAC](https://sentry.ppy.sh/organizations/ppy/issues/11737/?referrer=github_integration)
```
System.NullReferenceException: Object reference not set to an instance of an object.
?, in void SoloStatisticsWatcher.onUserChanged(APIUser localUser)+() => { }
?, in void ScheduledDelegate.RunTaskInternal()
?, in int Scheduler.Update()
?, in bool Drawable.UpdateSubTree()
?, in bool CompositeDrawable.UpdateSubTree() x 5
...
(2 additional frame(s) were not displayed)
An unhandled error has occurred.
```",1,various failures in solo statistics watcher s initial fetch one is below another one i ll link in a second to the same issue to save the bureaucracy since they look potentially related sentry issue system nullreferenceexception object reference not set to an instance of an object in void solostatisticswatcher onuserchanged apiuser localuser in void scheduleddelegate runtaskinternal in int scheduler update in bool drawable updatesubtree in bool compositedrawable updatesubtree x additional frame s were not displayed an unhandled error has occurred ,1
323475,27728540195.0,IssuesEvent,2023-03-15 05:38:44,kubernetes/kubernetes,https://api.github.com/repos/kubernetes/kubernetes,closed,Enable Aggregated Discovery for Beta failed e2e of Horizontal pod autoscaling ,sig/api-machinery sig/autoscaling kind/failing-test needs-triage,"### Failure cluster [75a2e7fe872595660e39](https://go.k8s.io/triage#75a2e7fe872595660e39)
##### Error text:
```
[FAILED] Timeout waiting 15m0s for 1 replicas
In [It] at: test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:612 @ 02/27/23 12:59:03.861
```
#### Recent failures:
[2023/3/13 00:34:29 ci-kubernetes-e2e-gci-gce-autoscaling-hpa-cm](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-autoscaling-hpa-cm/1634956015021592576)
[2023/3/13 00:23:31 ci-kubernetes-e2e-gci-gce-autoscaling-migs-hpa](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-autoscaling-migs-hpa/1634953246915170304)
[2023/3/12 20:52:28 ci-kubernetes-e2e-gci-gce-autoscaling-migs-hpa](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-autoscaling-migs-hpa/1634900146359635968)
[2023/3/12 18:50:28 ci-kubernetes-e2e-gci-gce-autoscaling-hpa-cm](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-autoscaling-hpa-cm/1634869444146630656)
[2023/3/12 17:21:29 ci-kubernetes-e2e-gci-gce-autoscaling-migs-hpa](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-autoscaling-migs-hpa/1634847045967679488)
/kind failing-test
/sig autoscaling
/cc @Jefftree ",1.0,"Enable Aggregated Discovery for Beta failed e2e of Horizontal pod autoscaling - ### Failure cluster [75a2e7fe872595660e39](https://go.k8s.io/triage#75a2e7fe872595660e39)
##### Error text:
```
[FAILED] Timeout waiting 15m0s for 1 replicas
In [It] at: test/e2e/autoscaling/custom_metrics_stackdriver_autoscaling.go:612 @ 02/27/23 12:59:03.861
```
#### Recent failures:
[2023/3/13 00:34:29 ci-kubernetes-e2e-gci-gce-autoscaling-hpa-cm](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-autoscaling-hpa-cm/1634956015021592576)
[2023/3/13 00:23:31 ci-kubernetes-e2e-gci-gce-autoscaling-migs-hpa](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-autoscaling-migs-hpa/1634953246915170304)
[2023/3/12 20:52:28 ci-kubernetes-e2e-gci-gce-autoscaling-migs-hpa](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-autoscaling-migs-hpa/1634900146359635968)
[2023/3/12 18:50:28 ci-kubernetes-e2e-gci-gce-autoscaling-hpa-cm](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-autoscaling-hpa-cm/1634869444146630656)
[2023/3/12 17:21:29 ci-kubernetes-e2e-gci-gce-autoscaling-migs-hpa](https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-autoscaling-migs-hpa/1634847045967679488)
/kind failing-test
/sig autoscaling
/cc @Jefftree ",0,enable aggregated discovery for beta failed of horizontal pod autoscaling failure cluster error text timeout waiting for replicas in at test autoscaling custom metrics stackdriver autoscaling go recent failures kind failing test sig autoscaling cc jefftree ,0
44499,12216133156.0,IssuesEvent,2020-05-01 14:31:07,NREL/EnergyPlus,https://api.github.com/repos/NREL/EnergyPlus,closed,Electric Chiller Forgets to Update Sometimes,Defect PriorityHigh SeverityHigh,"Issue overview
--------------
Possible related or duplicates:
- #7134
- #6404
While refactoring the electric chiller for PlantComponent in #7637, a bunch of time was ~~spent~~ wasted debugging stray diffs in CompSetPtControl.idf. As it turns out, even in develop that chiller model is misbehaving. Primarily due to forgetting to update a number of what used to be module-level shared variables. Overall, now that the chiller is refactored and cleaned up, someone needs to take a full pass to clean up all return paths to ensure variables are initialized/cleared wherever needed. This will indeed cause diffs, so this needs to be done in a dedicated branch, not in conjunction with larger work.
This can be seen by just looking at the CompSetPtControl.idf outputs, especially how the condenser flow and delta T do not agree for some timesteps.
(FYI @mjwitte @mitchute)",1.0,"Electric Chiller Forgets to Update Sometimes - Issue overview
--------------
Possible related or duplicates:
- #7134
- #6404
While refactoring the electric chiller for PlantComponent in #7637, a bunch of time was ~~spent~~ wasted debugging stray diffs in CompSetPtControl.idf. As it turns out, even in develop that chiller model is misbehaving. Primarily due to forgetting to update a number of what used to be module-level shared variables. Overall, now that the chiller is refactored and cleaned up, someone needs to take a full pass to clean up all return paths to ensure variables are initialized/cleared wherever needed. This will indeed cause diffs, so this needs to be done in a dedicated branch, not in conjunction with larger work.
This can be seen by just looking at the CompSetPtControl.idf outputs, especially how the condenser flow and delta T do not agree for some timesteps.
(FYI @mjwitte @mitchute)",0,electric chiller forgets to update sometimes issue overview possible related or duplicates while refactoring the electric chiller for plantcomponent in a bunch of time was spent wasted debugging stray diffs in compsetptcontrol idf as it turns out even in develop that chiller model is misbehaving primarily due to forgetting to update a number of what used to be module level shared variables overall now that the chiller is refactored and cleaned up someone needs to take a full pass to clean up all return paths to ensure variables are initialized cleared wherever needed this will indeed cause diffs so this needs to be done in a dedicated branch not in conjunction with larger work this can be seen by just looking at the compsetptcontrol idf outputs especially how the condenser flow and delta t do not agree for some timesteps fyi mjwitte mitchute ,0
180467,6650139215.0,IssuesEvent,2017-09-28 15:22:13,huridocs/uwazi,https://api.github.com/repos/huridocs/uwazi,closed,Elastic reindexing has been giving silent errors and there are ghost files ON THE DATABASE!,Priority: Medium Status: Review needed Status: Sprint Type: Bug Type: Question,"For some time, there were errors during document saves, deletes and editions. This happened around January 2017.
There are documents on the Database with files no longer present on the hard drive. Those files are not being correctly indexed, and therefore do not appear in Elastic. But once a correct migration takes out the fullText of those ghost files, they will be indexed and REAPPEAR on the libraries and upload sections, potentially duplicating info.
We need to figure out a way of dealing with this; options include allowing the documents to get indexed and then relying on clients to delete them manually. The other option is to assume that these documents have been lost and delete them without any client interaction.
This needs discussion.",1.0,"Elastic reindexing has been giving silent errors and there are ghost files ON THE DATABASE! - For some time, there were errors during document saves, deletes and editions. This happened around January 2017.
There are documents on the Database with files no longer present on the hard drive. Those files are not being correctly indexed, and therefore do not appear in Elastic. But once a correct migration takes out the fullText of those ghost files, they will be indexed and REAPPEAR on the libraries and upload sections, potentially duplicating info.
We need to figure out a way of dealing with this; options include allowing the documents to get indexed and then relying on clients to delete them manually. The other option is to assume that these documents have been lost and delete them without any client interaction.
This needs discussion.",0,elastic reindexing has been giving silent errors and there are ghost files on the database for some time there were errors during document saves deletes and editions this happened around january there are documents on the database with files no longer present on the hard drive those files are not being correctly indexed and therefore do not appear in elastic but once a correct migration takes out the fulltext of those ghost files they will be indexed and reappear on the libraries and upload sections potentially duplicating info we need to figure out a way of dealing with this options including allowing the documents to get indexed and then rely on clients to delete manually other option is to assume that this documents have been lost and delete them without any client interaction this needs discussion ,0
287,6021623418.0,IssuesEvent,2017-06-07 19:07:02,LeastAuthority/leastauthority.com,https://api.github.com/repos/LeastAuthority/leastauthority.com,closed,"SSEC2 clocks can drift by more than 15 minutes, causing errors from S3",blocks-customers bug monitoring reliability,"The storage server 107.20.93.110 was failing due to errors like this from S3 (edited for formatting):
```
File ""/usr/local/lib/python2.7/dist-packages/txAWS-0.2.1.post4-py2.7.egg/txaws/client/base.py"", line 46, in error_wrapper
raise fallback_error
allmydata.storage.backends.s3.s3_common.TahoeS3Error: ('403', '403 Forbidden', '
RequestTimeTooSkewedThe difference between the request time and the current time is too large.900000395773C55D004CDD4rNtXTWMBRWOWuxqhpFqMbKe8bTgM4OR0P7Ku/5Med0GoRG0dTJiwIr3G7vJ47TvTue, 04 Mar 2014 19:47:51 GMT2014-03-04T20:03:05Z')...
```
I logged into the server and verified that its clock was just over 15 minutes slow -- longer than the 900 seconds permitted time skew for S3 requests.
Rebooting the server (sudo reboot now) caused its clock to be resynced.
",True,"SSEC2 clocks can drift by more than 15 minutes, causing errors from S3 - The storage server 107.20.93.110 was failing due to errors like this from S3 (edited for formatting):
```
File ""/usr/local/lib/python2.7/dist-packages/txAWS-0.2.1.post4-py2.7.egg/txaws/client/base.py"", line 46, in error_wrapper
raise fallback_error
allmydata.storage.backends.s3.s3_common.TahoeS3Error: ('403', '403 Forbidden', '
RequestTimeTooSkewedThe difference between the request time and the current time is too large.900000395773C55D004CDD4rNtXTWMBRWOWuxqhpFqMbKe8bTgM4OR0P7Ku/5Med0GoRG0dTJiwIr3G7vJ47TvTue, 04 Mar 2014 19:47:51 GMT2014-03-04T20:03:05Z')...
```
I logged into the server and verified that its clock was just over 15 minutes slow -- longer than the 900 seconds permitted time skew for S3 requests.
Rebooting the server (sudo reboot now) caused its clock to be resynced.
",1, clocks can drift by more than minutes causing errors from the storage server was failing due to errors like this from edited for formatting file usr local lib dist packages txaws egg txaws client base py line in error wrapper raise fallback error allmydata storage backends common forbidden requesttimetooskewed the difference between the request time and the current time is too large tue mar gmt i logged into the server and verified that its clock was just over minutes slow longer than the seconds permitted time skew for requests rebooting the server sudo reboot now caused its clock to be resynced ,1
1583,17268762466.0,IssuesEvent,2021-07-22 16:49:56,pulumi/pulumi,https://api.github.com/repos/pulumi/pulumi,closed,[sdk/dotnet] Concurrency issue in WhileRunningAsync HandleCompletion,impact/reliability kind/bug language/dotnet resolution/fixed size/M,"We received the following error in one of our [test runs](https://github.com/pulumi/pulumi-azure-nextgen-provider/pull/522/checks?check_run_id=1921033867#step:19:183) which means our task awaiting logic isn't entirely thread-safe. It doesn't happen often but that's how concurrency issues manifest... We should fix this.
```
Diagnostics:
pulumi:pulumi:Stack (cs-simple-p-it-fv-az59-79-cs-simple-c27e31cf):
error: Running program '/tmp/p-it-fv-az59-79-cs-simple-c27e31cf-036146210/bin/Debug/netcoreapp3.1/cs-simple.dll' failed with an unhandled exception:
System.Collections.Generic.KeyNotFoundException: The given key 'System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1+AsyncStateMachineBox`1[System.Threading.Tasks.VoidTaskResult,Pulumi.Deployment+d__74]' was not present in the dictionary.
at System.Collections.Generic.Dictionary`2.get_Item(TKey key)
at Pulumi.Deployment.Runner.<>c__DisplayClass9_0.<g__HandleCompletion|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at Pulumi.Deployment.Runner.WhileRunningAsync()
```",True,"[sdk/dotnet] Concurrency issue in WhileRunningAsync HandleCompletion - We received the following error in one of our [test runs](https://github.com/pulumi/pulumi-azure-nextgen-provider/pull/522/checks?check_run_id=1921033867#step:19:183) which means our task awaiting logic isn't entirely thread-safe. It doesn't happen often but that's how concurrency issues manifest... We should fix this.
```
Diagnostics:
pulumi:pulumi:Stack (cs-simple-p-it-fv-az59-79-cs-simple-c27e31cf):
error: Running program '/tmp/p-it-fv-az59-79-cs-simple-c27e31cf-036146210/bin/Debug/netcoreapp3.1/cs-simple.dll' failed with an unhandled exception:
System.Collections.Generic.KeyNotFoundException: The given key 'System.Runtime.CompilerServices.AsyncTaskMethodBuilder`1+AsyncStateMachineBox`1[System.Threading.Tasks.VoidTaskResult,Pulumi.Deployment+d__74]' was not present in the dictionary.
at System.Collections.Generic.Dictionary`2.get_Item(TKey key)
at Pulumi.Deployment.Runner.<>c__DisplayClass9_0.<g__HandleCompletion|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at Pulumi.Deployment.Runner.WhileRunningAsync()
```",1, concurrency issue in whilerunningasync handlecompletion we received the following error in one of our which means our task awaiting logic isn t entirely thread safe it doesn t happen often but that s how concurrency issues manifest we should fix this diagnostics pulumi pulumi stack cs simple p it fv cs simple error running program tmp p it fv cs simple bin debug cs simple dll failed with an unhandled exception system collections generic keynotfoundexception the given key system runtime compilerservices asynctaskmethodbuilder asyncstatemachinebox was not present in the dictionary at system collections generic dictionary get item tkey key at pulumi deployment runner c g handlecompletion d movenext end of stack trace from previous location where exception was thrown at pulumi deployment runner whilerunningasync ,1
934,11716122674.0,IssuesEvent,2020-03-09 15:10:29,sohaibaslam/learning_site,https://api.github.com/repos/sohaibaslam/learning_site,opened,"Broken Crawlers 09, Mar 2020",crawler broken/unreliable,"1. **24sevres eu(100%)/fr(100%)/uk(100%)/us(100%)**
1. **abcmart kr(100%)**
1. **abercrombie cn(100%)/hk(100%)/jp(100%)**
1. **adidas pl(100%)**
1. **alcott eu(100%)**
1. **americaneagle ca(100%)**
1. **ami cn(100%)/dk(100%)/jp(100%)/kr(100%)/mx(100%)/uk(100%)/us(100%)**
1. **antonioli at(100%)/hk(100%)/pt(100%)**
1. **asos ae(100%)/au(100%)/ch(100%)/cn(100%)/hk(100%)/id(100%)/my(100%)/nl(100%)/ph(100%)/pl(100%)/ru(100%)/sa(100%)/sg(100%)/th(100%)/us(100%)/vn(100%)**
1. **babyshop ae(100%)/sa(100%)**
1. **bananarepublic ca(100%)**
1. **benetton at(46%)/be(49%)/bg(45%)/ch(47%)/de(49%)/dk(41%)/ee(42%)/es(48%)/fi(47%)/fr(47%)/gr(47%)/hr(47%)/ie(48%)/it(40%)/lv(100%)/nl(45%)/pt(47%)/se(47%)/si(49%)/sk(47%)/uk(48%)**
1. **bijoubrigitte de(100%)/nl(100%)**
1. **bistmart ir(100%)**
1. **boconcept at(100%)/de(100%)**
1. **borbonese eu(100%)/it(100%)/uk(100%)**
1. **buckle us(100%)**
1. **carpisa hr(100%)**
1. **carters us(100%)**
1. **charmingcharlie us(100%)**
1. **clarks eu(100%)**
1. **coach ca(100%)/uk(100%)/us(100%)**
1. **columbiasportswear at(100%)/nl(33%)**
1. **conforama fr(100%)**
1. **converse au(100%)/es(100%)/kr(100%)/nl(77%)**
1. **cos at(100%)/hu(56%)**
1. **creationl de(100%)**
1. **dfs uk(100%)**
1. **dickssportinggoods us(100%)**
1. **ernstings de(100%)**
1. **falabella cl(100%)/co(100%)**
1. **fanatics us(100%)**
1. **fendi cn(100%)**
1. **footaction us(100%)**
1. **footlocker be(100%)/de(100%)/dk(100%)/es(100%)/fr(100%)/it(100%)/lu(100%)/nl(100%)/no(100%)/se(100%)/uk(100%)**
1. **frescobolcarioca eu(100%)/uk(100%)**
1. **gap ca(100%)/cl(33%)**
1. **harrods (100%)**
1. **heine at(100%)**
1. **hermes ca(100%)/de(50%)/es(50%)/fr(67%)/uk(34%)/us(93%)**
1. **hm ae(100%)/dk(35%)/eu(100%)/fi(42%)/kw(100%)/no(40%)/pl(100%)/sa(100%)/se(41%)**
1. **hollister cn(100%)/hk(100%)/jp(100%)/tw(100%)**
1. **hunter (100%)**
1. **ikea au(100%)/pt(100%)**
1. **intersport fr(100%)**
1. **intimissimi cn(100%)/hk(100%)/jp(100%)**
1. **jackwills (100%)**
1. **jeffreycampbell us(100%)**
1. **klingel de(100%)**
1. **lacoste cn(100%)/mx(100%)/us(32%)**
1. **levi my(100%)**
1. **lifestylestores in(100%)**
1. **limango de(100%)**
1. **made ch(100%)/de(100%)/es(100%)/nl(100%)/uk(100%)**
1. **massimodutti ad(46%)/al(45%)/am(47%)/az(47%)/ba(45%)/bh(100%)/by(45%)/co(46%)/cr(44%)/cy(47%)/do(41%)/ec(45%)/eg(100%)/ge(39%)/gt(46%)/hk(42%)/hn(46%)/id(43%)/il(44%)/in(43%)/kz(45%)/mc(54%)/mk(45%)/mo(41%)/my(48%)/pa(44%)/ph(44%)/rs(45%)/sg(41%)/th(47%)/tn(47%)/tw(44%)/ua(47%)/vn(47%)**
1. **maxfashion ae(100%)**
1. **michaelkors ca(100%)/us(30%)**
1. **moosejaw us(100%)**
1. **mothercare sa(100%)**
1. **mq se(100%)**
1. **mrprice uk(100%)**
1. **muji de(100%)/fr(100%)/uk(34%)**
1. **oldnavy ca(100%)**
1. **oshkosh us(100%)**
1. **oysho id(100%)**
1. **parfois ad(100%)/al(100%)/am(100%)/ao(100%)/at(100%)/ba(100%)/be(100%)/bg(100%)/bh(100%)/br(100%)/by(100%)/ch(100%)/co(100%)/cz(100%)/de(100%)/dk(100%)/do(100%)/ee(100%)/eg(100%)/es(100%)/fi(100%)/fr(100%)/ge(100%)/gr(100%)/gt(100%)/hr(100%)/hu(100%)/ie(100%)/ir(100%)/it(100%)/jo(100%)/kw(100%)/lb(100%)/lt(100%)/lu(100%)/lv(100%)/ly(100%)/ma(100%)/mc(100%)/mk(100%)/mt(100%)/mx(100%)/mz(100%)/nl(100%)/om(100%)/pa(100%)/pe(100%)/ph(100%)/pl(100%)/pt(100%)/qa(100%)/ro(100%)/rs(100%)/sa(100%)/se(100%)/si(100%)/sk(100%)/tn(100%)/uk(100%)/us(100%)/ve(100%)/ye(100%)**
1. **patagonia ca(100%)**
1. **popup br(100%)**
1. **prettysecrets in(100%)**
1. **pullandbear kr(100%)/ph(100%)**
1. **rakuten fr(100%)/us(100%)**
1. **ralphlauren de(100%)**
1. **reebok at(46%)/be(41%)/ch(100%)/es(77%)/fr(64%)/ie(43%)/it(57%)/nl(47%)/no(86%)/se(88%)/sk(100%)/uk(100%)/us(71%)**
1. **runnerspoint de(100%)**
1. **runwaysale za(100%)**
1. **sainsburys uk(100%)**
1. **saksfifthavenue mo(100%)**
1. **sandroatjd cn(100%)**
1. **selfridges de(100%)/es(100%)**
1. **shoedazzle us(100%)**
1. **simons ca(100%)**
1. **snipes de(100%)**
1. **solebox de(100%)/uk(100%)**
1. **speedo us(100%)**
1. **splashfashions ae(100%)/bh(100%)/sa(100%)**
1. **stories at(44%)/be(43%)/de(43%)/dk(44%)/es(45%)/fi(45%)/fr(45%)/ie(45%)/it(45%)/nl(43%)/pl(45%)/se(45%)/uk(100%)/us(39%)**
1. **stradivarius ph(100%)**
1. **stylebop (100%)/au(100%)/ca(100%)/cn(100%)/de(100%)/es(100%)/fr(100%)/hk(100%)/jp(100%)/kr(100%)/mo(100%)/sg(100%)/us(100%)**
1. **superbalist za(100%)**
1. **thenorthface us(100%)**
1. **thread uk(100%)/us(100%)**
1. **tods cn(100%)/gr(100%)/pt(100%)**
1. **tommybahama bh(100%)/de(83%)/ph(100%)/uk(34%)/za(100%)**
1. **tommyhilfiger jp(100%)**
1. **topbrands ru(100%)**
1. **trendygolf uk(100%)**
1. **undefeated us(100%)**
1. **underarmour ca(100%)**
1. **watchshop eu(100%)/pl(100%)/ru(100%)/se(100%)**
1. **wayfair ca(100%)/de(100%)/uk(100%)**
1. **weekday eu(100%)**
1. **wenz de(100%)**
1. **westwingnow ch(100%)**
1. **womanwithin us(100%)**
1. **zalandolounge de(100%)**
1. **zalora id(100%)/ph(88%)/tw(86%)**
1. **zara pe(100%)/ph(100%)/uy(100%)**
1. **zilingo my(100%)**
1. vip cn(96%)
1. hibbett us(95%)
1. gosport fr(94%)
1. industrie uk(94%)
1. nike hk(92%)/kr(80%)
1. nastygal (91%)
1. filippak at(83%)/eu(36%)/nl(88%)/no(41%)/se(88%)/us(42%)
1. melijoe be(43%)/cn(67%)/kr(73%)/uk(88%)
1. defacto tr(84%)
1. lee pl(84%)
1. burberry ae(70%)/at(66%)/bg(76%)/ch(73%)/cz(74%)/es(75%)/hk(80%)/hu(68%)/ie(76%)/it(76%)/jp(66%)/my(76%)/pl(79%)/pt(73%)/ru(82%)/se(70%)/tw(78%)
1. vansjd cn(74%)
1. shoeshowmega us(68%)
1. koton tr(65%)
1. maxmara it(65%)/kr(49%)
1. nikeatjd cn(63%)
1. zalando dk(63%)
1. sfera es(56%)
1. zivame in(56%)
1. timberland my(53%)
1. timberlandtrans sg(53%)
1. venteprivee es(50%)/fr(49%)/it(52%)
1. misssixty cn(50%)
1. leroymerlin fr(48%)
1. mangoattmall cn(48%)
1. liujo es(34%)/it(47%)
1. tchibo de(46%)
1. brunellocucinelli cn(45%)
1. interightatjd cn(45%)
1. theoryattmall cn(45%)
1. rinascimento fr(43%)
1. sandroattmall cn(43%)
1. strellson at(39%)/be(38%)/ch(42%)/de(41%)/fr(36%)/nl(41%)
1. uniqlo us(41%)
1. anayi jp(40%)
1. marcopolo ch(32%)/es(39%)/ie(38%)/lt(38%)/se(38%)/uk(39%)
1. oodji ru(39%)
1. shein au(36%)/nz(39%)
1. boardiesapparel au(38%)
1. onitsukatigerjd cn(38%)
1. hugoboss cn(37%)
1. bash de(36%)/fr(36%)/hk(36%)/uk(30%)
1. marinarinaldi at(31%)/be(30%)/cz(34%)/de(34%)/dk(30%)/es(32%)/fr(32%)/hu(32%)/ie(32%)/it(36%)/nl(34%)/pl(32%)/pt(30%)/ro(34%)/se(32%)/uk(32%)
1. gstar at(31%)/au(31%)/bg(30%)/ch(32%)/cz(32%)/de(35%)/ee(33%)/hr(33%)/lt(30%)/lv(30%)/pl(31%)/ru(32%)/si(33%)/sk(33%)
1. petitbateau uk(35%)
1. aboutyou cz(31%)/hu(33%)/pl(33%)/ro(34%)/sk(33%)
1. terranovastyle de(30%)/es(32%)/fr(32%)/it(30%)/nl(34%)/uk(32%)
1. lululemon cn(33%)
1. mango kr(33%)
1. tigerofsweden at(30%)/ie(30%)/nl(32%)/no(33%)
1. superdry th(32%)
1. vionicshoes uk(32%)
1. only us(31%)
1. replayjeans au(30%)/be(30%)/ch(30%)/de(30%)/eu(30%)/fr(30%)/no(30%)/uk(31%)
1. darjeeling fr(30%)
1. dsw us(30%)
",True,"Broken Crawlers 09, Mar 2020 - 1. **24sevres eu(100%)/fr(100%)/uk(100%)/us(100%)**
1. **abcmart kr(100%)**
1. **abercrombie cn(100%)/hk(100%)/jp(100%)**
1. **adidas pl(100%)**
1. **alcott eu(100%)**
1. **americaneagle ca(100%)**
1. **ami cn(100%)/dk(100%)/jp(100%)/kr(100%)/mx(100%)/uk(100%)/us(100%)**
1. **antonioli at(100%)/hk(100%)/pt(100%)**
1. **asos ae(100%)/au(100%)/ch(100%)/cn(100%)/hk(100%)/id(100%)/my(100%)/nl(100%)/ph(100%)/pl(100%)/ru(100%)/sa(100%)/sg(100%)/th(100%)/us(100%)/vn(100%)**
1. **babyshop ae(100%)/sa(100%)**
1. **bananarepublic ca(100%)**
1. **benetton at(46%)/be(49%)/bg(45%)/ch(47%)/de(49%)/dk(41%)/ee(42%)/es(48%)/fi(47%)/fr(47%)/gr(47%)/hr(47%)/ie(48%)/it(40%)/lv(100%)/nl(45%)/pt(47%)/se(47%)/si(49%)/sk(47%)/uk(48%)**
1. **bijoubrigitte de(100%)/nl(100%)**
1. **bistmart ir(100%)**
1. **boconcept at(100%)/de(100%)**
1. **borbonese eu(100%)/it(100%)/uk(100%)**
1. **buckle us(100%)**
1. **carpisa hr(100%)**
1. **carters us(100%)**
1. **charmingcharlie us(100%)**
1. **clarks eu(100%)**
1. **coach ca(100%)/uk(100%)/us(100%)**
1. **columbiasportswear at(100%)/nl(33%)**
1. **conforama fr(100%)**
1. **converse au(100%)/es(100%)/kr(100%)/nl(77%)**
1. **cos at(100%)/hu(56%)**
1. **creationl de(100%)**
1. **dfs uk(100%)**
1. **dickssportinggoods us(100%)**
1. **ernstings de(100%)**
1. **falabella cl(100%)/co(100%)**
1. **fanatics us(100%)**
1. **fendi cn(100%)**
1. **footaction us(100%)**
1. **footlocker be(100%)/de(100%)/dk(100%)/es(100%)/fr(100%)/it(100%)/lu(100%)/nl(100%)/no(100%)/se(100%)/uk(100%)**
1. **frescobolcarioca eu(100%)/uk(100%)**
1. **gap ca(100%)/cl(33%)**
1. **harrods (100%)**
1. **heine at(100%)**
1. **hermes ca(100%)/de(50%)/es(50%)/fr(67%)/uk(34%)/us(93%)**
1. **hm ae(100%)/dk(35%)/eu(100%)/fi(42%)/kw(100%)/no(40%)/pl(100%)/sa(100%)/se(41%)**
1. **hollister cn(100%)/hk(100%)/jp(100%)/tw(100%)**
1. **hunter (100%)**
1. **ikea au(100%)/pt(100%)**
1. **intersport fr(100%)**
1. **intimissimi cn(100%)/hk(100%)/jp(100%)**
1. **jackwills (100%)**
1. **jeffreycampbell us(100%)**
1. **klingel de(100%)**
1. **lacoste cn(100%)/mx(100%)/us(32%)**
1. **levi my(100%)**
1. **lifestylestores in(100%)**
1. **limango de(100%)**
1. **made ch(100%)/de(100%)/es(100%)/nl(100%)/uk(100%)**
1. **massimodutti ad(46%)/al(45%)/am(47%)/az(47%)/ba(45%)/bh(100%)/by(45%)/co(46%)/cr(44%)/cy(47%)/do(41%)/ec(45%)/eg(100%)/ge(39%)/gt(46%)/hk(42%)/hn(46%)/id(43%)/il(44%)/in(43%)/kz(45%)/mc(54%)/mk(45%)/mo(41%)/my(48%)/pa(44%)/ph(44%)/rs(45%)/sg(41%)/th(47%)/tn(47%)/tw(44%)/ua(47%)/vn(47%)**
1. **maxfashion ae(100%)**
1. **michaelkors ca(100%)/us(30%)**
1. **moosejaw us(100%)**
1. **mothercare sa(100%)**
1. **mq se(100%)**
1. **mrprice uk(100%)**
1. **muji de(100%)/fr(100%)/uk(34%)**
1. **oldnavy ca(100%)**
1. **oshkosh us(100%)**
1. **oysho id(100%)**
1. **parfois ad(100%)/al(100%)/am(100%)/ao(100%)/at(100%)/ba(100%)/be(100%)/bg(100%)/bh(100%)/br(100%)/by(100%)/ch(100%)/co(100%)/cz(100%)/de(100%)/dk(100%)/do(100%)/ee(100%)/eg(100%)/es(100%)/fi(100%)/fr(100%)/ge(100%)/gr(100%)/gt(100%)/hr(100%)/hu(100%)/ie(100%)/ir(100%)/it(100%)/jo(100%)/kw(100%)/lb(100%)/lt(100%)/lu(100%)/lv(100%)/ly(100%)/ma(100%)/mc(100%)/mk(100%)/mt(100%)/mx(100%)/mz(100%)/nl(100%)/om(100%)/pa(100%)/pe(100%)/ph(100%)/pl(100%)/pt(100%)/qa(100%)/ro(100%)/rs(100%)/sa(100%)/se(100%)/si(100%)/sk(100%)/tn(100%)/uk(100%)/us(100%)/ve(100%)/ye(100%)**
1. **patagonia ca(100%)**
1. **popup br(100%)**
1. **prettysecrets in(100%)**
1. **pullandbear kr(100%)/ph(100%)**
1. **rakuten fr(100%)/us(100%)**
1. **ralphlauren de(100%)**
1. **reebok at(46%)/be(41%)/ch(100%)/es(77%)/fr(64%)/ie(43%)/it(57%)/nl(47%)/no(86%)/se(88%)/sk(100%)/uk(100%)/us(71%)**
1. **runnerspoint de(100%)**
1. **runwaysale za(100%)**
1. **sainsburys uk(100%)**
1. **saksfifthavenue mo(100%)**
1. **sandroatjd cn(100%)**
1. **selfridges de(100%)/es(100%)**
1. **shoedazzle us(100%)**
1. **simons ca(100%)**
1. **snipes de(100%)**
1. **solebox de(100%)/uk(100%)**
1. **speedo us(100%)**
1. **splashfashions ae(100%)/bh(100%)/sa(100%)**
1. **stories at(44%)/be(43%)/de(43%)/dk(44%)/es(45%)/fi(45%)/fr(45%)/ie(45%)/it(45%)/nl(43%)/pl(45%)/se(45%)/uk(100%)/us(39%)**
1. **stradivarius ph(100%)**
1. **stylebop (100%)/au(100%)/ca(100%)/cn(100%)/de(100%)/es(100%)/fr(100%)/hk(100%)/jp(100%)/kr(100%)/mo(100%)/sg(100%)/us(100%)**
1. **superbalist za(100%)**
1. **thenorthface us(100%)**
1. **thread uk(100%)/us(100%)**
1. **tods cn(100%)/gr(100%)/pt(100%)**
1. **tommybahama bh(100%)/de(83%)/ph(100%)/uk(34%)/za(100%)**
1. **tommyhilfiger jp(100%)**
1. **topbrands ru(100%)**
1. **trendygolf uk(100%)**
1. **undefeated us(100%)**
1. **underarmour ca(100%)**
1. **watchshop eu(100%)/pl(100%)/ru(100%)/se(100%)**
1. **wayfair ca(100%)/de(100%)/uk(100%)**
1. **weekday eu(100%)**
1. **wenz de(100%)**
1. **westwingnow ch(100%)**
1. **womanwithin us(100%)**
1. **zalandolounge de(100%)**
1. **zalora id(100%)/ph(88%)/tw(86%)**
1. **zara pe(100%)/ph(100%)/uy(100%)**
1. **zilingo my(100%)**
1. vip cn(96%)
1. hibbett us(95%)
1. gosport fr(94%)
1. industrie uk(94%)
1. nike hk(92%)/kr(80%)
1. nastygal (91%)
1. filippak at(83%)/eu(36%)/nl(88%)/no(41%)/se(88%)/us(42%)
1. melijoe be(43%)/cn(67%)/kr(73%)/uk(88%)
1. defacto tr(84%)
1. lee pl(84%)
1. burberry ae(70%)/at(66%)/bg(76%)/ch(73%)/cz(74%)/es(75%)/hk(80%)/hu(68%)/ie(76%)/it(76%)/jp(66%)/my(76%)/pl(79%)/pt(73%)/ru(82%)/se(70%)/tw(78%)
1. vansjd cn(74%)
1. shoeshowmega us(68%)
1. koton tr(65%)
1. maxmara it(65%)/kr(49%)
1. nikeatjd cn(63%)
1. zalando dk(63%)
1. sfera es(56%)
1. zivame in(56%)
1. timberland my(53%)
1. timberlandtrans sg(53%)
1. venteprivee es(50%)/fr(49%)/it(52%)
1. misssixty cn(50%)
1. leroymerlin fr(48%)
1. mangoattmall cn(48%)
1. liujo es(34%)/it(47%)
1. tchibo de(46%)
1. brunellocucinelli cn(45%)
1. interightatjd cn(45%)
1. theoryattmall cn(45%)
1. rinascimento fr(43%)
1. sandroattmall cn(43%)
1. strellson at(39%)/be(38%)/ch(42%)/de(41%)/fr(36%)/nl(41%)
1. uniqlo us(41%)
1. anayi jp(40%)
1. marcopolo ch(32%)/es(39%)/ie(38%)/lt(38%)/se(38%)/uk(39%)
1. oodji ru(39%)
1. shein au(36%)/nz(39%)
1. boardiesapparel au(38%)
1. onitsukatigerjd cn(38%)
1. hugoboss cn(37%)
1. bash de(36%)/fr(36%)/hk(36%)/uk(30%)
1. marinarinaldi at(31%)/be(30%)/cz(34%)/de(34%)/dk(30%)/es(32%)/fr(32%)/hu(32%)/ie(32%)/it(36%)/nl(34%)/pl(32%)/pt(30%)/ro(34%)/se(32%)/uk(32%)
1. gstar at(31%)/au(31%)/bg(30%)/ch(32%)/cz(32%)/de(35%)/ee(33%)/hr(33%)/lt(30%)/lv(30%)/pl(31%)/ru(32%)/si(33%)/sk(33%)
1. petitbateau uk(35%)
1. aboutyou cz(31%)/hu(33%)/pl(33%)/ro(34%)/sk(33%)
1. terranovastyle de(30%)/es(32%)/fr(32%)/it(30%)/nl(34%)/uk(32%)
1. lululemon cn(33%)
1. mango kr(33%)
1. tigerofsweden at(30%)/ie(30%)/nl(32%)/no(33%)
1. superdry th(32%)
1. vionicshoes uk(32%)
1. only us(31%)
1. replayjeans au(30%)/be(30%)/ch(30%)/de(30%)/eu(30%)/fr(30%)/no(30%)/uk(31%)
1. darjeeling fr(30%)
1. dsw us(30%)
",1,broken crawlers mar eu fr uk us abcmart kr abercrombie cn hk jp adidas pl alcott eu americaneagle ca ami cn dk jp kr mx uk us antonioli at hk pt asos ae au ch cn hk id my nl ph pl ru sa sg th us vn babyshop ae sa bananarepublic ca benetton at be bg ch de dk ee es fi fr gr hr ie it lv nl pt se si sk uk bijoubrigitte de nl bistmart ir boconcept at de borbonese eu it uk buckle us carpisa hr carters us charmingcharlie us clarks eu coach ca uk us columbiasportswear at nl conforama fr converse au es kr nl cos at hu creationl de dfs uk dickssportinggoods us ernstings de falabella cl co fanatics us fendi cn footaction us footlocker be de dk es fr it lu nl no se uk frescobolcarioca eu uk gap ca cl harrods heine at hermes ca de es fr uk us hm ae dk eu fi kw no pl sa se hollister cn hk jp tw hunter ikea au pt intersport fr intimissimi cn hk jp jackwills jeffreycampbell us klingel de lacoste cn mx us levi my lifestylestores in limango de made ch de es nl uk massimodutti ad al am az ba bh by co cr cy do ec eg ge gt hk hn id il in kz mc mk mo my pa ph rs sg th tn tw ua vn maxfashion ae michaelkors ca us moosejaw us mothercare sa mq se mrprice uk muji de fr uk oldnavy ca oshkosh us oysho id parfois ad al am ao at ba be bg bh br by ch co cz de dk do ee eg es fi fr ge gr gt hr hu ie ir it jo kw lb lt lu lv ly ma mc mk mt mx mz nl om pa pe ph pl pt qa ro rs sa se si sk tn uk us ve ye patagonia ca popup br prettysecrets in pullandbear kr ph rakuten fr us ralphlauren de reebok at be ch es fr ie it nl no se sk uk us runnerspoint de runwaysale za sainsburys uk saksfifthavenue mo sandroatjd cn selfridges de es shoedazzle us simons ca snipes de solebox de uk speedo us splashfashions ae bh sa stories at be de dk es fi fr ie it nl pl se uk us stradivarius ph stylebop au ca cn de es fr hk jp kr mo sg us superbalist za thenorthface us thread uk us tods cn gr pt tommybahama bh de ph uk za tommyhilfiger jp topbrands ru trendygolf uk undefeated us underarmour ca watchshop eu pl ru se wayfair 
ca de uk weekday eu wenz de westwingnow ch womanwithin us zalandolounge de zalora id ph tw zara pe ph uy zilingo my vip cn hibbett us gosport fr industrie uk nike hk kr nastygal filippak at eu nl no se us melijoe be cn kr uk defacto tr lee pl burberry ae at bg ch cz es hk hu ie it jp my pl pt ru se tw vansjd cn shoeshowmega us koton tr maxmara it kr nikeatjd cn zalando dk sfera es zivame in timberland my timberlandtrans sg venteprivee es fr it misssixty cn leroymerlin fr mangoattmall cn liujo es it tchibo de brunellocucinelli cn interightatjd cn theoryattmall cn rinascimento fr sandroattmall cn strellson at be ch de fr nl uniqlo us anayi jp marcopolo ch es ie lt se uk oodji ru shein au nz boardiesapparel au onitsukatigerjd cn hugoboss cn bash de fr hk uk marinarinaldi at be cz de dk es fr hu ie it nl pl pt ro se uk gstar at au bg ch cz de ee hr lt lv pl ru si sk petitbateau uk aboutyou cz hu pl ro sk terranovastyle de es fr it nl uk lululemon cn mango kr tigerofsweden at ie nl no superdry th vionicshoes uk only us replayjeans au be ch de eu fr no uk darjeeling fr dsw us ,1
636331,20597584276.0,IssuesEvent,2022-03-05 19:00:55,grage03/prello,https://api.github.com/repos/grage03/prello,closed,Router,frontend low priority,"It is necessary to add or correct the following points:
- [x] Going to another page should use the name, not the address
- [x] UILink",1.0,"Router - It is necessary to add or correct the following points:
- [x] Going to another page should use the name, not the address
- [x] UILink",0,router it is necessary to add or correct the following points going to another page should use the name not the address uilink,0
1813,20117186666.0,IssuesEvent,2022-02-07 20:52:30,Azure/azure-sdk-tools,https://api.github.com/repos/Azure/azure-sdk-tools,closed,[stress] Single dashboard with a breakdown of the test job status,Central-EngSys pillar-reliability Stress,"As part of finalizing the first phase we want to have a simple dashboard that leads can look at to see all of the active tests in the cluster. Right now the tests are run at least once a week, which means we can just scrape the cluster stats (maybe for a couple of weeks) and display:
| Test name | Started On | Time running | LastStatus |
|------------|------------|------------|------------|
| go-sb-infinitesend | 2021-12-01 | 1d | Active |
| go-sb-infinitesend | 2021-11-28 | 3d | Failed |
This should give enough starting information for leads to get an idea of how far along things are, which tests are running, etc...",True,"[stress] Single dashboard with a breakdown of the test job status - As part of finalizing the first phase we want to have a simple dashboard that leads can look at to see all of the active tests in the cluster. Right now the tests are run at least once a week, which means we can just scrape the cluster stats (maybe for a couple of weeks) and display:
| Test name | Started On | Time running | LastStatus |
|------------|------------|------------|------------|
| go-sb-infinitesend | 2021-12-01 | 1d | Active |
| go-sb-infinitesend | 2021-11-28 | 3d | Failed |
This should give enough starting information for leads to get an idea of how far along things are, which tests are running, etc...",1, single dashboard with a breakdown of the test job status as part of finalizing the first phase we want to have a simple dashboard that leads can look at to see all of the active tests in the cluster right now the tests are run at least once a week which means we can just scrape the cluster stats maybe for a couple of weeks and display test name started on time running laststatus go sb infinitesend active go sb infinitesend failed this should give enough starting information for leads to get an idea of how far along things are which tests are running etc ,1
296466,9116229521.0,IssuesEvent,2019-02-22 08:23:03,webcompat/web-bugs,https://api.github.com/repos/webcompat/web-bugs,closed,support.mozilla.org - see bug description,browser-firefox-mobile priority-important,"
**URL**: https://support.mozilla.org/en-US/kb/tracking-protection-firefox-android
**Browser / Version**: Firefox Mobile 66.0
**Operating System**: Android 8.1.0
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: fake it's a rip off of Google
**Steps to Reproduce**:
Dumb developer made it off of googles home page
Browser Configuration
mixed active content blocked: false
image.mem.shared: true
buildID: 20190218131312
tracking content blocked: false
gfx.webrender.blob-images: true
hasTouchScreen: true
mixed passive content blocked: false
gfx.webrender.enabled: false
gfx.webrender.all: false
channel: beta
Console Messages:
[u'[console.log(JQMIGRATE: Logging is active) https://static-media-prod-cdn.sumo.mozilla.net/static/build/common-min.4b219b53323f.js:6:6003]']
_From [webcompat.com](https://webcompat.com/) with ❤️_",1.0,"support.mozilla.org - see bug description -
**URL**: https://support.mozilla.org/en-US/kb/tracking-protection-firefox-android
**Browser / Version**: Firefox Mobile 66.0
**Operating System**: Android 8.1.0
**Tested Another Browser**: Yes
**Problem type**: Something else
**Description**: fake it's a rip off of Google
**Steps to Reproduce**:
Dumb developer made it off of googles home page
Browser Configuration
mixed active content blocked: false
image.mem.shared: true
buildID: 20190218131312
tracking content blocked: false
gfx.webrender.blob-images: true
hasTouchScreen: true
mixed passive content blocked: false
gfx.webrender.enabled: false
gfx.webrender.all: false
channel: beta
Console Messages:
[u'[console.log(JQMIGRATE: Logging is active) https://static-media-prod-cdn.sumo.mozilla.net/static/build/common-min.4b219b53323f.js:6:6003]']
_From [webcompat.com](https://webcompat.com/) with ❤️_",0,support mozilla org see bug description url browser version firefox mobile operating system android tested another browser yes problem type something else description fake it s a rip off of google steps to reproduce dumb developer made it off of googles home page browser configuration mixed active content blocked false image mem shared true buildid tracking content blocked false gfx webrender blob images true hastouchscreen true mixed passive content blocked false gfx webrender enabled false gfx webrender all false channel beta console messages from with ❤️ ,0
394366,27027698425.0,IssuesEvent,2023-02-11 20:20:42,facebookresearch/detectron2,https://api.github.com/repos/facebookresearch/detectron2,opened,Does T.Resize in custom dataloader also resizes bounding box?,documentation,"## 📚 Documentation Issue
This issue category is for problems about existing documentation, not for asking how-to questions.
In documentation it says ""Using a different “mapper” with build_detection_{train,test}_loader(mapper=) works for most use cases of custom data loading. For example, if you want to resize all images to a fixed size for training"", my question is does T.Resize only resizes train image or it resizes the bounding box along with it? If it does resize bounding box? do we have to code it separately as part of mapper function?
* Provide a link to an existing documentation/comment/tutorial:
https://detectron2.readthedocs.io/en/v0.4.1/tutorials/data_loading.html
* How should the above documentation/comment/tutorial improve:
It resizes bounding box and image together or only the image, not bounding box",1.0,"Does T.Resize in custom dataloader also resizes bounding box? - ## 📚 Documentation Issue
This issue category is for problems about existing documentation, not for asking how-to questions.
In documentation it says ""Using a different “mapper” with build_detection_{train,test}_loader(mapper=) works for most use cases of custom data loading. For example, if you want to resize all images to a fixed size for training"", my question is does T.Resize only resizes train image or it resizes the bounding box along with it? If it does resize bounding box? do we have to code it separately as part of mapper function?
* Provide a link to an existing documentation/comment/tutorial:
https://detectron2.readthedocs.io/en/v0.4.1/tutorials/data_loading.html
* How should the above documentation/comment/tutorial improve:
It resizes bounding box and image together or only the image, not bounding box",0,does t resize in custom dataloader also resizes bounding box 📚 documentation issue this issue category is for problems about existing documentation not for asking how to questions in documentation it says using a different “mapper” with build detection train test loader mapper works for most use cases of custom data loading for example if you want to resize all images to a fixed size for training my question is does t resize only resizes train image or it resizes the bounding box along with it if it does resize bounding box do we have to code it separately as part of mapper function provide a link to an existing documentation comment tutorial how should the above documentation comment tutorial improve it resizes bounding box and image together or only the image not bounding box,0
567693,16889879462.0,IssuesEvent,2021-06-23 07:57:58,jina-ai/jina-hub,https://api.github.com/repos/jina-ai/jina-hub,opened,Showcase sparse,area/hub priority/important-longterm,"Follow up for #[2560](https://github.com/jina-ai/jina/issues/2560). When the indexers are ready, let's add something to showcase `sparse`",1.0,"Showcase sparse - Follow up for #[2560](https://github.com/jina-ai/jina/issues/2560). When the indexers are ready, let's add something to showcase `sparse`",0,showcase sparse follow up for when the indexers are ready let s add something to showcase sparse ,0
54850,23344505055.0,IssuesEvent,2022-08-09 16:38:02,MicrosoftDocs/azure-dev-docs,https://api.github.com/repos/MicrosoftDocs/azure-dev-docs,closed,scope doesnt exist for resource '00000003-0000-0000-c000-000000000000',mobile-apps doc-bug mobile-services/svc Pri1,"
Followed steps to add Authentication. Two issues (may be related or not)
- the authentication form is opening multiple times (looks like recursion issue)
- getting error message that Application X asked for scope 'access_as_user' that doesnt exist on resource '00000003-0000-0000-c000-000000000000'
Raises two questions about code:
1 - how to test / debug authentication errors
2 - is there a recursion issue when retrieving data
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 0df49cfd-7591-447c-8ec6-c97285ba93e8
* Version Independent ID: 0df49cfd-7591-447c-8ec6-c97285ba93e8
* Content: [Add authentication to your .NET MAUI app](https://docs.microsoft.com/en-us/azure/developer/mobile-apps/azure-mobile-apps/quickstarts/maui/authentication)
* Content Source: [articles/mobile-apps/azure-mobile-apps/quickstarts/maui/authentication.md](https://github.com/MicrosoftDocs/azure-dev-docs/blob/main/articles/mobile-apps/azure-mobile-apps/quickstarts/maui/authentication.md)
* Service: **mobile-services**
* GitHub Login: @adrianhall
* Microsoft Alias: **adhal**",1.0,"scope doesnt exist for resource '00000003-0000-0000-c000-000000000000' -
Followed steps to add Authentication. Two issues (may be related or not)
- the authentication form is opening multiple times (looks like recursion issue)
- getting error message that Application X asked for scope 'access_as_user' that doesnt exist on resource '00000003-0000-0000-c000-000000000000'
Raises two questions about code:
1 - how to test / debug authentication errors
2 - is there a recursion issue when retrieving data
---
#### Document Details
⚠ *Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.*
* ID: 0df49cfd-7591-447c-8ec6-c97285ba93e8
* Version Independent ID: 0df49cfd-7591-447c-8ec6-c97285ba93e8
* Content: [Add authentication to your .NET MAUI app](https://docs.microsoft.com/en-us/azure/developer/mobile-apps/azure-mobile-apps/quickstarts/maui/authentication)
* Content Source: [articles/mobile-apps/azure-mobile-apps/quickstarts/maui/authentication.md](https://github.com/MicrosoftDocs/azure-dev-docs/blob/main/articles/mobile-apps/azure-mobile-apps/quickstarts/maui/authentication.md)
* Service: **mobile-services**
* GitHub Login: @adrianhall
* Microsoft Alias: **adhal**",0,scope doesnt exist for resource followed steps to add authentication two issues may be related or not the authentication form is opening multiple times looks like recursion issue getting error message that application x asked for scope access as user that doesnt exist on resource raises two questions about code how to test debug authentication errors is there a recursion issue when retrieving data document details ⚠ do not edit this section it is required for docs microsoft com ➟ github issue linking id version independent id content content source service mobile services github login adrianhall microsoft alias adhal ,0
42087,5425170405.0,IssuesEvent,2017-03-03 04:39:57,Princeton-CDH/winthrop-django,https://api.github.com/repos/Princeton-CDH/winthrop-django,closed,edit person - residences,awaiting testing,"As a data editor, when I’m editing a person I want to be able to document known residences and dates on the same page so that I don’t have to edit person information in multiple places.",1.0,"edit person - residences - As a data editor, when I’m editing a person I want to be able to document known residences and dates on the same page so that I don’t have to edit person information in multiple places.",0,edit person residences as a data editor when i’m editing a person i want to be able to document known residences and dates on the same page so that i don’t have to edit person information in multiple places ,0
59816,14476398430.0,IssuesEvent,2020-12-10 04:01:35,imatlin/Sonar-Plugin,https://api.github.com/repos/imatlin/Sonar-Plugin,opened,CVE-2019-11358 (Medium) detected in jquery-3.3.1.min.js,security vulnerability,"## CVE-2019-11358 - Medium Severity Vulnerability
Vulnerable Library - jquery-3.3.1.min.js
jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.
jQuery before 3.4.0, as used in Drupal, Backdrop CMS, and other products, mishandles jQuery.extend(true, {}, ...) because of Object.prototype pollution. If an unsanitized source object contained an enumerable __proto__ property, it could extend the native Object.prototype.
",0,cve medium detected in jquery min js cve medium severity vulnerability vulnerable library jquery min js javascript library for dom operations library home page a href path to vulnerable library sonar plugin src main resources static jquery min js dependency hierarchy x jquery min js vulnerable library found in head commit a href found in base branch master vulnerability details jquery before as used in drupal backdrop cms and other products mishandles jquery extend true because of object prototype pollution if an unsanitized source object contained an enumerable proto property it could extend the native object prototype publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction required scope changed impact metrics confidentiality impact low integrity impact low availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability false ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails jquery before as used in drupal backdrop cms and other products mishandles jquery extend true because of object prototype pollution if an unsanitized source object contained an enumerable proto property it could extend the native object prototype vulnerabilityurl ,0
1001,12110512947.0,IssuesEvent,2020-04-21 10:32:50,sohaibaslam/learning_site,https://api.github.com/repos/sohaibaslam/learning_site,opened,"Broken Crawlers 21, Apr 2020",crawler broken/unreliable,"1. **abckidsattmall cn(100%)**
1. **abcmart kr(100%)**
1. **abercrombie cn(100%)/hk(100%)/jp(100%)**
1. **americaneagle ca(100%)**
1. **ami cn(100%)/dk(100%)/jp(100%)/kr(100%)/uk(100%)**
1. **anthropologie (100%)/de(100%)/fr(100%)/uk(100%)**
1. **antonioli it(100%)/ru(100%)**
1. **asos (100%)/ae(100%)/au(100%)/ch(100%)/cn(100%)/fr(100%)/hk(100%)/id(100%)/my(100%)/nl(100%)/ph(100%)/pl(100%)/ru(100%)/sa(100%)/sg(100%)/th(100%)/us(100%)/vn(100%)**
1. **babyshop ae(100%)/sa(100%)**
1. **bellerose be(100%)**
1. **bnq uk(100%)**
1. **burlington us(100%)**
1. **cabourn uk(100%)**
1. **central th(100%)**
1. **coach uk(100%)**
1. **conforama fr(100%)/pt(100%)**
1. **converse au(100%)**
1. **cos (100%)/at(100%)/cz(100%)/de(100%)/es(100%)/fi(100%)/fr(100%)/hu(100%)/it(100%)/pl(100%)/sk(100%)**
1. **cotton au(100%)**
1. **daphneattmall cn(100%)**
1. **decathlon in(100%)**
1. **dinos jp(100%)**
1. **elganso nl(100%)**
1. **ellos dk(100%)**
1. **emag ro(100%)**
1. **exact za(100%)**
1. **falabella co(100%)**
1. **fbb in(100%)**
1. **fenwick uk(100%)**
1. **footaction us(100%)**
1. **footlocker be(100%)/de(100%)/dk(100%)/es(100%)/fr(100%)/it(100%)/lu(100%)/nl(100%)/no(100%)/se(100%)/uk(100%)**
1. **forloveandlemons de(100%)**
1. **gstar nz(100%)**
1. **hbx us(100%)**
1. **heine at(100%)**
1. **hermes at(100%)/ca(100%)**
1. **hm de(100%)/hk(100%)/ie(100%)/in(100%)/kw(100%)/mx(100%)/ph(100%)/pl(100%)/sa(100%)/tr(100%)/us(100%)**
1. **hollister cn(100%)/hk(100%)/jp(100%)/tw(100%)**
1. **hugobossattmall cn(100%)**
1. **ikea be(100%)/fr(100%)**
1. **isetan jp(100%)**
1. **justjeans nz(100%)**
1. **khelf br(100%)**
1. **laredouteapi ch(100%)/de(100%)/ru(100%)**
1. **lee at(100%)/be(100%)/de(100%)/dk(100%)/es(100%)/fr(100%)/ie(100%)/it(100%)/nl(100%)/pl(100%)/se(100%)/uk(100%)**
1. **lifestylestores in(100%)**
1. **lncc (100%)/eu(100%)/jp(100%)/kr(100%)/us(100%)**
1. **louisvuitton cn(100%)**
1. **luckybrand us(100%)**
1. **luigibertolli br(100%)**
1. **made ch(100%)/de(100%)/nl(100%)/uk(100%)**
1. **melijoe hk(100%)**
1. **meybodywear eu(100%)**
1. **michaelkors ca(100%)**
1. **monki fr(100%)/hu(100%)/ie(100%)/it(100%)/nl(100%)/pl(100%)/pt(100%)/si(100%)/sk(100%)**
1. **moosejaw us(100%)**
1. **mothercare sa(100%)**
1. **mq se(100%)**
1. **muji de(100%)/fr(100%)/uk(100%)**
1. **mumzworld sa(100%)**
1. **myvishal in(100%)**
1. **netaporter (100%)/ae(100%)/at(100%)/be(100%)/ca(100%)/cn(100%)/de(100%)/es(100%)/fr(100%)/ie(100%)/it(100%)/jp(100%)/kr(100%)/mo(100%)/nl(100%)/pt(100%)/ru(100%)/sa(100%)/tw(100%)/us(100%)**
1. **next ae(100%)/at(100%)/au(100%)/be(100%)/bh(100%)/ch(100%)/dk(100%)/es(100%)/fr(100%)/hk(100%)/ie(100%)/it(100%)/jp(100%)/kr(100%)/kw(100%)/lb(100%)/nl(100%)/no(100%)/nz(100%)/pl(100%)/qa(100%)/ro(100%)/sa(100%)/se(100%)/tr(100%)/us(100%)/za(100%)**
1. **nike kr(100%)**
1. **nudiejeans au(100%)/ca(100%)/de(100%)/it(100%)/jp(100%)/nl(100%)/se(100%)/uk(100%)/us(100%)**
1. **ovs eu(100%)**
1. **parfois al(100%)/ch(100%)/jo(100%)/ma(100%)/mx(100%)/sa(100%)/ve(100%)**
1. **patagonia es(100%)**
1. **popup br(100%)**
1. **prettysecrets in(100%)**
1. **pullandbearattmall cn(100%)**
1. **rakuten us(100%)**
1. **ralphlauren nl(100%)**
1. **runnerspoint de(100%)**
1. **saksfifthavenue mo(100%)**
1. **sandroattmall cn(100%)**
1. **savers uk(100%)**
1. **seedheritage nz(100%)**
1. **selfridges kr(100%)/kw(100%)/sa(100%)/sg(100%)**
1. **shoetique uk(100%)**
1. **shoulder br(100%)**
1. **siehan at(100%)/ch(100%)/de(100%)**
1. **simons ca(100%)**
1. **solebox uk(100%)**
1. **speedo (100%)/au(100%)**
1. **splashfashions ae(100%)/bh(100%)/sa(100%)**
1. **sport2000 de(100%)**
1. **ssense (100%)/cn(100%)/fr(100%)/it(100%)/jp(100%)/ru(100%)/tw(100%)/uk(100%)**
1. **stefaniamode au(100%)/dk(100%)/jp(100%)**
1. **stories at(100%)/dk(100%)/es(100%)/fi(100%)/fr(100%)/ie(100%)/it(100%)/pl(100%)/se(100%)**
1. **stylebop (100%)/au(100%)/ca(100%)/cn(100%)/de(100%)/es(100%)/fr(100%)/hk(100%)/jp(100%)/kr(100%)/mo(100%)/sg(100%)/us(100%)**
1. **surfdome uk(100%)**
1. **surfstitch (100%)**
1. **talbots us(100%)**
1. **tedbaker ca(100%)**
1. **thebay (100%)**
1. **theoutnet jp(100%)**
1. **tods ch(100%)/cn(100%)/gr(100%)/pt(100%)**
1. **tommybahama bh(100%)/de(100%)/in(100%)/om(100%)/ph(100%)/pl(100%)/tr(100%)/uk(100%)/za(100%)**
1. **tommyhilfiger us(100%)**
1. **topbrands ru(100%)**
1. **underarmour in(100%)/za(100%)**
1. **venteprivee de(100%)**
1. **walmart ca(100%)**
1. **warehouse at(100%)/au(100%)/ca(100%)/de(100%)/fr(100%)/ie(100%)/nl(100%)/nz(100%)/se(100%)/us(100%)**
1. **watchshop uk(100%)**
1. **wayfair ca(100%)/de(100%)/uk(100%)**
1. **weekday eu(100%)**
1. **wrangler de(100%)/es(100%)/fr(100%)/se(100%)/uk(100%)**
1. **yepme in(100%)**
1. **yours uk(100%)**
1. **zalandolounge de(100%)**
1. **zara pe(100%)/uy(100%)**
",True,"Broken Crawlers 21, Apr 2020 - 1. **abckidsattmall cn(100%)**
1. **abcmart kr(100%)**
1. **abercrombie cn(100%)/hk(100%)/jp(100%)**
1. **americaneagle ca(100%)**
1. **ami cn(100%)/dk(100%)/jp(100%)/kr(100%)/uk(100%)**
1. **anthropologie (100%)/de(100%)/fr(100%)/uk(100%)**
1. **antonioli it(100%)/ru(100%)**
1. **asos (100%)/ae(100%)/au(100%)/ch(100%)/cn(100%)/fr(100%)/hk(100%)/id(100%)/my(100%)/nl(100%)/ph(100%)/pl(100%)/ru(100%)/sa(100%)/sg(100%)/th(100%)/us(100%)/vn(100%)**
1. **babyshop ae(100%)/sa(100%)**
1. **bellerose be(100%)**
1. **bnq uk(100%)**
1. **burlington us(100%)**
1. **cabourn uk(100%)**
1. **central th(100%)**
1. **coach uk(100%)**
1. **conforama fr(100%)/pt(100%)**
1. **converse au(100%)**
1. **cos (100%)/at(100%)/cz(100%)/de(100%)/es(100%)/fi(100%)/fr(100%)/hu(100%)/it(100%)/pl(100%)/sk(100%)**
1. **cotton au(100%)**
1. **daphneattmall cn(100%)**
1. **decathlon in(100%)**
1. **dinos jp(100%)**
1. **elganso nl(100%)**
1. **ellos dk(100%)**
1. **emag ro(100%)**
1. **exact za(100%)**
1. **falabella co(100%)**
1. **fbb in(100%)**
1. **fenwick uk(100%)**
1. **footaction us(100%)**
1. **footlocker be(100%)/de(100%)/dk(100%)/es(100%)/fr(100%)/it(100%)/lu(100%)/nl(100%)/no(100%)/se(100%)/uk(100%)**
1. **forloveandlemons de(100%)**
1. **gstar nz(100%)**
1. **hbx us(100%)**
1. **heine at(100%)**
1. **hermes at(100%)/ca(100%)**
1. **hm de(100%)/hk(100%)/ie(100%)/in(100%)/kw(100%)/mx(100%)/ph(100%)/pl(100%)/sa(100%)/tr(100%)/us(100%)**
1. **hollister cn(100%)/hk(100%)/jp(100%)/tw(100%)**
1. **hugobossattmall cn(100%)**
1. **ikea be(100%)/fr(100%)**
1. **isetan jp(100%)**
1. **justjeans nz(100%)**
1. **khelf br(100%)**
1. **laredouteapi ch(100%)/de(100%)/ru(100%)**
1. **lee at(100%)/be(100%)/de(100%)/dk(100%)/es(100%)/fr(100%)/ie(100%)/it(100%)/nl(100%)/pl(100%)/se(100%)/uk(100%)**
1. **lifestylestores in(100%)**
1. **lncc (100%)/eu(100%)/jp(100%)/kr(100%)/us(100%)**
1. **louisvuitton cn(100%)**
1. **luckybrand us(100%)**
1. **luigibertolli br(100%)**
1. **made ch(100%)/de(100%)/nl(100%)/uk(100%)**
1. **melijoe hk(100%)**
1. **meybodywear eu(100%)**
1. **michaelkors ca(100%)**
1. **monki fr(100%)/hu(100%)/ie(100%)/it(100%)/nl(100%)/pl(100%)/pt(100%)/si(100%)/sk(100%)**
1. **moosejaw us(100%)**
1. **mothercare sa(100%)**
1. **mq se(100%)**
1. **muji de(100%)/fr(100%)/uk(100%)**
1. **mumzworld sa(100%)**
1. **myvishal in(100%)**
1. **netaporter (100%)/ae(100%)/at(100%)/be(100%)/ca(100%)/cn(100%)/de(100%)/es(100%)/fr(100%)/ie(100%)/it(100%)/jp(100%)/kr(100%)/mo(100%)/nl(100%)/pt(100%)/ru(100%)/sa(100%)/tw(100%)/us(100%)**
1. **next ae(100%)/at(100%)/au(100%)/be(100%)/bh(100%)/ch(100%)/dk(100%)/es(100%)/fr(100%)/hk(100%)/ie(100%)/it(100%)/jp(100%)/kr(100%)/kw(100%)/lb(100%)/nl(100%)/no(100%)/nz(100%)/pl(100%)/qa(100%)/ro(100%)/sa(100%)/se(100%)/tr(100%)/us(100%)/za(100%)**
1. **nike kr(100%)**
1. **nudiejeans au(100%)/ca(100%)/de(100%)/it(100%)/jp(100%)/nl(100%)/se(100%)/uk(100%)/us(100%)**
1. **ovs eu(100%)**
1. **parfois al(100%)/ch(100%)/jo(100%)/ma(100%)/mx(100%)/sa(100%)/ve(100%)**
1. **patagonia es(100%)**
1. **popup br(100%)**
1. **prettysecrets in(100%)**
1. **pullandbearattmall cn(100%)**
1. **rakuten us(100%)**
1. **ralphlauren nl(100%)**
1. **runnerspoint de(100%)**
1. **saksfifthavenue mo(100%)**
1. **sandroattmall cn(100%)**
1. **savers uk(100%)**
1. **seedheritage nz(100%)**
1. **selfridges kr(100%)/kw(100%)/sa(100%)/sg(100%)**
1. **shoetique uk(100%)**
1. **shoulder br(100%)**
1. **siehan at(100%)/ch(100%)/de(100%)**
1. **simons ca(100%)**
1. **solebox uk(100%)**
1. **speedo (100%)/au(100%)**
1. **splashfashions ae(100%)/bh(100%)/sa(100%)**
1. **sport2000 de(100%)**
1. **ssense (100%)/cn(100%)/fr(100%)/it(100%)/jp(100%)/ru(100%)/tw(100%)/uk(100%)**
1. **stefaniamode au(100%)/dk(100%)/jp(100%)**
1. **stories at(100%)/dk(100%)/es(100%)/fi(100%)/fr(100%)/ie(100%)/it(100%)/pl(100%)/se(100%)**
1. **stylebop (100%)/au(100%)/ca(100%)/cn(100%)/de(100%)/es(100%)/fr(100%)/hk(100%)/jp(100%)/kr(100%)/mo(100%)/sg(100%)/us(100%)**
1. **surfdome uk(100%)**
1. **surfstitch (100%)**
1. **talbots us(100%)**
1. **tedbaker ca(100%)**
1. **thebay (100%)**
1. **theoutnet jp(100%)**
1. **tods ch(100%)/cn(100%)/gr(100%)/pt(100%)**
1. **tommybahama bh(100%)/de(100%)/in(100%)/om(100%)/ph(100%)/pl(100%)/tr(100%)/uk(100%)/za(100%)**
1. **tommyhilfiger us(100%)**
1. **topbrands ru(100%)**
1. **underarmour in(100%)/za(100%)**
1. **venteprivee de(100%)**
1. **walmart ca(100%)**
1. **warehouse at(100%)/au(100%)/ca(100%)/de(100%)/fr(100%)/ie(100%)/nl(100%)/nz(100%)/se(100%)/us(100%)**
1. **watchshop uk(100%)**
1. **wayfair ca(100%)/de(100%)/uk(100%)**
1. **weekday eu(100%)**
1. **wrangler de(100%)/es(100%)/fr(100%)/se(100%)/uk(100%)**
1. **yepme in(100%)**
1. **yours uk(100%)**
1. **zalandolounge de(100%)**
1. **zara pe(100%)/uy(100%)**
",1,broken crawlers apr abckidsattmall cn abcmart kr abercrombie cn hk jp americaneagle ca ami cn dk jp kr uk anthropologie de fr uk antonioli it ru asos ae au ch cn fr hk id my nl ph pl ru sa sg th us vn babyshop ae sa bellerose be bnq uk burlington us cabourn uk central th coach uk conforama fr pt converse au cos at cz de es fi fr hu it pl sk cotton au daphneattmall cn decathlon in dinos jp elganso nl ellos dk emag ro exact za falabella co fbb in fenwick uk footaction us footlocker be de dk es fr it lu nl no se uk forloveandlemons de gstar nz hbx us heine at hermes at ca hm de hk ie in kw mx ph pl sa tr us hollister cn hk jp tw hugobossattmall cn ikea be fr isetan jp justjeans nz khelf br laredouteapi ch de ru lee at be de dk es fr ie it nl pl se uk lifestylestores in lncc eu jp kr us louisvuitton cn luckybrand us luigibertolli br made ch de nl uk melijoe hk meybodywear eu michaelkors ca monki fr hu ie it nl pl pt si sk moosejaw us mothercare sa mq se muji de fr uk mumzworld sa myvishal in netaporter ae at be ca cn de es fr ie it jp kr mo nl pt ru sa tw us next ae at au be bh ch dk es fr hk ie it jp kr kw lb nl no nz pl qa ro sa se tr us za nike kr nudiejeans au ca de it jp nl se uk us ovs eu parfois al ch jo ma mx sa ve patagonia es popup br prettysecrets in pullandbearattmall cn rakuten us ralphlauren nl runnerspoint de saksfifthavenue mo sandroattmall cn savers uk seedheritage nz selfridges kr kw sa sg shoetique uk shoulder br siehan at ch de simons ca solebox uk speedo au splashfashions ae bh sa de ssense cn fr it jp ru tw uk stefaniamode au dk jp stories at dk es fi fr ie it pl se stylebop au ca cn de es fr hk jp kr mo sg us surfdome uk surfstitch talbots us tedbaker ca thebay theoutnet jp tods ch cn gr pt tommybahama bh de in om ph pl tr uk za tommyhilfiger us topbrands ru underarmour in za venteprivee de walmart ca warehouse at au ca de fr ie nl nz se us watchshop uk wayfair ca de uk weekday eu wrangler de es fr se uk yepme in yours uk zalandolounge de 
zara pe uy ,1
398820,27214227201.0,IssuesEvent,2023-02-20 19:41:18,Azure/Azure-Functions,https://api.github.com/repos/Azure/Azure-Functions,opened,Blob Output Binding for Immutable Blob Container,documentation,"Hello,
I need to store blobs in an Immutable Blob Container. For this purpose I created a Function App (.NET 6 isolated) with a Blob Output Binding.
Everything worked perfectly until I configured an immutability policy (time-based retention) on the target container. The Function App now gets a 409 Conflict response from blob storage.
From the logs I see that the SDK first fails to find the blob.

After that, it creates an empty blob.

It then tries to update the contents, but gets the error because the container is immutable.

**My question.**
How do I configure the Blob Output Binding to store blobs in an Immutable Blob Container?
Thank you!",1.0,"Blob Output Binding for Immutable Blob Container - Hello,
I need to store blobs in an Immutable Blob Container. For this purpose I created a Function App (.NET 6 isolated) with a Blob Output Binding.
Everything worked perfectly until I configured an immutability policy (time-based retention) on the target container. The Function App now gets a 409 Conflict response from blob storage.
From the logs I see that the SDK first fails to find the blob.

After that, it creates an empty blob.

It then tries to update the contents, but gets the error because the container is immutable.

**My question.**
How do I configure the Blob Output Binding to store blobs in an Immutable Blob Container?
Thank you!",0,blob output binding for immutable blob container hello i need to store blobs to immutable blob container for this purpose i create function app net isolated with blob output binding everything works perfectly until i configured immutable policy time based retention on the target container function app shows conflict response from blob storage from logs i see that sdk failed to find the blob after that it creates the empty blob and then tries to update the contents but gets the error because the container is immutable my question how to configure blob output binding to store blobs in an immutable blob container thank you ,0
136091,30474001323.0,IssuesEvent,2023-07-17 15:16:50,rnmapbox/maps,https://api.github.com/repos/rnmapbox/maps,closed,[Bug]: On Android - onDidFinishLoadingStyle in MapView doesn't get triggered.,error-in-code,"### Mapbox Implementation
Mapbox
### Mapbox Version
default
### Platform
Android
### `@rnmapbox/maps` version
10.0.10
### Standalone component to reproduce
```javascript
import React from 'react';
import {
MapView,
ShapeSource,
LineLayer,
Camera,
} from '@rnmapbox/maps';
const aLine = {
type: 'LineString',
coordinates: [
[-74.00597, 40.71427],
[-74.00697, 40.71527],
],
};
class BugReportExample extends React.Component {
  render() {
    // NOTE: the JSX tags were stripped when this issue body was rendered;
    // the markup below is a plausible reconstruction, not the reporter's
    // exact code. Only the fragment "console.log('foo')}>" survived.
    return (
      <MapView onDidFinishLoadingStyle={() => console.log('foo')}>
        <Camera centerCoordinate={aLine.coordinates[0]} />
        <ShapeSource id="line-source" shape={aLine}>
          <LineLayer id="line-layer" />
        </ShapeSource>
      </MapView>
    );
  }
}
```
### Observed behavior and steps to reproduce
On Android, the `onDidFinishLoadingStyle` callback doesn't get triggered.
### Expected behavior
When the style has loaded, the `onDidFinishLoadingStyle` callback should be triggered.
### Notes / preliminary analysis
On iOS it works fine.
### Additional links and references
_No response_",1.0,"[Bug]: On Android - onDidFinishLoadingStyle in MapView doesn't get triggered. - ### Mapbox Implementation
Mapbox
### Mapbox Version
default
### Platform
Android
### `@rnmapbox/maps` version
10.0.10
### Standalone component to reproduce
```javascript
import React from 'react';
import {
MapView,
ShapeSource,
LineLayer,
Camera,
} from '@rnmapbox/maps';
const aLine = {
type: 'LineString',
coordinates: [
[-74.00597, 40.71427],
[-74.00697, 40.71527],
],
};
class BugReportExample extends React.Component {
  render() {
    // NOTE: the JSX tags were stripped when this issue body was rendered;
    // the markup below is a plausible reconstruction, not the reporter's
    // exact code. Only the fragment "console.log('foo')}>" survived.
    return (
      <MapView onDidFinishLoadingStyle={() => console.log('foo')}>
        <Camera centerCoordinate={aLine.coordinates[0]} />
        <ShapeSource id="line-source" shape={aLine}>
          <LineLayer id="line-layer" />
        </ShapeSource>
      </MapView>
    );
  }
}
```
### Observed behavior and steps to reproduce
On Android, the `onDidFinishLoadingStyle` callback doesn't get triggered.
### Expected behavior
When the style has loaded, the `onDidFinishLoadingStyle` callback should be triggered.
### Notes / preliminary analysis
On iOS it works fine.
### Additional links and references
_No response_",0, on android ondidfinishloadingstyle in mapview doesn t get triggered mapbox implementation mapbox mapbox version default platform android rnmapbox maps version standalone component to reproduce javascript import react from react import mapview shapesource linelayer camera from rnmapbox maps const aline type linestring coordinates class bugreportexample extends react component render return console log foo observed behavior and steps to reproduce on android the func ondidfinishloadingstyle doesn t get triggered expected behavior when styles loaded this func ondidfinishloadingstyle would be triggered notes preliminary analysis on ios works fine additional links and references no response ,0
81556,31018920406.0,IssuesEvent,2023-08-10 02:29:59,openzfs/zfs,https://api.github.com/repos/openzfs/zfs,closed,"Data corruption since generic_file_splice_read -> filemap_splice_read change (6.5 compat, but occurs on 6.4 too)",Type: Defect,"### System information
Type | Version/Name
--- | ---
Distribution Name | Arch
Distribution Version | Rolling release
Kernel Version | 6.4.8, 6.5rc1/2/3/4
Architecture | x86-64
OpenZFS Version | [commit 36261c8](https://github.com/openzfs/zfs/commit/36261c8238df462b214854ccea1df4f060cf0995)
### Describe the problem you're observing
After the recent changes to get OpenZFS compiling/running on 6.5, there appears to be a possible lingering data corruption bug. In the repeatable example below, it reliably inserts a long run of NULL bytes into a file, causing a build to fail (conveniently, the build of ZFS).
My expectation is that the bug probably exists for any kernel where `filemap_splice_read` exists, which has recently replaced `generic_file_splice_read` in other Linux filesystem code.
### Describe how to reproduce the problem
**Again - despite demonstrating the problem with the OpenZFS build, the problem only manifests itself when running on the ZFS branch at the commit listed above. It just so happens that I'm able to use our build to reproduce the bug.**
1. You need to be running ZFS patched up to the commit listed above. I have reproduced this on Kernel 6.4.8 and all 6.5 RC's up to rc4
2. `git clone https://github.com/openzfs/zfs.git`
3. `cd ./zfs`
4. `./autogen.sh`
5. `mkdir -p ../zfs-test`
6. `cd ../zfs-test`
7. `../zfs/configure --with-linux=/usr/src/linux` (or wherever your headers/source tree is)
Eventually, the `configure` will fail with the following message:
```
configure: error:
*** This kernel does not include the required loadable module
*** support!
***
*** To build OpenZFS as a loadable Linux kernel module
*** enable loadable module support by setting
*** `CONFIG_MODULES=y` in the kernel configuration and run
*** `make modules_prepare` in the Linux source tree.
***
*** If you don't intend to enable loadable kernel module
*** support, please compile OpenZFS as a Linux kernel built-in.
***
*** Prepare the Linux source tree by running `make prepare`,
*** use the OpenZFS `--enable-linux-builtin` configure option,
*** copy the OpenZFS sources into the Linux source tree using
*** `./copy-builtin `,
*** set `CONFIG_ZFS=y` in the kernel configuration and compile
*** kernel as usual.
```
I enter the directory of the failing test:
```
cd build/config_modules
```
Looking at the `config_modules.c` file, which is resulting in the failure:
```c
/* confdefs.h */
#define PACKAGE_NAME ""zfs""
#define PACKAGE_TARNAME ""zfs""
#define PACKAGE_VERSION ""2.2.99""
#define PACKAGE_STRING ""zfs 2.2.99""
#define PACKAGE_BUGREPORT """"
#define PACKAGE_URL """"
#define ZFS_META_NAME ""zfs""
#define ZFS_META_VERSION ""2.2.99""
#define SPL_META_VERSION ZFS_META_VERSION
#define ZFS_META_RELEASE ""1""
#define SPL_META_RELEASE ZFS_META_RELEASE
#define ZFS_META_LICENSE ""CDDL""
#define ZFS_META_ALIAS ""zfs-2.2.99-1""
#define SPL_META_ALIAS ZFS_META_ALIAS
#define ZFS_META_AUTHOR ""OpenZFS""
#define ZFS_META_KVER_MIN ""3.10""
#define ZFS_META_KVER_MAX ""6.4""
#define PACKAGE ""zfs""
#define VERSION ""2.2.99""
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
#include
#if !defined(CONFIG_MODULES)
#error CONFIG_MODULES not defined
#endif
int
main (void)
{
;
return 0;
}
```
Curiously, this bug does not manifest frequently and the system largely appears to run stable in many other use cases.
The section of `configure` that writes that file looks like below. Including this snippet here as it might help illuminate what conditions need to be true to trigger the bug:
```bash
cat confdefs.h - <<_ACEOF >build/config_modules/config_modules.c
#include
#if !defined(CONFIG_MODULES)
#error CONFIG_MODULES not defined
#endif
int
main (void)
{
;
return 0;
}
MODULE_DESCRIPTION(""conftest"");
MODULE_AUTHOR(ZFS_META_AUTHOR);
MODULE_VERSION(ZFS_META_VERSION ""-"" ZFS_META_RELEASE);
MODULE_LICENSE(""Dual BSD/GPL"");
_ACEOF
```
### Include any warning/errors/backtraces from the system logs
There are no errors reported to the console or in kernel messages",1.0,"Data corruption since generic_file_splice_read -> filemap_splice_read change (6.5 compat, but occurs on 6.4 too) - ### System information
Type | Version/Name
--- | ---
Distribution Name | Arch
Distribution Version | Rolling release
Kernel Version | 6.4.8, 6.5rc1/2/3/4
Architecture | x86-64
OpenZFS Version | [commit 36261c8](https://github.com/openzfs/zfs/commit/36261c8238df462b214854ccea1df4f060cf0995)
### Describe the problem you're observing
After the recent changes to get OpenZFS compiling/running on 6.5, there appears to be a possible lingering data corruption bug. In the repeatable example below, it reliably inserts a long run of NULL bytes into a file, causing a build to fail (conveniently, the build of ZFS).
My expectation is that the bug probably exists for any kernel where `filemap_splice_read` exists, which has recently replaced `generic_file_splice_read` in other Linux filesystem code.
### Describe how to reproduce the problem
**Again - despite demonstrating the problem with the OpenZFS build, the problem only manifests itself when running on the ZFS branch at the commit listed above. It just so happens that I'm able to use our build to reproduce the bug.**
1. You need to be running ZFS patched up to the commit listed above. I have reproduced this on Kernel 6.4.8 and all 6.5 RC's up to rc4
2. `git clone https://github.com/openzfs/zfs.git`
3. `cd ./zfs`
4. `./autogen.sh`
5. `mkdir -p ../zfs-test`
6. `cd ../zfs-test`
7. `../zfs/configure --with-linux=/usr/src/linux` (or wherever your headers/source tree is)
Eventually, the `configure` will fail with the following message:
```
configure: error:
*** This kernel does not include the required loadable module
*** support!
***
*** To build OpenZFS as a loadable Linux kernel module
*** enable loadable module support by setting
*** `CONFIG_MODULES=y` in the kernel configuration and run
*** `make modules_prepare` in the Linux source tree.
***
*** If you don't intend to enable loadable kernel module
*** support, please compile OpenZFS as a Linux kernel built-in.
***
*** Prepare the Linux source tree by running `make prepare`,
*** use the OpenZFS `--enable-linux-builtin` configure option,
*** copy the OpenZFS sources into the Linux source tree using
*** `./copy-builtin `,
*** set `CONFIG_ZFS=y` in the kernel configuration and compile
*** kernel as usual.
```
I enter the directory of the failing test:
```
cd build/config_modules
```
Looking at the `config_modules.c` file, which is resulting in the failure:
```c
/* confdefs.h */
#define PACKAGE_NAME ""zfs""
#define PACKAGE_TARNAME ""zfs""
#define PACKAGE_VERSION ""2.2.99""
#define PACKAGE_STRING ""zfs 2.2.99""
#define PACKAGE_BUGREPORT """"
#define PACKAGE_URL """"
#define ZFS_META_NAME ""zfs""
#define ZFS_META_VERSION ""2.2.99""
#define SPL_META_VERSION ZFS_META_VERSION
#define ZFS_META_RELEASE ""1""
#define SPL_META_RELEASE ZFS_META_RELEASE
#define ZFS_META_LICENSE ""CDDL""
#define ZFS_META_ALIAS ""zfs-2.2.99-1""
#define SPL_META_ALIAS ZFS_META_ALIAS
#define ZFS_META_AUTHOR ""OpenZFS""
#define ZFS_META_KVER_MIN ""3.10""
#define ZFS_META_KVER_MAX ""6.4""
#define PACKAGE ""zfs""
#define VERSION ""2.2.99""
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
#include
#if !defined(CONFIG_MODULES)
#error CONFIG_MODULES not defined
#endif
int
main (void)
{
;
return 0;
}
```
Curiously, this bug does not manifest frequently and the system largely appears to run stable in many other use cases.
The section of `configure` that writes that file looks like below. Including this snippet here as it might help illuminate what conditions need to be true to trigger the bug:
```bash
cat confdefs.h - <<_ACEOF >build/config_modules/config_modules.c
#include
#if !defined(CONFIG_MODULES)
#error CONFIG_MODULES not defined
#endif
int
main (void)
{
;
return 0;
}
MODULE_DESCRIPTION(""conftest"");
MODULE_AUTHOR(ZFS_META_AUTHOR);
MODULE_VERSION(ZFS_META_VERSION ""-"" ZFS_META_RELEASE);
MODULE_LICENSE(""Dual BSD/GPL"");
_ACEOF
```
### Include any warning/errors/backtraces from the system logs
There are no errors reported to the console or in kernel messages",0,data corruption since generic file splice read filemap splice read change compat but occurs on too system information type version name distribution name arch distribution version rolling release kernel version architecture openzfs version describe the problem you re observing after the recent changes to get openzfs compiling running on there appears to be a possible lingering data corruption bug in the repeatable example below it reliably inserts a long run of null bytes into a file causing a build to fail conveniently the build of zfs my expectation is that the bug probably exists for any kernel where filemap splice read exists which recently has replaced generic file splice read in other linux filesystem code describe how to reproduce the problem again despite demonstrating the problem with the openzfs build the problem only manifests itself when running on the zfs branch at the commit listed above it just so happens that i m able to use our build to reproduce the bug you need to be running zfs patched up to the commit listed above i have reproduced this on kernel and all rc s up to git clone cd zfs autogen sh mkdir p zfs test cd zfs test zfs configure with linux usr src linux or wherever your headers source tree is eventually the configure will fail with the following message configure error this kernel does not include the required loadable module support to build openzfs as a loadable linux kernel module enable loadable module support by setting config modules y in the kernel configuration and run make modules prepare in the linux source tree if you don t intend to enable loadable kernel module support please compile openzfs as a linux kernel built in prepare the linux source tree by running make prepare use the openzfs enable linux builtin configure option copy the openzfs sources into the linux source tree using copy builtin set config zfs y in the kernel configuration and compile kernel 
as usual i enter the directory of the failing test cd build config modules looking at the config modules c file which is resulting in the failure c confdefs h define package name zfs define package tarname zfs define package version define package string zfs define package bugreport define package url define zfs meta name zfs define zfs meta version define spl meta version zfs meta version define zfs meta release define spl meta release zfs meta release define zfs meta license cddl define zfs meta alias zfs define spl meta alias zfs meta alias define zfs meta author openzfs define zfs meta kver min define zfs meta kver max define package zfs define version include if defined config modules error config modules not defined endif int main void return curiously this bug does not manifest frequently and the system largely appears to run stable in many other use cases the section of configure that writes that file looks like below including this snippet here as it might help illuminate what conditions need to be true to trigger the bug bash cat confdefs h build config modules config modules c include if defined config modules error config modules not defined endif int main void return module description conftest module author zfs meta author module version zfs meta version zfs meta release module license dual bsd gpl aceof include any warning errors backtraces from the system logs there are no errors reported to the console or in kernel messages,0
59597,14422030838.0,IssuesEvent,2020-12-05 01:07:46,jgeraigery/pnc,https://api.github.com/repos/jgeraigery/pnc,opened,CVE-2018-21270 (High) detected in stringstream-0.0.5.tgz,security vulnerability,"## CVE-2018-21270 - High Severity Vulnerability
Vulnerable Library - stringstream-0.0.5.tgz
Path to dependency file: pnc/ui/.build-tmp/bower-cache/0dde7294b77dbbb5ad01dbe7838dbaa8/0.0.0-snapshot.4/packages/pnc-dto-types/package.json
Path to vulnerable library: pnc/ui/node_modules/bower/lib/node_modules/stringstream/package.json,pnc/ui/node_modules/bower/lib/node_modules/stringstream/package.json
Versions less than 0.0.6 of the Node.js stringstream module are vulnerable to an out-of-bounds read because of allocation of uninitialized buffers when a number is passed in the input stream (when using Node.js 4.x).
Path to dependency file: pnc/ui/.build-tmp/bower-cache/0dde7294b77dbbb5ad01dbe7838dbaa8/0.0.0-snapshot.4/packages/pnc-dto-types/package.json
Path to vulnerable library: pnc/ui/node_modules/bower/lib/node_modules/stringstream/package.json,pnc/ui/node_modules/bower/lib/node_modules/stringstream/package.json
Versions less than 0.0.6 of the Node.js stringstream module are vulnerable to an out-of-bounds read because of allocation of uninitialized buffers when a number is passed in the input stream (when using Node.js 4.x).
",0,cve high detected in stringstream tgz cve high severity vulnerability vulnerable library stringstream tgz encode and decode streams into string streams library home page a href path to dependency file pnc ui build tmp bower cache snapshot packages pnc dto types package json path to vulnerable library pnc ui node modules bower lib node modules stringstream package json pnc ui node modules bower lib node modules stringstream package json dependency hierarchy npx tgz root library npm tgz request tgz x stringstream tgz vulnerable library vulnerability details versions less than of the node js stringstream module are vulnerable to an out of bounds read because of allocation of uninitialized buffers when a number is passed in the input stream when using node js x publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact low availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution isopenpronvulnerability true ispackagebased true isdefaultbranch true packages vulnerabilityidentifier cve vulnerabilitydetails versions less than of the node js stringstream module are vulnerable to an out of bounds read because of allocation of uninitialized buffers when a number is passed in the input stream when using node js x vulnerabilityurl ,0
513968,14930123301.0,IssuesEvent,2021-01-25 01:59:02,GoogleChrome/lighthouse,https://api.github.com/repos/GoogleChrome/lighthouse,closed,Preload key requests audit still showing linked resources that have been placed in the head with preload parameters,needs-priority pending-close,"#### Provide the steps to reproduce
1. Run LH on https://gogonihon.com/en/
#### What is the current behavior?
Lighthouse audit for preload key requests has 3 fonts recommended to be preloaded.
#### What is the expected behavior?
I would expect that there are no fonts showing in the preload key requests audit, since they've already been placed in the head with preload parameters.
#### Environment Information
* Affected Channels: DevTools - Lighthouse
* Lighthouse version:
* Chrome version: Version 87.0.4280.141 (Official Build) (x86_64)
* Node.js version:
* Operating System: OSX Big Sur 11.1
**Related issues**
I've also tried removing those fonts from my stylesheet, and the issue persists where Lighthouse keeps saying to preload these font resources.",1.0,"Preload key requests audit still showing linked resources that have been placed in the head with preload parameters - #### Provide the steps to reproduce
1. Run LH on https://gogonihon.com/en/
#### What is the current behavior?
Lighthouse audit for preload key requests has 3 fonts recommended to be preloaded.
#### What is the expected behavior?
I would expect that there are no fonts showing in the preload key requests audit, since they've already been placed in the head with preload parameters.
#### Environment Information
* Affected Channels: DevTools - Lighthouse
* Lighthouse version:
* Chrome version: Version 87.0.4280.141 (Official Build) (x86_64)
* Node.js version:
* Operating System: OSX Big Sur 11.1
**Related issues**
I've also tried removing those fonts from my stylesheet, and the issue persists where Lighthouse keeps saying to preload these font resources.",0,preload key requests audit still showing linked resources that have been placed in the head with preload parameters provide the steps to reproduce run lh on what is the current behavior lighthouse audit for preload key requests has fonts recommended to be preloaded what is the expected behavior i would expect that there are no fonts showing in the preload key requests audit since they ve already been place in the head with preload parameters environment information affected channels devtools lighthouse lighthouse version chrome version version official build node js version operating system osx big sur related issues i ve also tried removing those fonts from my stylesheet and the issue persists where lighthouse keeps saying to preload these font resources ,0
222462,7432444549.0,IssuesEvent,2018-03-26 00:53:31,owtf/owtf,https://api.github.com/repos/owtf/owtf,opened,Update plugins and add new tools from Kali Linux.,Enhancement Moderate Fix Priority High help wanted,"The bundled plugins with OWTF have not been updated for a long time. They need to be updated (versions, run scripts in `owtf/scripts`) and newer tools in Kali Linux from https://tools.kali.org/tools-listing should be supported.
",1.0,"Update plugins and add new tools from Kali Linux. - The bundled plugins with OWTF have not been updated for a long time. They need to be updated (versions, run scripts in `owtf/scripts`) and newer tools in Kali Linux from https://tools.kali.org/tools-listing should be supported.
",0,update plugins and add new tools from kali linux the bundled plugins with owtf have not been updated for a long time they need to be updated versions run scripts in owtf scripts and newer tools in kali linux from should be supported ,0
1459,16374070435.0,IssuesEvent,2021-05-15 18:46:01,emmamei/cdkey,https://api.github.com/repos/emmamei/cdkey,closed,poseBufferedList is rebuilt in three places - is this overkill?,bug reliabilityfix simplification,"The buffered variable is filled during the `attach()` event, during the `changed()` event, and during the function `posePageN()` when the poses menu is built.
This seems like overkill, especially as this is supposed to be buffered. There is also the fact that the building process is different in `posePageN()` versus the other locations... Sounds like a function is required.",True,"poseBufferedList is rebuilt in three places - is this overkill? - The buffered variable is filled during the `attach()` event, during the `changed()` event, and during the function `posePageN()` when the poses menu is built.
This seems like overkill, especially as this is supposed to be buffered. There is also the fact that the building process is different in `posePageN()` versus the other locations... Sounds like a function is required.",1,posebufferedlist is rebuilt in three places is this overkill the buffered variable is filled during the attach event during the changed event and during the function posepagen when the poses menu is built this seems like overkill especially as this is supposed to be buffered there is also the fact that the building process is different in posepagen versus the other locations sounds like a function is required ,1
38042,8639913268.0,IssuesEvent,2018-11-23 22:39:15,bridgedotnet/Bridge,https://api.github.com/repos/bridgedotnet/Bridge,closed,Comparer.Default not using System.String.compare,defect in-progress,"Comparer.Default.Compare(str1, str2) gives a different result than str1.CompareTo(str2).
The first case calls Bridge.compare, and seems to perform char sorting.
The second case calls System.String.compare and seems to correctly sort text.
https://deck.net/42fe359ec3524e6024107106ec9c9386
I would expect that Comparer.Default would call System.String.compare as System.String implements IComparer.
",1.0,"Comparer.Default not using System.String.compare - Comparer.Default.Compare(str1, str2) gives a different result the str1CompareTo(str2).
The first case calls Bridge.compare, and seems to perform char sorting.
The second case calls System.String.compare and seems to correctly sort text.
https://deck.net/42fe359ec3524e6024107106ec9c9386
I would expect that Comparer.Default would call System.String.compare as System.String implements IComparer.
",0,comparer default not using system string compare comparer default compare gives a different result the the first case calls bridge compare and seems to perform char sorting the second case calls system string compare and seems to correctly sort text i would expect that comparer default would call system string compare as system string implements icomparer ,0
142785,5477077102.0,IssuesEvent,2017-03-12 03:38:17,NCEAS/eml,https://api.github.com/repos/NCEAS/eml,closed,Data Manager Library: Checks for collapseDelimiter instead of collapseDelimiters,Category: datamanager Component: Bugzilla-Id Priority: Normal Status: Resolved Tracker: Bug,"---
Author Name: **Duane Costa** (Duane Costa)
Original Redmine Issue: 5317, https://projects.ecoinformatics.org/ecoinfo/issues/5317
Original Date: 2011-02-21
Original Assignee: Duane Costa
---
There are two lines in the Data Manager Library source code that contain an apparent bug. The code checks for an EML element named ""collapseDelimiter"" when it should be checking for ""collapseDelimiters"". These lines are at:
src/org/ecoinformatics/datamanager/parser/eml/Eml200Parser.java, line 1204:
elementName.equals(""collapseDelimiter"") &&
src/org/ecoinformatics/datamanager/parser/generic/GenericDataPackageParser.java, line 1278:
elementName.equals(""collapseDelimiter"") &&
In addition, there are a large number of method names, method parameters, instance variables, and local variables throughout the DML code that are named 'collapseDelimiter' when the more appropriate name for these constructs would be 'collapseDelimiters'. Since these are only names, they do not affect the code logic, but it would be good to clean these up and rename them in accordance with the actual EML element name, 'collapseDelimiters'.
",1.0,"Data Manager Library: Checks for collapseDelimiter instead of collapseDelimiters - ---
Author Name: **Duane Costa** (Duane Costa)
Original Redmine Issue: 5317, https://projects.ecoinformatics.org/ecoinfo/issues/5317
Original Date: 2011-02-21
Original Assignee: Duane Costa
---
There are two lines in the Data Manager Library source code that contain an apparent bug. The code checks for an EML element named ""collapseDelimiter"" when it should be checking for ""collapseDelimiters"". These lines are at:
src/org/ecoinformatics/datamanager/parser/eml/Eml200Parser.java, line 1204:
elementName.equals(""collapseDelimiter"") &&
src/org/ecoinformatics/datamanager/parser/generic/GenericDataPackageParser.java, line 1278:
elementName.equals(""collapseDelimiter"") &&
In addition, there are a large number of method names, method parameters, instance variables, and local variables throughout the DML code that are named 'collapseDelimiter' when the more appropriate name for these constructs would be 'collapseDelimiters'. Since these are only names, they do not affect the code logic, but it would be good to clean these up and rename them in accordance with the actual EML element name, 'collapseDelimiters'.
",0,data manager library checks for collapsedelimiter instead of collapsedelimiters author name duane costa duane costa original redmine issue original date original assignee duane costa there are two lines in the data manager library source code that contain an apparent bug the code checks for an eml element named collapsedelimiter when it should be checking for collapsedelimiters these lines are at src org ecoinformatics datamanager parser eml java line elementname equals collapsedelimiter src org ecoinformatics datamanager parser generic genericdatapackageparser java line elementname equals collapsedelimiter in addition there are a large number of method names method parameters instance variables and local variables throughout the dml code that are named collapsedelimiter when the more appropriate name for these constructs would be collapsedelimiters since these are only names they do not affect the code logic but it would be good to clean these up and rename them in accordance with the actual eml element name collapsedelimiters ,0
1042,12486695424.0,IssuesEvent,2020-05-31 04:06:14,jedmund/siero-bot,https://api.github.com/repos/jedmund/siero-bot,closed,Convert Siero to TypeScript,reliability,"Before Siero gets too large and unruly, convert files to TypeScript to get ahead of type errors/improve reliability.",True,"Convert Siero to TypeScript - Before Siero gets too large and unruly, convert files to TypeScript to get ahead of type errors/improve reliability.",1,convert siero to typescript before siero gets too large and unruly convert files to typescript to get ahead of type errors improve reliability ,1
1251,14291936748.0,IssuesEvent,2020-11-23 23:49:03,microsoft/azuredatastudio,https://api.github.com/repos/microsoft/azuredatastudio,closed,ModelView components not being initialized as disabled correctly ,Area - Reliability Bug,"A couple places aren't getting initialized as disabled correctly after #13261
**Problem 1:**
Project radio button should be disabled if there aren't any other projects open (this works correctly in the November release):
https://github.com/microsoft/azuredatastudio/blob/749989cd0b8cea6c00a3d6b0e5137cb512594237/extensions%2Fsql-database-projects%2Fsrc%2Fdialogs%2FaddDatabaseReferenceDialog.ts#L196
Steps to repro:
1. Go to Projects viewlet
2. Create a project by clicking ""Create new"" button
3. Right click on ""Database References"" in project tree and click ""Add database reference""
Expected: Project radio button should be disabled because no other projects can be added as a reference
Actual: Project radio button is enabled

**Problem 2:**
Workspace inputbox should be disabled. It gets disabled if you toggle between the radio buttons
https://github.com/microsoft/azuredatastudio/blob/ddc8c000901dc2b9bafc7be5e91085a2a4b99a88/extensions%2Fdata-workspace%2Fsrc%2Fdialogs%2FdialogBase.ts#L92-L95
Steps to repro:
1. Go to Projects viewlet
2. Click ""Open Existing"" button to open dialog
Expected: workspace inputbox should be disabled
Actual: workspace inputbox is enabled

",True,"ModelView components not being initialized as disabled correctly - A couple places aren't getting initialized as disabled correctly after #13261
**Problem 1:**
Project radio button should be disabled if there aren't any other projects open (this works correctly in the November release):
https://github.com/microsoft/azuredatastudio/blob/749989cd0b8cea6c00a3d6b0e5137cb512594237/extensions%2Fsql-database-projects%2Fsrc%2Fdialogs%2FaddDatabaseReferenceDialog.ts#L196
Steps to repro:
1. Go to Projects viewlet
2. Create a project by clicking ""Create new"" button
3. Right click on ""Database References"" in project tree and click ""Add database reference""
Expected: Project radio button should be disabled because no other projects can be added as a reference
Actual: Project radio button is enabled

**Problem 2:**
Workspace inputbox should be disabled. It gets disabled if you toggle between the radio buttons
https://github.com/microsoft/azuredatastudio/blob/ddc8c000901dc2b9bafc7be5e91085a2a4b99a88/extensions%2Fdata-workspace%2Fsrc%2Fdialogs%2FdialogBase.ts#L92-L95
Steps to repro:
1. Go to Projects viewlet
2. Click ""Open Existing"" button to open dialog
Expected: workspace inputbox should be disabled
Actual: workspace inputbox is enabled

",1,modelview components not being initialized as disabled correctly a couple places aren t getting initialized as disabled correctly after problem project radio button should be disabled if there aren t any other projects open this works correctly in the november release steps to repro go to projects viewlet create a project by clicking create new button right click on database references in project tree and click add database reference expected project radio button should be disabled because no other projects can be added as a reference actual project radio button is enabled problem workspace inputbox should be disabled it gets disabled if you toggle between the radio buttons steps to repro go to projects viewlet click open existing button to open dialog expected workspace inputbox should be disabled actual workspace inputbox is enabled ,1
132148,18526948616.0,IssuesEvent,2021-10-20 21:52:58,Azure/autorest,https://api.github.com/repos/Azure/autorest,closed,Constant ObjectSchema,Modeler design-discussion P1 - Required triage,"If every property in an ObjectSchema is constant, I think it should have a ConstantSchema. See ConstantProduct in Validation: https://github.com/Azure/autorest.testserver/blob/e0d8dcad0f06f45ad6ec4416e45b326a139a63ff/swagger/validation.json#L239",1.0,"Constant ObjectSchema - If every property in an ObjectSchema is constant, I think it should have a ConstantSchema. See ConstantProduct in Validation: https://github.com/Azure/autorest.testserver/blob/e0d8dcad0f06f45ad6ec4416e45b326a139a63ff/swagger/validation.json#L239",0,constant objectschema if every property in an objectschema is constant i think it should have a constantschema see constantproduct in validation ,0
2850,28210656373.0,IssuesEvent,2023-04-05 03:50:46,NVIDIA/spark-rapids,https://api.github.com/repos/NVIDIA/spark-rapids,closed,[FEA] Should we fallback to ARENA if RMM fails to start up an async pool,feature request reliability,"We currently try and detect if the driver/runtime support the async allocator: https://github.com/NVIDIA/spark-rapids/blob/branch-23.02/sql-plugin/src/main/scala/com/nvidia/spark/rapids/RapidsConf.scala#L1966.
That said, it doesn't appear this is a sufficient check: https://github.com/NVIDIA/spark-rapids/discussions/7636. The user reports RMM itself failing to start, which happens after we have decided that ASYNC is OK given the software versions. In the user's case, it looks like it is related to a vGPU setup.
I could see us keeping the current behavior, but perhaps catch the RMM exception and point to a FAQ page that describes each scenario where RMM would fail to start with async, and pointers to the pool config to set ARENA to unblock users. Or I could see us fallback to ARENA automatically, perhaps complaining in the logs (though chances are the log message will be ignored).
Either way, we could do better than letting the RMM exception be thrown on its own. I am wondering if others have preferences on which way to go, automatic fallback or error with pointers on how to proceed.",True,"[FEA] Should we fallback to ARENA if RMM fails to start up an async pool - We currently try and detect if the driver/runtime support the async allocator: https://github.com/NVIDIA/spark-rapids/blob/branch-23.02/sql-plugin/src/main/scala/com/nvidia/spark/rapids/RapidsConf.scala#L1966.
That said, it doesn't appear this is a sufficient check: https://github.com/NVIDIA/spark-rapids/discussions/7636. The user reports RMM itself failing to start, which happens after we have decided that ASYNC is OK given the software versions. In the user's case, it looks like it is related to a vGPU setup.
I could see us keeping the current behavior, but perhaps catch the RMM exception and point to a FAQ page that describes each scenario where RMM would fail to start with async, and pointers to the pool config to set ARENA to unblock users. Or I could see us fallback to ARENA automatically, perhaps complaining in the logs (though chances are the log message will be ignored).
Either way, we could do better than letting the RMM exception be thrown on its own. I am wondering if others have preferences on which way to go, automatic fallback or error with pointers on how to proceed.",1, should we fallback to arena if rmm fails to start up an async pool we currently try and detect if the driver runtime support the async allocator that said it doesn t appear this is a sufficient check the user reports rmm itself failing to start which happens after we have decided that async is ok given the software versions in the user s case it looks like it is related to a vgpu setup i could see us keeping the current behavior but perhaps catch the rmm exception and point to a faq page that describes each scenario where rmm would fail to start with async and pointers to the pool config to set arena to unblock users or i could see us fallback to arena automatically perhaps complaining in the logs though chances are the log message will be ignored either way we could do better than letting the rmm exception be thrown on its own i am wondering if others have preferences on which way to go automatic fallback or error with pointers on how to proceed ,1
597366,18162078309.0,IssuesEvent,2021-09-27 10:47:08,wso2/product-microgateway,https://api.github.com/repos/wso2/product-microgateway,closed,Unwanted error logs when configuring multiple JWT configs,Type/Bug Priority/Normal ballerina-mgw,"### Description:
When configuring multiple JWT configurations in the micro-gw.conf file and invoking the API, the token is validated against each configuration sequentially, printing error logs along the way. However, the request succeeds.
### Steps to reproduce:
- Add multiple JWT configurations to the **micro-gw.conf** file as below.
```
[[jwtTokenConfig]]
issuer = ""https://localhost:9443/oauth2/token""
certificateAlias = ""wso2carbonjwt1""
validateSubscription = false
consumerKeyClaim = ""aud""
[[jwtTokenConfig]]
issuer = ""https://localhost:9444/oauth2/token""
certificateAlias = ""wso2carbonjwt2""
validateSubscription = false
consumerKeyClaim = ""aud""
```
- Generate an access token which includes the issuer https://localhost:9444/oauth2/token
- Invoke the API; the request succeeds.
- However, in the **microgateway.log** file, error logs related to JWT validation against the 1st JWT config are visible.
### Affected Product Version:
3.2.0",1.0,"Unwanted error logs when configuring multiple JWT configs - ### Description:
When configuring multiple JWT configurations in the micro-gw.conf file and invoking the API, the token is validated against each configuration sequentially, printing error logs along the way. However, the request succeeds.
### Steps to reproduce:
- Add multiple JWT configurations to the **micro-gw.conf** file as below.
```
[[jwtTokenConfig]]
issuer = ""https://localhost:9443/oauth2/token""
certificateAlias = ""wso2carbonjwt1""
validateSubscription = false
consumerKeyClaim = ""aud""
[[jwtTokenConfig]]
issuer = ""https://localhost:9444/oauth2/token""
certificateAlias = ""wso2carbonjwt2""
validateSubscription = false
consumerKeyClaim = ""aud""
```
- Generate an access token which includes the issuer https://localhost:9444/oauth2/token
- Invoke the API; the request succeeds.
- However, in the **microgateway.log** file, error logs related to JWT validation against the 1st JWT config are visible.
### Affected Product Version:
3.2.0",0,unwanted error logs when configuring multiple jwt configs description when configuring multiple jwt configurations in the micro gw conf file and invoking the api the token will validate sequentially and printing error logs however the request getting successful steps to reproduce add multiple jwt configurations to the micro gw conf file as below issuer certificatealias validatesubscription false consumerkeyclaim aud issuer certificatealias validatesubscription false consumerkeyclaim aud generate an access token which includes the issuer invoke the api and the request getting successful however in the microgateway log file able to see error logs related to jwt validation by using the jwt configs affected product version ,0
94395,15962369901.0,IssuesEvent,2021-04-16 01:10:00,xinYG/bootstrap-timepicker,https://api.github.com/repos/xinYG/bootstrap-timepicker,reopened,CVE-2019-20920 (High) detected in multiple libraries,security vulnerability,"## CVE-2019-20920 - High Severity Vulnerability
Vulnerable Libraries - opennmsopennms-source-24.1.2-1, nodev4.3.2, nodev4.3.2
Vulnerability Details
Handlebars before 3.0.8 and 4.x before 4.5.3 is vulnerable to Arbitrary Code Execution. The lookup helper fails to properly validate templates, allowing attackers to submit templates that execute arbitrary JavaScript. This can be used to run arbitrary code on a server processing Handlebars templates or in a victim's browser (effectively serving as XSS).
",True,"CVE-2019-20920 (High) detected in multiple libraries - ## CVE-2019-20920 - High Severity Vulnerability
Vulnerable Libraries - opennmsopennms-source-24.1.2-1, nodev4.3.2, nodev4.3.2
Vulnerability Details
Handlebars before 3.0.8 and 4.x before 4.5.3 is vulnerable to Arbitrary Code Execution. The lookup helper fails to properly validate templates, allowing attackers to submit templates that execute arbitrary JavaScript. This can be used to run arbitrary code on a server processing Handlebars templates or in a victim's browser (effectively serving as XSS).
",0,cve high detected in multiple libraries cve high severity vulnerability vulnerable libraries opennmsopennms source vulnerability details handlebars before and x before is vulnerable to arbitrary code execution the lookup helper fails to properly validate templates allowing attackers to submit templates that execute arbitrary javascript this can be used to run arbitrary code on a server processing handlebars templates or in a victim s browser effectively serving as xss publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope changed impact metrics confidentiality impact high integrity impact low availability impact low for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution handlebars isopenpronvulnerability true ispackagebased true isdefaultbranch true packages basebranches vulnerabilityidentifier cve vulnerabilitydetails handlebars before and x before is vulnerable to arbitrary code execution the lookup helper fails to properly validate templates allowing attackers to submit templates that execute arbitrary javascript this can be used to run arbitrary code on a server processing handlebars templates or in a victim browser effectively serving as xss vulnerabilityurl ,0
381,7153290481.0,IssuesEvent,2018-01-26 00:52:03,dotnet/roslyn,https://api.github.com/repos/dotnet/roslyn,closed,Null reference exception on shutdown VS with Interactive window executing,4 - In Review Area-Interactive Interactive-ScriptingIDE Tenet-Reliability,"From VSO: 514822
```
Microsoft.VisualStudio.InteractiveWindow!Microsoft.VisualStudio.InteractiveWindow.InteractiveWindow.UIThread[[Microsoft.VisualStudio.Text.Span,_Microsoft.VisualStudio.Text.Data]]
at Microsoft.VisualStudio.InteractiveWindow!Microsoft.VisualStudio.InteractiveWindow.InteractiveWindow.Microsoft.VisualStudio.InteractiveWindow.IInteractiveWindow.WriteLine in InteractiveWindow.cs
at Microsoft.VisualStudio.InteractiveWindow!Microsoft.VisualStudio.InteractiveWindow.InteractiveWindowWriter.WriteLine in OutputWriter.cs
at mscorlib.ni!System.IO.TextWriter.WriteLine in textwriter.cs
at Microsoft.CodeAnalysis.InteractiveFeatures!Microsoft.CodeAnalysis.Interactive.InteractiveHost.ReportProcessExited in InteractiveHost.cs
at Microsoft.CodeAnalysis.InteractiveFeatures!Microsoft.CodeAnalysis.Interactive.InteractiveHost.OnProcessExited in InteractiveHost.cs
at Microsoft.CodeAnalysis.InteractiveFeatures!Microsoft.CodeAnalysis.Interactive.InteractiveHost+RemoteService+<>c__DisplayClass8_0 in InteractiveHost.RemoteService.cs
```",True,"Null reference exception on shutdown VS with Interactive window executing - From VSO: 514822
```
Microsoft.VisualStudio.InteractiveWindow!Microsoft.VisualStudio.InteractiveWindow.InteractiveWindow.UIThread[[Microsoft.VisualStudio.Text.Span,_Microsoft.VisualStudio.Text.Data]]
at Microsoft.VisualStudio.InteractiveWindow!Microsoft.VisualStudio.InteractiveWindow.InteractiveWindow.Microsoft.VisualStudio.InteractiveWindow.IInteractiveWindow.WriteLine in InteractiveWindow.cs
at Microsoft.VisualStudio.InteractiveWindow!Microsoft.VisualStudio.InteractiveWindow.InteractiveWindowWriter.WriteLine in OutputWriter.cs
at mscorlib.ni!System.IO.TextWriter.WriteLine in textwriter.cs
at Microsoft.CodeAnalysis.InteractiveFeatures!Microsoft.CodeAnalysis.Interactive.InteractiveHost.ReportProcessExited in InteractiveHost.cs
at Microsoft.CodeAnalysis.InteractiveFeatures!Microsoft.CodeAnalysis.Interactive.InteractiveHost.OnProcessExited in InteractiveHost.cs
at Microsoft.CodeAnalysis.InteractiveFeatures!Microsoft.CodeAnalysis.Interactive.InteractiveHost+RemoteService+<>c__DisplayClass8_0 in InteractiveHost.RemoteService.cs
```",1,null reference exception on shutdown vs with interactive window executing from vso microsoft visualstudio interactivewindow microsoft visualstudio interactivewindow interactivewindow uithread at microsoft visualstudio interactivewindow microsoft visualstudio interactivewindow interactivewindow microsoft visualstudio interactivewindow iinteractivewindow writeline in interactivewindow cs at microsoft visualstudio interactivewindow microsoft visualstudio interactivewindow interactivewindowwriter writeline in outputwriter cs at mscorlib ni system io textwriter writeline in textwriter cs at microsoft codeanalysis interactivefeatures microsoft codeanalysis interactive interactivehost reportprocessexited in interactivehost cs at microsoft codeanalysis interactivefeatures microsoft codeanalysis interactive interactivehost onprocessexited in interactivehost cs at microsoft codeanalysis interactivefeatures microsoft codeanalysis interactive interactivehost remoteservice c in interactivehost remoteservice cs ,1
5626,5097417020.0,IssuesEvent,2017-01-03 21:25:31,NLog/NLog,https://api.github.com/repos/NLog/NLog,closed,proposal: support bufferization in user code and provide method that can flush several events at once,discussion feature performance wontfix,"Situation: my user code buffers ""verbose"" tracing messages until a significant system event (e.g. ""interesting input"" or an exception) and then flushes them.
For better performance (as I understand it, each logging operation incurs an expensive ""open destination stream"" operation), it would be nice to have a new `Logger` method like:
` public void Log(IEnumerable logEvents);`
that accepts several events at once; no such method exists today. (Do not worry about real-time datetime, stack trace, and other attributes; if users need them, they can be added as properties.)
It is of course possible to log buffered events at once as one big concatenated text, but that is unattractive: a buffered event's record should not differ much from an unbuffered one (some new custom properties added by the user are OK).
Thank you for your work.",True,"proposal: support bufferization in user code and provide method that can flush several events at once - Situation: my user code buffers ""verbose"" tracing messages until a significant system event (e.g. ""interesting input"" or an exception) and then flushes them.
For better performance (as I understand it, each logging operation incurs an expensive ""open destination stream"" operation), it would be nice to have a new `Logger` method like:
` public void Log(IEnumerable logEvents);`
that accepts several events at once; no such method exists today. (Do not worry about real-time datetime, stack trace, and other attributes; if users need them, they can be added as properties.)
It is of course possible to log buffered events at once as one big concatenated text, but that is unattractive: a buffered event's record should not differ much from an unbuffered one (some new custom properties added by the user are OK).
Thank you for your work.",0,proposal support bufferization in user code and provide method that can flush several events at once situation my user code bufferize verbose tracing messages till the system event e g interesting input or exception and then flush them for better performance as i imagine logging for each logging operation there is expensive open destination stream operation it would be nice to have new logger method like public void log ienumerable logevents that accepts several events now it is absent do not worry about real time datetime stacktrace and other attributes if user would need them there is possibility to add them as properties there is of course possibility to log bufferized events at once as one big concatenated text but this is not very attractive since there is strong feeling that bufferized event s record should no differ to much from not bufferized some new custom properties that user can add by himself is ok thank you for your work ,0
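The buffering-and-batch-flush pattern the proposal describes can be sketched as follows. This is an illustrative Java sketch of the concept only, not NLog's API: `BufferedLogger`, `logBatch`, and the in-memory sink are invented names standing in for a real `Logger` and target.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the proposed batch API: buffer cheap trace events,
// then flush them in one call so the expensive "open destination" cost is
// paid once per batch instead of once per event.
class BufferedLogger {
    private final List<String> buffer = new ArrayList<>();
    private final List<String> sink = new ArrayList<>(); // stand-in for the real target

    void trace(String event) {
        buffer.add(event); // no target I/O yet
    }

    // Analogue of the proposed Log(IEnumerable logEvents): accept several
    // events and write them against a single opened destination.
    void logBatch(List<String> events) {
        sink.addAll(events);
    }

    void flush() {
        logBatch(new ArrayList<>(buffer));
        buffer.clear();
    }

    List<String> sink() { return sink; }
}
```

A trigger event (the "interesting input" or exception) would call `flush()`; everything before it stays in memory.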
1505,16620343934.0,IssuesEvent,2021-06-02 23:21:17,timberio/vector,https://api.github.com/repos/timberio/vector,opened,BatchNotifier should support partial failures.,domain: performance domain: reliability type: enhancement,"Currently, `BatchNotifier` can be attached to multiple events, but it does not report partial failures. This means that even if only one event in a batch fails, the entire batch could be marked as a failure.
In order to not only unlock higher-performance acknowledgement via batching, but to provide maximally correct processing, `BatchNotifier` should be modified/extended to support tracking the status of individual events that have been attached to it.",True,"BatchNotifier should support partial failures. - Currently, `BatchNotifier` can be attached to multiple events, but it does not report partial failures. This means that even if only one event in a batch fails, the entire batch could be marked as a failure.
In order to not only unlock higher-performance acknowledgement via batching, but to provide maximally correct processing, `BatchNotifier` should be modified/extended to support tracking the status of individual events that have been attached to it.",1,batchnotifier should support partial failures currently batchnotifier can be attached to multiple events but it does not report partial failures this means that even if only one event in a batch fails the entire batch could be marked as a failure in order to not only unlock higher performance acknowledgement via batching but to provide maximally correct processing batchnotifier should be modified extended to support tracking the status of individual events that have been attached to it ,1
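The per-event status tracking described above can be sketched generically. Vector's real `BatchNotifier` is Rust and its API differs; the Java class, the `Status` enum, and the method names here are invented for illustration of the idea that a batch outcome should be derived from individual event statuses, not a single all-or-nothing flag.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: each attached event carries its own status; the batch-level result
// is computed from those, so one failed event yields "partial", not "failed".
class BatchNotifier {
    enum Status { PENDING, DELIVERED, FAILED }
    private final Map<String, Status> events = new HashMap<>();

    void attach(String eventId)    { events.put(eventId, Status.PENDING); }
    void delivered(String eventId) { events.put(eventId, Status.DELIVERED); }
    void failed(String eventId)    { events.put(eventId, Status.FAILED); }

    // Derive the batch outcome from individual event statuses.
    String batchStatus() {
        boolean anyFailed = events.containsValue(Status.FAILED);
        boolean anyOk = events.containsValue(Status.DELIVERED);
        if (anyFailed && anyOk) return "partial";
        if (anyFailed) return "failed";
        return "delivered";
    }
}
```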
50204,3006246791.0,IssuesEvent,2015-07-27 09:08:52,Itseez/opencv,https://api.github.com/repos/Itseez/opencv,opened,Add new create() method for Feature2D,auto-transferred category: features2d feature priority: normal,"Transferred from http://code.opencv.org/issues/2333
```
|| Maria Dimashova on 2012-09-05 09:58
|| Priority: Normal
|| Affected: None
|| Category: features2d
|| Tracker: Feature
|| Difficulty: None
|| PR: None
|| Platform: None / None
```
Add new create() method for Feature2D
-----------
```
with arguments Ptr and Ptr.
```
History
-------
##### Alexander Shishkov on 2012-09-07 13:32
```
- Target version deleted (3.0)
- Assignee deleted (Maria Dimashova)
```",1.0,"Add new create() method for Feature2D - Transferred from http://code.opencv.org/issues/2333
```
|| Maria Dimashova on 2012-09-05 09:58
|| Priority: Normal
|| Affected: None
|| Category: features2d
|| Tracker: Feature
|| Difficulty: None
|| PR: None
|| Platform: None / None
```
Add new create() method for Feature2D
-----------
```
with arguments Ptr and Ptr.
```
History
-------
##### Alexander Shishkov on 2012-09-07 13:32
```
- Target version deleted (3.0)
- Assignee deleted (Maria Dimashova)
```",0,add new create method for transferred from maria dimashova on priority normal affected none category tracker feature difficulty none pr none platform none none add new create method for with arguments ptr and ptr history alexander shishkov on target version deleted assignee deleted maria dimashova ,0
547865,16048735449.0,IssuesEvent,2021-04-22 16:25:55,HabitRPG/habitica,https://api.github.com/repos/HabitRPG/habitica,closed,New Achievements and Badges: Checked in X Days in a Row,priority: medium section: Avatar/User Modal status: issue: on hold type: medium level coding,"These new achievements and badges are ready to be implemented! These should be received when a player checks in for the number of days indicated by each badge. The badge images and the names/mouseover text for the user achievement page can be found here:
https://trello.com/c/uxR0r5R0/5-checked-in-x-days-in-a-row-badges",1.0,"New Achievements and Badges: Checked in X Days in a Row - These new achievements and badges are ready to be implemented! These should be received when a player checks in for the number of days indicated by each badge. The badge images and the names/mouseover text for the user achievement page can be found here:
https://trello.com/c/uxR0r5R0/5-checked-in-x-days-in-a-row-badges",0,new achievements and badges checked in x days in a row these new achievements and badges are ready to be implemented these should be received when a player checks in for the number of days indicated by each badge the badge images and the names mouseover text for the user achievement page can be found here ,0
1652,18069028216.0,IssuesEvent,2021-09-20 23:08:16,Azure/azure-sdk-for-java,https://api.github.com/repos/Azure/azure-sdk-for-java,closed,[BUG] Out of memory issues when sending messages to Event Hub,question Event Hubs Client customer-reported pillar-reliability needs-team-attention,"**Describe the bug**
We run two Kubernetes clusters (INT & PROD) hosting a service that sends to four Event Hubs. On INT we get an OutOfMemoryError every few weeks. Our analysis of the heap dump showed that two of the four com.azure.messaging.eventhubs.implementation.EventHubConnectionProcessor instances seem to ""collect"" Nodes in their ConcurrentLinkedDeque. They send the messages to the Event Hubs but do not remove the Nodes from the deque and release the memory.
Those Node instances contain an org.apache.qpid.proton.engine.impl.TransportImpl subscriber for one EventHubConnectionProcessor and a datadog.trace.instrumentation.reactor.core.TracingSubscriber for the other. But those subscribers don't seem to be the problem, since the two other EventHubConnectionProcessor instances have the same subscriber types and show no problems.
Remarkably, the two problematic EventHubConnectionProcessors have the lowest data throughput. The other two on INT and all four on the PROD cluster handle much more data but work fine.
**Stack Trace**
```
java.lang.OutOfMemoryError: Java heap space
at com.azure.core.amqp.implementation.ReactorSender.lambda$send$10(ReactorSender.java:241)
at com.azure.core.amqp.implementation.ReactorSender$$Lambda$1609/328835047.apply(Unknown Source)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:125)
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2397)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:110)
at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:54)
at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber.lambda$onNext$0(TracingSubscriber.java:38)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber$$Lambda$602/1872410525.run(Unknown Source)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber.withActiveSpan(TracingSubscriber.java:60)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber.onNext(TracingSubscriber.java:38)
at reactor.core.publisher.FluxHide$SuppressFuseableSubscriber.onNext(FluxHide.java:136)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1815)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber.lambda$onNext$0(TracingSubscriber.java:38)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber$$Lambda$602/1872410525.run(Unknown Source)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber.withActiveSpan(TracingSubscriber.java:60)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber.onNext(TracingSubscriber.java:38)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1815)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber.lambda$onNext$0(TracingSubscriber.java:38)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber$$Lambda$602/1872410525.run(Unknown Source)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber.withActiveSpan(TracingSubscriber.java:60)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber.onNext(TracingSubscriber.java:38)
at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber.lambda$onNext$0(TracingSubscriber.java:38)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber$$Lambda$602/1872410525.run(Unknown Source)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber.withActiveSpan(TracingSubscriber.java:60)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber.onNext(TracingSubscriber.java:38)
at reactor.core.publisher.FluxFirstWithSignal$FirstEmittingSubscriber.onNext(FluxFirstWithSignal.java:329)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber.lambda$onNext$0(TracingSubscriber.java:38)
```
**Code Snippet**
For working with the Event Hubs we use com.azure.messaging.eventhubs.EventHubProducerAsyncClient.
We use it like this:
```
eventHubClient.send(data).toFuture().whenComplete((msg, ex) -> {
if (ex != null) {
log.error(""Error while sending data"", ex);
messageCounterRegistry.increaseSentFailed(ehClass, data.size());
throw new CompletionException(""Error occurred sending batch of event class: "" + ehClass, ex);
} else {
messageCounterRegistry.increaseSentMessages(ehClass, data.size());
MessageDeduplication.add(trackingIds);
}
});
```
We cannot see any failed sends in our counter registry or logs.
**To Reproduce**
We cannot reproduce the OOM manually, but it happens roughly every 6 weeks.
**Expected behavior**
The two mentioned EventHubConnectionProcessor instances should release the memory by removing the subscribers from their ConcurrentLinkedDeque after sending the messages to the Event Hubs.
**Setup (please complete the following information):**
- OS: Ubuntu 20.04
- Library/Libraries:
com.azure:azure-identity:1.2.5
com.azure:azure-messaging-eventhubs:5.7.0
- Java version: 1.8.0_292 Zulu JDK
- Environment: Container on Kubernetes
- Frameworks: Spring Boot 2.3.9.RELEASE
Sorry, that I can't be more specific, but this is all I see from my point of view.",True,"[BUG] Out of memory issues when sending messages to Event Hub - **Describe the bug**
We run two Kubernetes clusters (INT & PROD) hosting a service that sends to four Event Hubs. On INT we get an OutOfMemoryError every few weeks. Our analysis of the heap dump showed that two of the four com.azure.messaging.eventhubs.implementation.EventHubConnectionProcessor instances seem to ""collect"" Nodes in their ConcurrentLinkedDeque. They send the messages to the Event Hubs but do not remove the Nodes from the deque and release the memory.
Those Node instances contain an org.apache.qpid.proton.engine.impl.TransportImpl subscriber for one EventHubConnectionProcessor and a datadog.trace.instrumentation.reactor.core.TracingSubscriber for the other. But those subscribers don't seem to be the problem, since the two other EventHubConnectionProcessor instances have the same subscriber types and show no problems.
Remarkably, the two problematic EventHubConnectionProcessors have the lowest data throughput. The other two on INT and all four on the PROD cluster handle much more data but work fine.
**Stack Trace**
```
java.lang.OutOfMemoryError: Java heap space
at com.azure.core.amqp.implementation.ReactorSender.lambda$send$10(ReactorSender.java:241)
at com.azure.core.amqp.implementation.ReactorSender$$Lambda$1609/328835047.apply(Unknown Source)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:125)
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2397)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onSubscribe(MonoFlatMap.java:110)
at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:54)
at reactor.core.publisher.InternalMonoOperator.subscribe(InternalMonoOperator.java:64)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:157)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber.lambda$onNext$0(TracingSubscriber.java:38)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber$$Lambda$602/1872410525.run(Unknown Source)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber.withActiveSpan(TracingSubscriber.java:60)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber.onNext(TracingSubscriber.java:38)
at reactor.core.publisher.FluxHide$SuppressFuseableSubscriber.onNext(FluxHide.java:136)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1815)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber.lambda$onNext$0(TracingSubscriber.java:38)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber$$Lambda$602/1872410525.run(Unknown Source)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber.withActiveSpan(TracingSubscriber.java:60)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber.onNext(TracingSubscriber.java:38)
at reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1815)
at reactor.core.publisher.MonoFlatMap$FlatMapInner.onNext(MonoFlatMap.java:249)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber.lambda$onNext$0(TracingSubscriber.java:38)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber$$Lambda$602/1872410525.run(Unknown Source)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber.withActiveSpan(TracingSubscriber.java:60)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber.onNext(TracingSubscriber.java:38)
at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber.lambda$onNext$0(TracingSubscriber.java:38)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber$$Lambda$602/1872410525.run(Unknown Source)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber.withActiveSpan(TracingSubscriber.java:60)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber.onNext(TracingSubscriber.java:38)
at reactor.core.publisher.FluxFirstWithSignal$FirstEmittingSubscriber.onNext(FluxFirstWithSignal.java:329)
at datadog.trace.instrumentation.reactor.core.TracingSubscriber.lambda$onNext$0(TracingSubscriber.java:38)
```
**Code Snippet**
For working with the Event Hubs we use com.azure.messaging.eventhubs.EventHubProducerAsyncClient.
We use it like this:
```
eventHubClient.send(data).toFuture().whenComplete((msg, ex) -> {
if (ex != null) {
log.error(""Error while sending data"", ex);
messageCounterRegistry.increaseSentFailed(ehClass, data.size());
throw new CompletionException(""Error occurred sending batch of event class: "" + ehClass, ex);
} else {
messageCounterRegistry.increaseSentMessages(ehClass, data.size());
MessageDeduplication.add(trackingIds);
}
});
```
We cannot see any failed sends in our counter registry or logs.
**To Reproduce**
We cannot reproduce the OOM manually, but it happens roughly every 6 weeks.
**Expected behavior**
The two mentioned EventHubConnectionProcessor instances should release the memory by removing the subscribers from their ConcurrentLinkedDeque after sending the messages to the Event Hubs.
**Setup (please complete the following information):**
- OS: Ubuntu 20.04
- Library/Libraries:
com.azure:azure-identity:1.2.5
com.azure:azure-messaging-eventhubs:5.7.0
- Java version: 1.8.0_292 Zulu JDK
- Environment: Container on Kubernetes
- Frameworks: Spring Boot 2.3.9.RELEASE
Sorry, that I can't be more specific, but this is all I see from my point of view.",1, out of memory issues when sending messages to event hub describe the bug we run two kubernetes cluster int prod where we have a service which sends on four event hubs on int we get a outofmemory every few weeks our analyzes of the heap dump showed us that two of the four com azure messaging eventhubs implementation eventhubconnectionprocessor seem to collect nodes in their concurrentlinkeddeque they send the messages to the event hubs but do not remove the nodes from the deque and release the memory those node instances contain org apache qpid proton engine impl transportimpl subscriber for one eventhubconnectionprocessor and datadog trace instrumentation reactor core tracingsubscriber for the other but it seems those subscribers aren t the problem since the two other eventhubconnectionprocessor instances have the same subscriber types and there we have no problems with them remarkable is the fact that the two eventhubconnectionprocessor which make problems have the lowest data throughput the other two eventhubconnectionprocessor s on int and all four eventhubconnectionprocessor s on prod cluster have much more data to handle but work fine stack trace java lang outofmemoryerror java heap space at com azure core amqp implementation reactorsender lambda send reactorsender java at com azure core amqp implementation reactorsender lambda apply unknown source at reactor core publisher monoflatmap flatmapmain onnext monoflatmap java at reactor core publisher operators scalarsubscription request operators java at reactor core publisher monoflatmap flatmapmain onsubscribe monoflatmap java at reactor core publisher monojust subscribe monojust java at reactor core publisher internalmonooperator subscribe internalmonooperator java at reactor core publisher monoflatmap flatmapmain onnext monoflatmap java at datadog trace instrumentation reactor core tracingsubscriber lambda onnext 
tracingsubscriber java at datadog trace instrumentation reactor core tracingsubscriber lambda run unknown source at datadog trace instrumentation reactor core tracingsubscriber withactivespan tracingsubscriber java at datadog trace instrumentation reactor core tracingsubscriber onnext tracingsubscriber java at reactor core publisher fluxhide suppressfuseablesubscriber onnext fluxhide java at reactor core publisher operators monosubscriber complete operators java at reactor core publisher monoflatmap flatmapinner onnext monoflatmap java at datadog trace instrumentation reactor core tracingsubscriber lambda onnext tracingsubscriber java at datadog trace instrumentation reactor core tracingsubscriber lambda run unknown source at datadog trace instrumentation reactor core tracingsubscriber withactivespan tracingsubscriber java at datadog trace instrumentation reactor core tracingsubscriber onnext tracingsubscriber java at reactor core publisher operators monosubscriber complete operators java at reactor core publisher monoflatmap flatmapinner onnext monoflatmap java at datadog trace instrumentation reactor core tracingsubscriber lambda onnext tracingsubscriber java at datadog trace instrumentation reactor core tracingsubscriber lambda run unknown source at datadog trace instrumentation reactor core tracingsubscriber withactivespan tracingsubscriber java at datadog trace instrumentation reactor core tracingsubscriber onnext tracingsubscriber java at reactor core publisher fluxmap mapsubscriber onnext fluxmap java at datadog trace instrumentation reactor core tracingsubscriber lambda onnext tracingsubscriber java at datadog trace instrumentation reactor core tracingsubscriber lambda run unknown source at datadog trace instrumentation reactor core tracingsubscriber withactivespan tracingsubscriber java at datadog trace instrumentation reactor core tracingsubscriber onnext tracingsubscriber java at reactor core publisher fluxfirstwithsignal firstemittingsubscriber onnext 
fluxfirstwithsignal java at datadog trace instrumentation reactor core tracingsubscriber lambda onnext tracingsubscriber java code snippet for working with the event hubs we use com azure messaging eventhubs eventhubproducerasyncclient we use it like this eventhubclient send data tofuture whencomplete msg ex if ex null log error error while sending data ex messagecounterregistry increasesentfailed ehclass data size throw new completionexception error occurred sending batch of event class ehclass ex else messagecounterregistry increasesentmessages ehclass data size messagededuplication add trackingids we cannot see any failed sendings in our counter registry or logs to reproduce we cannot reproduce the oom manually but every weeks it happens expected behavior the two mentioned eventhubconnectionprocessor instances should release the memory by removing the subscribers from their concurrentlinkeddeque after sending the messages to the event hubs setup please complete the following information os ubuntu library libraries com azure azure identity com azure azure messaging eventhubs java version zulu jdk environment container on kubernetes frameworks spring boot release sorry that i can t be more specific but this is all i see from my point of view ,1
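The leak the heap dump suggests (completed subscribers never leaving the processor's `ConcurrentLinkedDeque`) and the corresponding fix can be sketched generically. This is not the SDK's actual internals: `ConnectionProcessor`, `subscribe`, and `onNewConnection` are invented names illustrating why low-traffic connections accumulate nodes when entries are only added, never polled.

```java
import java.util.concurrent.ConcurrentLinkedDeque;

// Illustration only: subscribers must be removed from the deque once served,
// otherwise a low-traffic connection accumulates nodes until OOM.
class ConnectionProcessor {
    final ConcurrentLinkedDeque<Runnable> subscribers = new ConcurrentLinkedDeque<>();

    void subscribe(Runnable onConnection) {
        subscribers.add(onConnection);
    }

    void onNewConnection() {
        // Drain-and-remove: each subscriber is served once and then dropped,
        // so the deque cannot grow without bound between connections.
        Runnable s;
        while ((s = subscribers.poll()) != null) {
            s.run();
        }
    }
}
```

The bug report's symptom corresponds to `subscribers.add(...)` without the matching `poll()`/removal on completion.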
2426,25293228084.0,IssuesEvent,2022-11-17 03:11:01,medic/cht-roadmap,https://api.github.com/repos/medic/cht-roadmap,closed,Server-side reliability under load,strat: Make server components reliable under load,"**Overview**
CHT apps currently suffer from performance issues when operating with thousands of users. CHT apps should be able to scale to thousands of users and remain responsive.
Work on this initiative will come together as CHT 4.0 and involve a server-side architecture shift to architecture v3.
The tasks for this effort can be seen on the [Arch v3 board](https://github.com/orgs/medic/projects/103/views/1).",True,"Server-side reliability under load - **Overview**
CHT apps currently suffer from performance issues when operating with thousands of users. CHT apps should be able to scale to thousands of users and remain responsive.
Work on this initiative will come together as CHT 4.0 and involve a server-side architecture shift to architecture v3.
The tasks for this effort can be seen on the [Arch v3 board](https://github.com/orgs/medic/projects/103/views/1).",1,server side reliability under load overview cht apps currently suffer from performance issues when operating with thousands of users cht apps should be able to scale to thousands of users and remain responsive work on this initiative will come together as cht and involve a server side architecture shift to architecture the tasks for this effort can be seen on the ,1
7243,6802731957.0,IssuesEvent,2017-11-02 21:15:32,jupyter-incubator/enterprise_gateway,https://api.github.com/repos/jupyter-incubator/enterprise_gateway,closed,Address scalability issues around ssh tunneling,Enhanced Security Runtime,"Since each tunneled port requires an ssh process, this might introduce scalability issues with max-process limits, etc.
Here's some output of a system running two kernels over tunneled ports. Note that the 6th ssh process for each kernel (of 12 total) pertains to the newly introduced ""gateway communication port"" used for signalling (interrupt) kernels (and possibly other things).
```
elyra 5577 1 0 10:00 pts/0 00:00:01 /opt/anaconda2/bin/python /opt/anaconda2/bin/jupyter-enterprisegateway --ip=0.0.0.0 --port=8888 --port_retries=0 --log-level=DEBUG
elyra 5595 1 0 10:00 ? 00:00:00 /usr/bin/ssh -f -S none -L 127.0.0.1:60629:172.16.187.221:54073 172.16.187.221 sleep 9223372036854775807
elyra 5598 1 0 10:00 ? 00:00:00 /usr/bin/ssh -f -S none -L 127.0.0.1:44838:172.16.187.221:47418 172.16.187.221 sleep 9223372036854775807
elyra 5601 1 0 10:00 ? 00:00:00 /usr/bin/ssh -f -S none -L 127.0.0.1:38697:172.16.187.221:33526 172.16.187.221 sleep 9223372036854775807
elyra 5604 1 0 10:00 ? 00:00:00 /usr/bin/ssh -f -S none -L 127.0.0.1:36525:172.16.187.221:43768 172.16.187.221 sleep 9223372036854775807
elyra 5607 1 0 10:00 ? 00:00:00 /usr/bin/ssh -f -S none -L 127.0.0.1:58505:172.16.187.221:50969 172.16.187.221 sleep 9223372036854775807
elyra 5612 1 0 10:00 ? 00:00:00 /usr/bin/ssh -f -S none -L 127.0.0.1:57346:172.16.187.221:49737 172.16.187.221 sleep 9223372036854775807
elyra 6608 1 0 10:04 ? 00:00:00 /usr/bin/ssh -f -S none -L 127.0.0.1:39107:172.16.187.129:37925 172.16.187.129 sleep 9223372036854775807
elyra 6611 1 0 10:04 ? 00:00:00 /usr/bin/ssh -f -S none -L 127.0.0.1:58035:172.16.187.129:49791 172.16.187.129 sleep 9223372036854775807
elyra 6614 1 0 10:04 ? 00:00:00 /usr/bin/ssh -f -S none -L 127.0.0.1:41467:172.16.187.129:50190 172.16.187.129 sleep 9223372036854775807
elyra 6617 1 0 10:04 ? 00:00:00 /usr/bin/ssh -f -S none -L 127.0.0.1:47796:172.16.187.129:44118 172.16.187.129 sleep 9223372036854775807
elyra 6620 1 0 10:04 ? 00:00:00 /usr/bin/ssh -f -S none -L 127.0.0.1:40475:172.16.187.129:42303 172.16.187.129 sleep 9223372036854775807
elyra 6626 1 0 10:04 ? 00:00:00 /usr/bin/ssh -f -S none -L 127.0.0.1:34658:172.16.187.129:48502 172.16.187.129 sleep 9223372036854775807
```
In addition, it was determined that the ssh processes are not terminated when the kernel is shut down. This will definitely trigger scalability issues, so we need to address the shutdown of the ssh processes at a minimum.",True,"Address scalability issues around ssh tunneling - Since each tunneled port requires an ssh process, this might introduce scalability issues with max-process limits, etc.
Here's some output of a system running two kernels over tunneled ports. Note that the 6th ssh process for each kernel (of 12 total) pertains to the newly introduced ""gateway communication port"" used for signalling (interrupt) kernels (and possibly other things).
```
elyra 5577 1 0 10:00 pts/0 00:00:01 /opt/anaconda2/bin/python /opt/anaconda2/bin/jupyter-enterprisegateway --ip=0.0.0.0 --port=8888 --port_retries=0 --log-level=DEBUG
elyra 5595 1 0 10:00 ? 00:00:00 /usr/bin/ssh -f -S none -L 127.0.0.1:60629:172.16.187.221:54073 172.16.187.221 sleep 9223372036854775807
elyra 5598 1 0 10:00 ? 00:00:00 /usr/bin/ssh -f -S none -L 127.0.0.1:44838:172.16.187.221:47418 172.16.187.221 sleep 9223372036854775807
elyra 5601 1 0 10:00 ? 00:00:00 /usr/bin/ssh -f -S none -L 127.0.0.1:38697:172.16.187.221:33526 172.16.187.221 sleep 9223372036854775807
elyra 5604 1 0 10:00 ? 00:00:00 /usr/bin/ssh -f -S none -L 127.0.0.1:36525:172.16.187.221:43768 172.16.187.221 sleep 9223372036854775807
elyra 5607 1 0 10:00 ? 00:00:00 /usr/bin/ssh -f -S none -L 127.0.0.1:58505:172.16.187.221:50969 172.16.187.221 sleep 9223372036854775807
elyra 5612 1 0 10:00 ? 00:00:00 /usr/bin/ssh -f -S none -L 127.0.0.1:57346:172.16.187.221:49737 172.16.187.221 sleep 9223372036854775807
elyra 6608 1 0 10:04 ? 00:00:00 /usr/bin/ssh -f -S none -L 127.0.0.1:39107:172.16.187.129:37925 172.16.187.129 sleep 9223372036854775807
elyra 6611 1 0 10:04 ? 00:00:00 /usr/bin/ssh -f -S none -L 127.0.0.1:58035:172.16.187.129:49791 172.16.187.129 sleep 9223372036854775807
elyra 6614 1 0 10:04 ? 00:00:00 /usr/bin/ssh -f -S none -L 127.0.0.1:41467:172.16.187.129:50190 172.16.187.129 sleep 9223372036854775807
elyra 6617 1 0 10:04 ? 00:00:00 /usr/bin/ssh -f -S none -L 127.0.0.1:47796:172.16.187.129:44118 172.16.187.129 sleep 9223372036854775807
elyra 6620 1 0 10:04 ? 00:00:00 /usr/bin/ssh -f -S none -L 127.0.0.1:40475:172.16.187.129:42303 172.16.187.129 sleep 9223372036854775807
elyra 6626 1 0 10:04 ? 00:00:00 /usr/bin/ssh -f -S none -L 127.0.0.1:34658:172.16.187.129:48502 172.16.187.129 sleep 9223372036854775807
```
In addition, it was determined that the ssh processes are not terminated when the kernel is shutdown. This will definitely trigger scalability issues, so we need to address the shutdown of the ssh processes at a minimum.",0,address scalability issues around ssh tunneling since each tunneled port requires an ssh process this might introduce scalability issues with max process limits etc here s some output of a system running two kernels over tunneled ports note that the of ssh process for each pertains to the newly introduced gateway communication port used for signalling interrupt kernels and possibly other things elyra pts opt bin python opt bin jupyter enterprisegateway ip port port retries log level debug elyra usr bin ssh f s none l sleep elyra usr bin ssh f s none l sleep elyra usr bin ssh f s none l sleep elyra usr bin ssh f s none l sleep elyra usr bin ssh f s none l sleep elyra usr bin ssh f s none l sleep elyra usr bin ssh f s none l sleep elyra usr bin ssh f s none l sleep elyra usr bin ssh f s none l sleep elyra usr bin ssh f s none l sleep elyra usr bin ssh f s none l sleep elyra usr bin ssh f s none l sleep in addition it was determined that the ssh processes are not terminated when the kernel is shutdown this will definitely trigger scalability issues so we need to address the shutdown of the ssh processes at a minimum ,0
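At minimum, the shutdown fix means the gateway must track the tunnel processes it spawns per kernel and terminate them when that kernel stops. A hedged Java sketch of that lifecycle (`TunnelSet` and its methods are invented; `sleep 60` stands in for the real `ssh -f -S none -L …` command, which should not be run in an example):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: one TunnelSet per kernel; shutdown() destroys every
// tunnel process so they cannot pile up against max-process limits.
class TunnelSet {
    private final List<Process> tunnels = new ArrayList<>();

    Process open(String... command) {
        try {
            Process p = new ProcessBuilder(command).start();
            tunnels.add(p);
            return p;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    void shutdown() {
        for (Process p : tunnels) {
            p.destroy();
            try {
                p.waitFor(); // reap the child so nothing lingers after kernel shutdown
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        tunnels.clear();
    }
}
```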
534295,15613867214.0,IssuesEvent,2021-03-19 17:00:22,ushibutatory/umamusume-birthdays,https://api.github.com/repos/ushibutatory/umamusume-birthdays,closed,[WARN] you don't need @types/moment installed,Priority/Middle Type/Bug,"https://github.com/ushibutatory/umamusume-birthdays/runs/2150277730?check_suite_focus=true#step:3:65
Did this warning show up while I was developing locally...? I may have missed it.
In any case, it looks like an unnecessary package has been installed.",1.0,"[WARN] you don't need @types/moment installed - https://github.com/ushibutatory/umamusume-birthdays/runs/2150277730?check_suite_focus=true#step:3:65
Did this warning show up while I was developing locally...? I may have missed it.
In any case, it looks like an unnecessary package has been installed.",0, you don t need types moment installed did this warning show up while i was developing locally i may have missed it in any case it looks like an unnecessary package has been installed ,0
560,8583878792.0,IssuesEvent,2018-11-13 21:02:38,Microsoft/VFSForGit,https://api.github.com/repos/Microsoft/VFSForGit,closed,"""System.ArgumentException: Stream was not readable"" is sometimes thrown from RetryableReadToEnd",MountFailure MountReliability,"We've had 80+ reports of this in the last 30 days. The exception is being thrown because the stream is returning `false` for `CanRead`, but at this time it's unclear how\why the stream is ending up in this state.
*Error*
```
System.ArgumentException: Stream was not readable.\r\n
at System.IO.StreamReader..ctor(Stream stream, Encoding encoding, Boolean detectEncodingFromByteOrderMarks, Int32 bufferSize, Boolean leaveOpen)\r\n
at System.IO.StreamReader..ctor(Stream stream)\r\n
at GVFS.Common.Http.GitEndPointResponseData.RetryableReadToEnd()\r\n
at GVFS.Common.Http.GitObjectsHttpRequestor.<>c__DisplayClass8_0.b__0(Int32 tryCount)\r\n
at GVFS.Common.RetryWrapper`1.Invoke(Func`2 toInvoke)\r\n
at GVFS.Common.Http.GitObjectsHttpRequestor.QueryForFileSizes(IEnumerable`1 objectIds, CancellationToken cancellationToken)\r\n
at GVFS.Virtualization.Projection.GitIndexProjection.FileOrFolderData.PopulateSizesFromRemote(ITracer tracer, GVFSGitObjects gitObjects, BlobSizesConnection blobSizesConnection, HashSet`1 missingShas, List`1 childrenMissingSizes, CancellationToken cancellationToken)\r\n
at GVFS.Virtualization.Projection.GitIndexProjection.FileOrFolderData.FolderOnly_PopulateSizes(ITracer tracer, GVFSGitObjects gitObjects, BlobSizesConnection blobSizesConnection, Dictionary`2 availableSizes, CancellationToken cancellationToken)\r\n
at GVFS.Virtualization.Projection.GitIndexProjection.GetProjectedItems(CancellationToken cancellationToken, BlobSizesConnection blobSizesConnection, String folderPath)\r\n
at GVFS.Windows.WindowsFileSystemVirtualizer.StartDirectoryEnumerationAsyncHandler(CancellationToken cancellationToken, BlobSizesConnection blobSizesConnection, Int32 commandId, Guid enumerationId, String virtualPath) in E:\\A\\_work\\45\\s\\GVFS\\GVFS.Windows\\WindowsFileSystemVirtualizer.cs:line 429"",
""ErrorMessage"":""StartDirectoryEnumerationAsyncHandler caught unhandled exception, exiting process""
```
",True,"""System.ArgumentException: Stream was not readable"" is sometimes thrown from RetryableReadToEnd - We've had 80+ reports of this in the last 30 days. The exception is being thrown because the stream is returning `false` for `CanRead`, but at this time it's unclear how\why the stream is ending up in this state.
*Error*
```
System.ArgumentException: Stream was not readable.\r\n
at System.IO.StreamReader..ctor(Stream stream, Encoding encoding, Boolean detectEncodingFromByteOrderMarks, Int32 bufferSize, Boolean leaveOpen)\r\n
at System.IO.StreamReader..ctor(Stream stream)\r\n
at GVFS.Common.Http.GitEndPointResponseData.RetryableReadToEnd()\r\n
at GVFS.Common.Http.GitObjectsHttpRequestor.<>c__DisplayClass8_0.b__0(Int32 tryCount)\r\n
at GVFS.Common.RetryWrapper`1.Invoke(Func`2 toInvoke)\r\n
at GVFS.Common.Http.GitObjectsHttpRequestor.QueryForFileSizes(IEnumerable`1 objectIds, CancellationToken cancellationToken)\r\n
at GVFS.Virtualization.Projection.GitIndexProjection.FileOrFolderData.PopulateSizesFromRemote(ITracer tracer, GVFSGitObjects gitObjects, BlobSizesConnection blobSizesConnection, HashSet`1 missingShas, List`1 childrenMissingSizes, CancellationToken cancellationToken)\r\n
at GVFS.Virtualization.Projection.GitIndexProjection.FileOrFolderData.FolderOnly_PopulateSizes(ITracer tracer, GVFSGitObjects gitObjects, BlobSizesConnection blobSizesConnection, Dictionary`2 availableSizes, CancellationToken cancellationToken)\r\n
at GVFS.Virtualization.Projection.GitIndexProjection.GetProjectedItems(CancellationToken cancellationToken, BlobSizesConnection blobSizesConnection, String folderPath)\r\n
at GVFS.Windows.WindowsFileSystemVirtualizer.StartDirectoryEnumerationAsyncHandler(CancellationToken cancellationToken, BlobSizesConnection blobSizesConnection, Int32 commandId, Guid enumerationId, String virtualPath) in E:\\A\\_work\\45\\s\\GVFS\\GVFS.Windows\\WindowsFileSystemVirtualizer.cs:line 429"",
""ErrorMessage"":""StartDirectoryEnumerationAsyncHandler caught unhandled exception, exiting process""
```
",1, system argumentexception stream was not readable is sometimes thrown from retryablereadtoend we ve had reports of this in the last days the exception is being thrown because the stream is returning false for canread but at this time it s unclear how why the stream is ending up in this state error system argumentexception stream was not readable r n at system io streamreader ctor stream stream encoding encoding boolean detectencodingfrombyteordermarks buffersize boolean leaveopen r n at system io streamreader ctor stream stream r n at gvfs common http gitendpointresponsedata retryablereadtoend r n at gvfs common http gitobjectshttprequestor c b trycount r n at gvfs common retrywrapper invoke func toinvoke r n at gvfs common http gitobjectshttprequestor queryforfilesizes ienumerable objectids cancellationtoken cancellationtoken r n at gvfs virtualization projection gitindexprojection fileorfolderdata populatesizesfromremote itracer tracer gvfsgitobjects gitobjects blobsizesconnection blobsizesconnection hashset missingshas list childrenmissingsizes cancellationtoken cancellationtoken r n at gvfs virtualization projection gitindexprojection fileorfolderdata folderonly populatesizes itracer tracer gvfsgitobjects gitobjects blobsizesconnection blobsizesconnection dictionary availablesizes cancellationtoken cancellationtoken r n at gvfs virtualization projection gitindexprojection getprojecteditems cancellationtoken cancellationtoken blobsizesconnection blobsizesconnection string folderpath r n at gvfs windows windowsfilesystemvirtualizer startdirectoryenumerationasynchandler cancellationtoken cancellationtoken blobsizesconnection blobsizesconnection commandid guid enumerationid string virtualpath in e a work s gvfs gvfs windows windowsfilesystemvirtualizer cs line errormessage startdirectoryenumerationasynchandler caught unhandled exception exiting process ,1
129358,18091246886.0,IssuesEvent,2021-09-22 02:01:19,atlslscsrv-app/upgraded-waddle,https://api.github.com/repos/atlslscsrv-app/upgraded-waddle,closed,CVE-2017-16042 (High) detected in growl-1.9.2.tgz - autoclosed,security vulnerability,"## CVE-2017-16042 - High Severity Vulnerability
Vulnerable Library - growl-1.9.2.tgz
Growl adds growl notification support to nodejs. Growl before 1.10.2 does not properly sanitize input before passing it to exec, allowing for arbitrary command execution.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",True,"CVE-2017-16042 (High) detected in growl-1.9.2.tgz - autoclosed - ## CVE-2017-16042 - High Severity Vulnerability
Vulnerable Library - growl-1.9.2.tgz
Growl adds growl notification support to nodejs. Growl before 1.10.2 does not properly sanitize input before passing it to exec, allowing for arbitrary command execution.
***
Step up your Open Source Security Game with WhiteSource [here](https://www.whitesourcesoftware.com/full_solution_bolt_github)",0,cve high detected in growl tgz autoclosed cve high severity vulnerability vulnerable library growl tgz growl unobtrusive notifications library home page a href path to dependency file upgraded waddle node modules native node modules forked node pty package json path to vulnerable library upgraded waddle node modules native node modules forked node pty node modules growl package json dependency hierarchy mocha tgz root library x growl tgz vulnerable library found in head commit a href found in base branch master vulnerability details growl adds growl notification support to nodejs growl before does not properly sanitize input before passing it to exec allowing for arbitrary command execution publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity low privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact high availability impact high for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution step up your open source security game with whitesource ,0
1558,17029044781.0,IssuesEvent,2021-07-04 06:55:54,FoundationDB/fdb-kubernetes-operator,https://api.github.com/repos/FoundationDB/fdb-kubernetes-operator,opened,Make use of RetryOnConflict,good first issue reliability,"The operator sometimes failed to update the status of a resource because another system has modified it. Instead of aborting and restarting the reconcile loop we should make use of the [RetryOnConflict](https://pkg.go.dev/k8s.io/client-go/util/retry?utm_source=godoc#RetryOnConflict) method provided by the Kubernetes client. That allows to retry an operation if a conflict was encountered. Since we only update the status of the `FoundationDBCluster` a retry with fetching the latest ""version"" of the spec should be fine.",True,"Make use of RetryOnConflict - The operator sometimes failed to update the status of a resource because another system has modified it. Instead of aborting and restarting the reconcile loop we should make use of the [RetryOnConflict](https://pkg.go.dev/k8s.io/client-go/util/retry?utm_source=godoc#RetryOnConflict) method provided by the Kubernetes client. That allows to retry an operation if a conflict was encountered. Since we only update the status of the `FoundationDBCluster` a retry with fetching the latest ""version"" of the spec should be fine.",1,make use of retryonconflict the operator sometimes failed to update the status of a resource because another system has modified it instead of aborting and restarting the reconcile loop we should make use of the method provided by the kubernetes client that allows to retry an operation if a conflict was encountered since we only update the status of the foundationdbcluster a retry with fetching the latest version of the spec should be fine ,1
454,7621116133.0,IssuesEvent,2018-05-03 07:16:13,LiskHQ/lisk,https://api.github.com/repos/LiskHQ/lisk,closed,Remove snapshotting logic from rounds module,*medium :hammer: reliability,"### Expected behavior
We shouldn't have any logic related with snapshotting process in rounds module, as it's wrong place for doing that.
### Actual behavior
We have snapshotting logic in rounds module, for example here:
https://github.com/LiskHQ/lisk/blob/982b858a195f248e1e21c298968f9a1de5d74943/modules/rounds.js#L242-L246
### Steps to reproduce
N/A
### Which version(s) does this affect? (Environment, OS, etc...)
1.0.0, 0.9.x",True,"Remove snapshotting logic from rounds module - ### Expected behavior
We shouldn't have any logic related with snapshotting process in rounds module, as it's wrong place for doing that.
### Actual behavior
We have snapshotting logic in rounds module, for example here:
https://github.com/LiskHQ/lisk/blob/982b858a195f248e1e21c298968f9a1de5d74943/modules/rounds.js#L242-L246
### Steps to reproduce
N/A
### Which version(s) does this affect? (Environment, OS, etc...)
1.0.0, 0.9.x",1,remove snapshotting logic from rounds module expected behavior we shouldn t have any logic related with snapshotting process in rounds module as it s wrong place for doing that actual behavior we have snapshotting logic in rounds module for example here steps to reproduce n a which version s does this affect environment os etc x,1
1223,14099499333.0,IssuesEvent,2020-11-06 01:33:16,FoundationDB/fdb-kubernetes-operator,https://api.github.com/repos/FoundationDB/fdb-kubernetes-operator,closed,Preventing connection string selection from flapping in the UpdateStatus method,reliability,"When the seedConnectionString in the spec is different from the connectionString in the status, we try both to see which one works. There are circumstances where they could be different but both could work, such as when the seedConnectionString is out of date but references addresses of processes that are still in the cluster. In this case, we sometimes pick the seedConnectionString and set that as the connection string in the status. This causes us to try and update the cluster file in the pods, then realize the connection string is out of date and change it back, flapping back and forth. We should make sure the connection string we set in UpdateStatus is always the latest one.",True,"Preventing connection string selection from flapping in the UpdateStatus method - When the seedConnectionString in the spec is different from the connectionString in the status, we try both to see which one works. There are circumstances where they could be different but both could work, such as when the seedConnectionString is out of date but references addresses of processes that are still in the cluster. In this case, we sometimes pick the seedConnectionString and set that as the connection string in the status. This causes us to try and update the cluster file in the pods, then realize the connection string is out of date and change it back, flapping back and forth. 
We should make sure the connection string we set in UpdateStatus is always the latest one.",1,preventing connection string selection from flapping in the updatestatus method when the seedconnectionstring in the spec is different from the connectionstring in the status we try both to see which one works there are circumstances where they could be different but both could work such as when the seedconnectionstring is out of date but references addresses of processes that are still in the cluster in this case we sometimes pick the seedconnectionstring and set that as the connection string in the status this causes us to try and update the cluster file in the pods then realize the connection string is out of date and change it back flapping back and forth we should make sure the connection string we set in updatestatus is always the latest one ,1
2138,23686217706.0,IssuesEvent,2022-08-29 06:35:44,adoptium/infrastructure,https://api.github.com/repos/adoptium/infrastructure,closed,"""Ansible Playbook / macOS"" github check failing for all PRs",reliability,"Ref: https://github.com/adoptium/infrastructure/runs/7750861675?check_suite_focus=true
```
TASK [Common : Install Build Tool Casks] ***************************************
failed: [localhost] (item=adoptopenjdk10) => {""ansible_loop_var"": ""item"", ""changed"": false, ""item"": ""adoptopenjdk10"", ""msg"": ""Error: Calling Cask::DSL::Version#before_colon is disabled! Use Cask::DSL::Version#csv instead.\nPlease report this issue to the adoptopenjdk/openjdk tap (not Homebrew/brew or Homebrew/core), or even better, submit a PR to fix it:\n /usr/local/Homebrew/Library/Taps/adoptopenjdk/homebrew-openjdk/Casks/adoptopenjdk10.rb:14""}
changed: [localhost] => (item=packages)
```",True,"""Ansible Playbook / macOS"" github check failing for all PRs - Ref: https://github.com/adoptium/infrastructure/runs/7750861675?check_suite_focus=true
```
TASK [Common : Install Build Tool Casks] ***************************************
failed: [localhost] (item=adoptopenjdk10) => {""ansible_loop_var"": ""item"", ""changed"": false, ""item"": ""adoptopenjdk10"", ""msg"": ""Error: Calling Cask::DSL::Version#before_colon is disabled! Use Cask::DSL::Version#csv instead.\nPlease report this issue to the adoptopenjdk/openjdk tap (not Homebrew/brew or Homebrew/core), or even better, submit a PR to fix it:\n /usr/local/Homebrew/Library/Taps/adoptopenjdk/homebrew-openjdk/Casks/adoptopenjdk10.rb:14""}
changed: [localhost] => (item=packages)
```",1, ansible playbook macos github check failing for all prs ref task failed item ansible loop var item changed false item msg error calling cask dsl version before colon is disabled use cask dsl version csv instead nplease report this issue to the adoptopenjdk openjdk tap not homebrew brew or homebrew core or even better submit a pr to fix it n usr local homebrew library taps adoptopenjdk homebrew openjdk casks rb changed item packages ,1
2132,23636118383.0,IssuesEvent,2022-08-25 13:26:31,jasp-stats/jasp-issues,https://api.github.com/repos/jasp-stats/jasp-issues,closed,Bland-Altman plots in JASP,Module: jaspReliability,"
* Enhancement: Add ability to generate Bland-Altman plots in JASP
* Purpose: When analysing agreement between two methods of measurement, it is usual/useful to be able to display results as a plot of difference versus mean for each data pair, alongside bias and 95% confidence LoAs
* Use-case: Assessing level of agreement between two different measurement methods
**Is your feature request related to a problem? Please describe.**
Correlation plots can be misleading when verifying level of agreement between two different measurement methods. A Bland Altman plot gives a much better visualisation of the level of agreement.
**Describe the solution you'd like**
Ability to generate Bland-Altman plots as described in: ""Statistical Methods for Assessing Agreement Between Two Methods of Clinical Measurement"", J.Bland, D.Altman, Lancet, 1986; i: 307-310
**Describe alternatives you've considered**
**Additional context**
",True,"Bland-Altman plots in JASP -
* Enhancement: Add ability to generate Bland-Altman plots in JASP
* Purpose: When analysing agreement between two methods of measurement, it is usual/useful to be able to display results as a plot of difference versus mean for each data pair, alongside bias and 95% confidence LoAs
* Use-case: Assessing level of agreement between two different measurement methods
**Is your feature request related to a problem? Please describe.**
Correlation plots can be misleading when verifying level of agreement between two different measurement methods. A Bland Altman plot gives a much better visualisation of the level of agreement.
**Describe the solution you'd like**
Ability to generate Bland-Altman plots as described in: ""Statistical Methods for Assessing Agreement Between Two Methods of Clinical Measurement"", J.Bland, D.Altman, Lancet, 1986; i: 307-310
**Describe alternatives you've considered**
**Additional context**
",1,bland altman plots in jasp enhancement add ability to generate bland altman plots in jasp purpose when analysing agreement between two methods of measurement it is usual useful to be able to display results as a plot of difference versus mean for each data pair alongside bias and confidence loas use case assessing level of agreement between two different measurement methods is your feature request related to a problem please describe correlation plots can be misleading when verifying level of agreement between two different measurement methods a bland altman plot gives a much better visualisation of the level of agreement describe the solution you d like ability to generate bland altman plots as described in statistical methods for assessing agreement between two methods of clinical measurement j bland d altman lancet i describe alternatives you ve considered additional context ,1
406651,11900610830.0,IssuesEvent,2020-03-30 10:58:04,luna/ide,https://api.github.com/repos/luna/ide,closed,Define rule to determine which identifiers in node expression AST are aliases,Category: IDE Change: Non-Breaking Difficulty: Core Contributor Priority: Highest Status: Duplicate Type: Enhancement,"### Summary
The output of this task is to explain rule what ids in node expressions should be interpreted as aliases of other nodes, and instruction how the ast should change when the nodes will be disconnected.
### Value
Being able to identify node connections.
### Specification
There are three cases where ""new"" ident is defined (possibly shadowing other): left side of =, left side of ->, right side of colon. But we should pick only typical cases for creating connections (e.g. don't consider the nodes whose expressions are blocks)
### Acceptance Criteria & Test Cases
Working tests.",1.0,"Define rule to determine which identifiers in node expression AST are aliases - ### Summary
The output of this task is to explain rule what ids in node expressions should be interpreted as aliases of other nodes, and instruction how the ast should change when the nodes will be disconnected.
### Value
Being able to identify node connections.
### Specification
There are three cases where ""new"" ident is defined (possibly shadowing other): left side of =, left side of ->, right side of colon. But we should pick only typical cases for creating connections (e.g. don't consider the nodes whose expressions are blocks)
### Acceptance Criteria & Test Cases
Working tests.",0,define rule to determine which identifiers in node expression ast are aliases summary the output of this task is to explain rule what ids in node expressions should be interpreted as aliases of other nodes and instruction how the ast should change when the nodes will be disconnected value being able to identify node connections specification there are three cases where new ident is defined possibly shadowing other left side of left side of right side of colon but we should pick only typical cases for creating connections e g don t consider the nodes whose expressions are blocks acceptance criteria test cases working tests ,0
180749,13957506123.0,IssuesEvent,2020-10-24 07:11:09,AY2021S1-CS2113T-W12-2/tp,https://api.github.com/repos/AY2021S1-CS2113T-W12-2/tp,closed,Add test cases for maxNumber in TaskList,aspect.Testing priority.Medium type.Task,There will be problems if `maxNumber` in `TaskList` is not updated properly. We should add test cases for this to prevent anything from breaking.,1.0,Add test cases for maxNumber in TaskList - There will be problems if `maxNumber` in `TaskList` is not updated properly. We should add test cases for this to prevent anything from breaking.,0,add test cases for maxnumber in tasklist there will be problems if maxnumber in tasklist is not updated properly we should add test cases for this to prevent anything from breaking ,0
961,11802932036.0,IssuesEvent,2020-03-18 22:46:45,NuGet/Home,https://api.github.com/repos/NuGet/Home,reopened,dotnet restore failing with TaskCanceledException,Area:DotnetCLI Area:ErrorHandling Area:Plugin Area:Reliability Resolution:Duplicate Type:Bug,"_From @ColinM9991 on March 1, 2019 13:9_
## Steps to reproduce
N/A
This issue is intermittent, however when it happens it then lingers for some time.
## Description
We are using the dotnet CLI to restore packages for an ASP.NET Core application targeting .NET Core in our TeamCity build process, and are intermittently having failed builds where the restore task fails as soon as it picks up the first project file
```
[12:43:57] Step 1/13: Restore Packages (.NET CLI (dotnet)) (30s)
[12:43:58] [Step 1/13] dotnet.exe restore Project.sln --disable-parallel
[12:43:58] [Step 1/13] restore (29s)
[12:43:58] [restore] Starting: ""C:\Program Files\dotnet\dotnet.exe"" restore Project.sln --disable-parallel
[12:43:58] [restore] in directory: C:\buildAgent\work\g0124fa0e71e5f68
[12:44:16] [restore] Restoring packages for C:\buildAgent\work\g0124fa0e71e5f68\ProjectA\Common\ProjectB.csproj...
[12:44:28] [restore] C:\Program Files\dotnet\sdk\2.2.102\NuGet.targets(114,5): error : A task was canceled.
[12:44:28] [restore]
[12:44:28] [restore] Build FAILED.
[12:44:28] [restore]
[12:44:28] [restore] 0 Warning(s)
[12:44:28] [restore] 1 Error(s)
[12:44:28] [restore]
[12:44:28] [restore] Time Elapsed 00:00:24.84
[12:44:28] [restore]
[12:44:28] [restore]
[12:44:28] [restore] C:\Program Files\dotnet\sdk\2.2.102\NuGet.targets(114,5): error : A task was canceled.
[12:44:28] [restore] Process exited with code 1
[12:44:28] [Step 1/13] Process exited with code 1
[12:44:28] [Step 1/13] Step Restore Packages (.NET CLI (dotnet)) failed
```
## Expected behavior
In the scenario where there is an actual error with the codebase then the dotnet CLI should return a meaningful error message as to what the problem is
In the scenario where nothing is wrong and the dotnet CLI fails for no reason, the expected behavior would be for the restore task to complete successfully as it does in other builds of the same codebase at previous times.
## Actual behavior
`dotnet restore` fails with `TaskCanceledException` and no indication as to what the issue is.
## Environment data
`dotnet --info` output:
```
[13:07:51] [Step 2/2] .NET Core SDK (reflecting any global.json):
[13:07:51] [Step 2/2] Version: 2.2.102
[13:07:51] [Step 2/2] Commit: 96ff75a873
[13:07:51] [Step 2/2]
[13:07:51] [Step 2/2] Runtime Environment:
[13:07:51] [Step 2/2] OS Name: Windows
[13:07:51] [Step 2/2] OS Version: 10.0.17763
[13:07:51] [Step 2/2] OS Platform: Windows
[13:07:51] [Step 2/2] RID: win10-x64
[13:07:51] [Step 2/2] Base Path: C:\Program Files\dotnet\sdk\2.2.102\
[13:07:51] [Step 2/2]
[13:07:51] [Step 2/2] Host (useful for support):
[13:07:51] [Step 2/2] Version: 2.2.1
[13:07:51] [Step 2/2] Commit: 878dd11e62
[13:07:51] [Step 2/2]
[13:07:51] [Step 2/2] .NET Core SDKs installed:
[13:07:51] [Step 2/2] 1.1.12 [C:\Program Files\dotnet\sdk]
[13:07:51] [Step 2/2] 2.1.202 [C:\Program Files\dotnet\sdk]
[13:07:51] [Step 2/2] 2.1.504 [C:\Program Files\dotnet\sdk]
[13:07:51] [Step 2/2] 2.2.102 [C:\Program Files\dotnet\sdk]
[13:07:51] [Step 2/2]
[13:07:51] [Step 2/2] .NET Core runtimes installed:
[13:07:51] [Step 2/2] Microsoft.AspNetCore.All 2.1.8 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
[13:07:51] [Step 2/2] Microsoft.AspNetCore.All 2.2.1 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
[13:07:51] [Step 2/2] Microsoft.AspNetCore.App 2.1.8 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
[13:07:51] [Step 2/2] Microsoft.AspNetCore.App 2.2.1 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
[13:07:51] [Step 2/2] Microsoft.NETCore.App 1.0.14 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
[13:07:51] [Step 2/2] Microsoft.NETCore.App 1.1.11 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
[13:07:51] [Step 2/2] Microsoft.NETCore.App 2.0.9 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
[13:07:51] [Step 2/2] Microsoft.NETCore.App 2.1.8 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
[13:07:51] [Step 2/2] Microsoft.NETCore.App 2.2.1 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
```
NuGet.CommandLine version: 4.9.3
_Copied from original issue: dotnet/cli#10905_",True,"dotnet restore failing with TaskCanceledException - _From @ColinM9991 on March 1, 2019 13:9_
## Steps to reproduce
N/A
This issue is intermittent, however when it happens it then lingers for some time.
## Description
We are using the dotnet CLI to restore packages for an ASP.NET Core application targeting .NET Core in our TeamCity build process, and are intermittently having failed builds where the restore task fails as soon as it picks up the first project file
```
[12:43:57] Step 1/13: Restore Packages (.NET CLI (dotnet)) (30s)
[12:43:58] [Step 1/13] dotnet.exe restore Project.sln --disable-parallel
[12:43:58] [Step 1/13] restore (29s)
[12:43:58] [restore] Starting: ""C:\Program Files\dotnet\dotnet.exe"" restore Project.sln --disable-parallel
[12:43:58] [restore] in directory: C:\buildAgent\work\g0124fa0e71e5f68
[12:44:16] [restore] Restoring packages for C:\buildAgent\work\g0124fa0e71e5f68\ProjectA\Common\ProjectB.csproj...
[12:44:28] [restore] C:\Program Files\dotnet\sdk\2.2.102\NuGet.targets(114,5): error : A task was canceled.
[12:44:28] [restore]
[12:44:28] [restore] Build FAILED.
[12:44:28] [restore]
[12:44:28] [restore] 0 Warning(s)
[12:44:28] [restore] 1 Error(s)
[12:44:28] [restore]
[12:44:28] [restore] Time Elapsed 00:00:24.84
[12:44:28] [restore]
[12:44:28] [restore]
[12:44:28] [restore] C:\Program Files\dotnet\sdk\2.2.102\NuGet.targets(114,5): error : A task was canceled.
[12:44:28] [restore] Process exited with code 1
[12:44:28] [Step 1/13] Process exited with code 1
[12:44:28] [Step 1/13] Step Restore Packages (.NET CLI (dotnet)) failed
```
## Expected behavior
In the scenario where there is an actual error with the codebase then the dotnet CLI should return a meaningful error message as to what the problem is
In the scenario where nothing is wrong and the dotnet CLI fails for no reason, the expected behavior would be for the restore task to complete successfully as it does in other builds of the same codebase at previous times.
## Actual behavior
`dotnet restore` fails with `TaskCanceledException` and no indication as to what the issue is.
## Environment data
`dotnet --info` output:
```
[13:07:51] [Step 2/2] .NET Core SDK (reflecting any global.json):
[13:07:51] [Step 2/2] Version: 2.2.102
[13:07:51] [Step 2/2] Commit: 96ff75a873
[13:07:51] [Step 2/2]
[13:07:51] [Step 2/2] Runtime Environment:
[13:07:51] [Step 2/2] OS Name: Windows
[13:07:51] [Step 2/2] OS Version: 10.0.17763
[13:07:51] [Step 2/2] OS Platform: Windows
[13:07:51] [Step 2/2] RID: win10-x64
[13:07:51] [Step 2/2] Base Path: C:\Program Files\dotnet\sdk\2.2.102\
[13:07:51] [Step 2/2]
[13:07:51] [Step 2/2] Host (useful for support):
[13:07:51] [Step 2/2] Version: 2.2.1
[13:07:51] [Step 2/2] Commit: 878dd11e62
[13:07:51] [Step 2/2]
[13:07:51] [Step 2/2] .NET Core SDKs installed:
[13:07:51] [Step 2/2] 1.1.12 [C:\Program Files\dotnet\sdk]
[13:07:51] [Step 2/2] 2.1.202 [C:\Program Files\dotnet\sdk]
[13:07:51] [Step 2/2] 2.1.504 [C:\Program Files\dotnet\sdk]
[13:07:51] [Step 2/2] 2.2.102 [C:\Program Files\dotnet\sdk]
[13:07:51] [Step 2/2]
[13:07:51] [Step 2/2] .NET Core runtimes installed:
[13:07:51] [Step 2/2] Microsoft.AspNetCore.All 2.1.8 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
[13:07:51] [Step 2/2] Microsoft.AspNetCore.All 2.2.1 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
[13:07:51] [Step 2/2] Microsoft.AspNetCore.App 2.1.8 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
[13:07:51] [Step 2/2] Microsoft.AspNetCore.App 2.2.1 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
[13:07:51] [Step 2/2] Microsoft.NETCore.App 1.0.14 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
[13:07:51] [Step 2/2] Microsoft.NETCore.App 1.1.11 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
[13:07:51] [Step 2/2] Microsoft.NETCore.App 2.0.9 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
[13:07:51] [Step 2/2] Microsoft.NETCore.App 2.1.8 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
[13:07:51] [Step 2/2] Microsoft.NETCore.App 2.2.1 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
```
NuGet.CommandLine version: 4.9.3
_Copied from original issue: dotnet/cli#10905_",1,dotnet restore failing with taskcanceledexception from on march steps to reproduce n a this issue is intermittent however when it happens it then lingers for some time description we are using the dotnet cli to restore packages for an asp net core application targeting net core in our teamcity build process and are intermittently having failed builds where the restore task fails as soon as it picks up the first project file step restore packages net cli dotnet dotnet exe restore project sln disable parallel restore starting c program files dotnet dotnet exe restore project sln disable parallel in directory c buildagent work restoring packages for c buildagent work projecta common projectb csproj c program files dotnet sdk nuget targets error a task was canceled build failed warning s error s time elapsed c program files dotnet sdk nuget targets error a task was canceled process exited with code process exited with code step restore packages net cli dotnet failed expected behavior in the scenario where there is an actual error with the codebase then the dotnet cli should return a meaningful error message as to what the problem is in the scenario where nothing is wrong and the dotnet cli fails for no reason the expected behavior would be for the restore task to complete successfully as it does in other builds of the same codebase at previous times actual behavior dotnet restore fails with taskcanceledexception and no indication as to what the issue is environment data dotnet info output net core sdk reflecting any global json version commit runtime environment os name windows os version os platform windows rid base path c program files dotnet sdk host useful for support version commit net core sdks installed net core runtimes installed microsoft aspnetcore all microsoft aspnetcore all microsoft aspnetcore app microsoft aspnetcore app microsoft netcore app microsoft netcore app microsoft netcore app microsoft netcore app microsoft netcore app nuget commandline version copied from original issue dotnet cli ,1
702,9962336755.0,IssuesEvent,2019-07-07 13:45:30,jupyterhub/mybinder.org-deploy,https://api.github.com/repos/jupyterhub/mybinder.org-deploy,closed,"Deploy to all production clusters, then test them",help wanted site reliability,"Currently we deploy to GKE, then test GKE, then deploy to OVH, then test OVH.
Sometimes the test of GKE fails (for no good reason) and we end up not deploying to OVH.
I think it would be good to first do the two deploys and then test both of them. This way we keep the two clusters more consistent. If someone else also thinks this is a good idea/would make our life easier start a PR :)
This change would be in `.travis.yml` and a good way to learn a bit more about how the automatic deploy setup works.",True,"Deploy to all production clusters, then test them - Currently we deploy to GKE, then test GKE, then deploy to OVH, then test OVH.
Sometimes the test of GKE fails (for no good reason) and we end up not deploying to OVH.
I think it would be good to first do the two deploys and then test both of them. This way we keep the two clusters more consistent. If someone else also thinks this is a good idea/would make our life easier start a PR :)
This change would be in `.travis.yml` and a good way to learn a bit more about how the automatic deploy setup works.",1,deploy to all production clusters then test them currently we deploy to gke then test gke then deploy to ovh then test ovh sometimes the test of gke fails for no good reason and we end up not deploying to ovh i think it would be good to first do the two deploys and then test both of them this way we keep the two clusters more consistent if someone else also thinks this is a good idea would make our life easier start a pr this change would be in travis yml and a good way to learn a bit more about how the automatic deploy setup works ,1
451611,13039275796.0,IssuesEvent,2020-07-28 16:28:41,dnnsoftware/Dnn.Platform,https://api.github.com/repos/dnnsoftware/Dnn.Platform,closed,"Email not sent when user is ""authorized""",Area: AE > PersonaBar Ext > Security.Web Effort: Medium Priority: Medium Status: Ready for Development Type: Bug,"
## Description of bug
On a site with User Registration set to ""Private"", an email is not sent out when the admin authorizes the user.
## Steps to reproduce
1. Site setup for Private registration (Persona -> Settings -> Security -> Member Accounts -> Registration Settings)
2. User registers. Email is received by both user and site admin.
3. Site admin authorizes user, either via the ""authorize"" link, or via Persona -> Manage -> Users -> User -> Authorize User
## Current result
User does not receive confirmation email that he's been authorized
## Expected result
User should receive confirmation email that he's been authorized.
## Affected version
* [x] 9.2.1
## Affected browser
* [x] Chrome
* [x] Firefox
* [x] Safari
* [x] Internet Explorer
* [x] Edge
",1.0,"Email not sent when user is ""authorized"" -
## Description of bug
On a site with User Registration set to ""Private"", an email is not sent out when the admin authorizes the user.
## Steps to reproduce
1. Site setup for Private registration (Persona -> Settings -> Security -> Member Accounts -> Registration Settings)
2. User registers. Email is received by both user and site admin.
3. Site admin authorizes user, either via the ""authorize"" link, or via Persona -> Manage -> Users -> User -> Authorize User
## Current result
User does not receive confirmation email that he's been authorized
## Expected result
User should receive confirmation email that he's been authorized.
## Affected version
* [x] 9.2.1
## Affected browser
* [x] Chrome
* [x] Firefox
* [x] Safari
* [x] Internet Explorer
* [x] Edge
",0,email not sent when user is authorized description of bug on a site with user registration set to private an email is not sent out when the admin authorizes the user steps to reproduce site setup for private registration persona settings security member accounts registration settings user registers email is received by both user and site admin site admin authorizes user either via the authorize link or via persona manage users user authorize user current result user does not receive confirmation email that he s been authorized expected result user should receive confirmation email that he s been authorized affected version affected browser chrome firefox safari internet explorer edge ,0
407751,27630375721.0,IssuesEvent,2023-03-10 10:22:46,spring-cloud/spring-cloud-openfeign,https://api.github.com/repos/spring-cloud/spring-cloud-openfeign,closed,[Documentation] Feign loadbalancer configuration,documentation,"Hi! Sorry for asking that question
I went through the docs and found the following:
https://docs.spring.io/spring-cloud-openfeign/docs/current/reference/html/#spring-cloud-feign-overriding-defaults
```
feign:
  client:
    config:
      feignName:
        connectTimeout: 5000
        readTimeout: 5000
        loggerLevel: full
        errorDecoder: com.example.SimpleErrorDecoder
        retryer: com.example.SimpleRetryer
        requestInterceptors:
          - com.example.FooRequestInterceptor
          - com.example.BarRequestInterceptor
        decode404: false
        encoder: com.example.SimpleEncoder
        decoder: com.example.SimpleDecoder
        contract: com.example.SimpleContract
```
Here there is `feignName`, but it seems that if `@FeignClient` has a `contextId`, we need to specify the `contextId` instead of the name.
Also, it's not really obvious whether we need to specify the `contextId`, the `feignName`, or the service name from Eureka when we configure the load balancer:
https://docs.spring.io/spring-cloud-commons/docs/current/reference/html/#configuring-individual-loadbalancerclients
```
spring:
  cloud:
    loadbalancer:
      clients:
        hello-world-client:
          retry:
            max-retries-on-same-service-instance: 0
            max-retries-on-next-service-instance: 2
            retry-on-all-operations: true
            retryable-status-codes:
              - 400
              - 500
```
It would be nice if somebody clarified that moment :)
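For illustration, a hedged sketch of the combination being asked about (the keys below are assumptions to be confirmed by the maintainers, not taken from the linked docs; the client and service names are made up):

```yaml
# Assumed declaration:
#   @FeignClient(name = "hello-world-service", contextId = "helloWorldClient")
feign:
  client:
    config:
      helloWorldClient:          # assumption: keyed by contextId when one is declared
        connectTimeout: 5000
        readTimeout: 5000

spring:
  cloud:
    loadbalancer:
      clients:
        hello-world-service:     # assumption: keyed by the service name known to Eureka
          retry:
            max-retries-on-next-service-instance: 2
```

If this reading is right, the two subsystems use different keys, which would explain the confusion above.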
",1.0,"[Documentation] Feign loadbalancer configuration - Hi! Sorry for asking that question
I went through the docs and found the following:
https://docs.spring.io/spring-cloud-openfeign/docs/current/reference/html/#spring-cloud-feign-overriding-defaults
```
feign:
  client:
    config:
      feignName:
        connectTimeout: 5000
        readTimeout: 5000
        loggerLevel: full
        errorDecoder: com.example.SimpleErrorDecoder
        retryer: com.example.SimpleRetryer
        requestInterceptors:
          - com.example.FooRequestInterceptor
          - com.example.BarRequestInterceptor
        decode404: false
        encoder: com.example.SimpleEncoder
        decoder: com.example.SimpleDecoder
        contract: com.example.SimpleContract
```
Here there is `feignName`, but it seems that if `@FeignClient` has a `contextId`, we need to specify the `contextId` instead of the name.
Also, it's not really obvious whether we need to specify the `contextId`, the `feignName`, or the service name from Eureka when we configure the load balancer:
https://docs.spring.io/spring-cloud-commons/docs/current/reference/html/#configuring-individual-loadbalancerclients
```
spring:
  cloud:
    loadbalancer:
      clients:
        hello-world-client:
          retry:
            max-retries-on-same-service-instance: 0
            max-retries-on-next-service-instance: 2
            retry-on-all-operations: true
            retryable-status-codes:
              - 400
              - 500
```
It would be nice if somebody clarified that moment :)
",0, feign loadbalancer configuration hi sorry for asking that question i went through the docs and found the following feign client config feignname connecttimeout readtimeout loggerlevel full errordecoder com example simpleerrordecoder retryer com example simpleretryer requestinterceptors com example foorequestinterceptor com example barrequestinterceptor false encoder com example simpleencoder decoder com example simpledecoder contract com example simplecontract here there is feignname but it seams that if feignclient has contextid we need to specify contextid instead than name also it s not really obvious whether we need to specify contextid feignname or service name from eureka when we configure load balancer spring cloud loadbalancer clients hello world client retry max retries on same service instance max retries on next service instance retry on all operations true retryable status codes it would be nice if somebody clarified that moment ,0
1293,14658351411.0,IssuesEvent,2020-12-28 17:41:50,PrismarineJS/mineflayer,https://api.github.com/repos/PrismarineJS/mineflayer,closed,Make it easier to reuse auth tokens (create a module to store them in profile files ?),reliability,"Each time I run my bot (a web client) on my Canada Dedicated Server, then run it on my home server, Mojang will automatically reset my password due to ""suspicious activity"".
Test results: 10 different people had to reset their passwords.
",True,"Make it easier to reuse auth tokens (create a module to store them in profile files ?) - Each time I run my bot (a web client) on my Canada Dedicated Server, then run it on my home server Mojang will automatically reset my password due to ""suspicious activity"".
Test results: 10 different people had to reset their passwords.
",1,make it easier to reuse auth tokens create a module to store them in profile files each time i run my bot a web client on my canada dedicated server then run it on my home server mojang will automatically reset my password due to suspicious activity test results different people had to reset their passwords ,1
320663,9784431571.0,IssuesEvent,2019-06-08 19:09:02,QueerNeko/mastodon,https://api.github.com/repos/QueerNeko/mastodon,opened,Opção de condensar notificações,enhancement low priority,"### Ideia
Opção nas preferências ou na própria coluna de notificação de usar https://github.com/psydwannabe/mastodon-snippets/blob/master/CSS/condense-notifications.css para que pessoas não precisem excluir boosts/favs das suas colunas de notificação para ter uma coluna de notificações mais navegável.
### Motivos
- Notificações de postagens grandes ocupam muito espaço
- Especialmente quando são muitas notificações
- Pode ser útil ver qual é a postagem em si às vezes sem clicar, então não quero que essa opção seja forçada em todo mundo",1.0,"Opção de condensar notificações - ### Ideia
Opção nas preferências ou na própria coluna de notificação de usar https://github.com/psydwannabe/mastodon-snippets/blob/master/CSS/condense-notifications.css para que pessoas não precisem excluir boosts/favs das suas colunas de notificação para ter uma coluna de notificações mais navegável.
### Motivos
- Notificações de postagens grandes ocupam muito espaço
- Especialmente quando são muitas notificações
- Pode ser útil ver qual é a postagem em si às vezes sem clicar, então não quero que essa opção seja forçada em todo mundo",0,opção de condensar notificações ideia opção nas preferências ou na própria coluna de notificação de usar para que pessoas não precisem excluir boosts favs das suas colunas de notificação para ter uma coluna de notificações mais navegável motivos notificações de postagens grandes ocupam muito espaço especialmente quando são muitas notificações pode ser útil ver qual é a postagem em si às vezes sem clicar então não quero que essa opção seja forçada em todo mundo,0
592,8756170320.0,IssuesEvent,2018-12-14 16:53:13,status-im/status-react,https://api.github.com/repos/status-im/status-react,closed,"""Not sent"" message is shown as ""Sent"" if it was sent during 2 sec after coming online",bug chat chat-reliability medium-severity offline,"### Description
*Type*: Bug
*Summary*: if you send a message immediately after switching off offline mode, it is shown as Sent, but in fact it wasn't sent.
#### Expected behavior
status is ""Not sent. Tap for options"" with resend option.
#### Actual behavior
status is ""sent""
### Reproduction
- Open Status
- Type a message
- Turn off WI-FI
- Wait for 10 sec
- Turn on WI-FI and immediately (during 2 sec) send a message
### Additional Information
* Status version: nightly 01/10/2018
* Operating System: Android, IOS, desktop
* TF session: https://app.testfairy.com/projects/4803590-status/builds/8611567/sessions/4402803746/?accessToken=tykYFpL1X6sUO6Njg9IAkpeAVAQ
In status-desktop it is reproducible in another way as well in #5919 (look at https://github.com/status-im/status-react/pull/5919#issuecomment-425896202)
",True,"""Not sent"" message is shown as ""Sent"" if it was sent during 2 sec after coming online - ### Description
*Type*: Bug
*Summary*: if you send a message immediately after switching off offline mode, it is shown as Sent, but in fact it wasn't sent.
#### Expected behavior
status is ""Not sent. Tap for options"" with resend option.
#### Actual behavior
status is ""sent""
### Reproduction
- Open Status
- Type a message
- Turn off WI-FI
- Wait for 10 sec
- Turn on WI-FI and immediately (during 2 sec) send a message
### Additional Information
* Status version: nightly 01/10/2018
* Operating System: Android, IOS, desktop
* TF session: https://app.testfairy.com/projects/4803590-status/builds/8611567/sessions/4402803746/?accessToken=tykYFpL1X6sUO6Njg9IAkpeAVAQ
In status-desktop it is reproducible in another way as well in #5919 (look at https://github.com/status-im/status-react/pull/5919#issuecomment-425896202)
",1, not sent message is shown as sent if it was sent during sec after coming online description type bug summary if you send message immediately after switching off offline mode it is shown as sent but in fact it wasn t sent expected behavior status is not sent tap for options with resend option img width alt offl src actual behavior status is sent img width alt offl src reproduction open status type a message turn off wi fi wait for sec turn on wi fi and immediately during sec send a message additional information status version nightly operating system android ios desktop tf session in status desktop it is reproducible in another way as well in look at ,1
447827,12893593547.0,IssuesEvent,2020-07-13 21:59:51,ArctosDB/arctos,https://api.github.com/repos/ArctosDB/arctos,opened,UTM coords not displaying properly,Display/Interface Function-Locality/Event/Georeferencing Priority-Normal,"I entered UTM coords, as seen here (ex: https://arctos.database.museum/guid/MVZ:Bird:192163):

These are not displaying properly on the specimen detail or event detail page (numbers cut off):


",1.0,"UTM coords not displaying properly - I entered UTM coords, as seen here (ex: https://arctos.database.museum/guid/MVZ:Bird:192163):

These are not displaying properly on the specimen detail or event detail page (numbers cut off):


",0,utm coords not displaying properly i entered utm coords as seen here ex these are not displaying properly on the specimen detail or event detail page numbers cut off ,0
1133,13243445858.0,IssuesEvent,2020-08-19 11:28:03,sohaibaslam/learning_site,https://api.github.com/repos/sohaibaslam/learning_site,opened,"Broken Crawlers 19, Aug 2020",crawler broken/unreliable,"1. **abcmart kr(100%)**
1. **accessorize de(100%)/fr(100%)**
1. **additionelle ca(100%)/us(100%)**
1. **adolfodominguez de(100%)**
1. **aldo eu(100%)**
1. **americaneagle ca(100%)**
1. **ami ca(100%)/ch(100%)/cn(100%)/de(100%)/dk(100%)/fr(100%)/it(100%)/jp(100%)/kr(100%)/li(100%)/mx(100%)/pl(100%)/ru(100%)/se(100%)/uk(100%)/us(100%)**
1. **anthropologie (100%)/de(100%)/fr(100%)/uk(100%)**
1. **asos ae(100%)/au(100%)/ch(100%)/cn(100%)/hk(100%)/id(100%)/my(100%)/nl(100%)/ph(100%)/pl(100%)/ru(100%)/sa(100%)/sg(100%)/th(100%)/vn(100%)**
1. **babyshop ae(100%)/sa(100%)**
1. **browns (100%)/ae(100%)/au(100%)/ca(100%)/cn(100%)/dk(100%)/eu(100%)/hk(100%)/jp(100%)/kr(100%)/no(100%)/pl(100%)/ru(100%)/sa(100%)/se(100%)/us(100%)/za(100%)**
1. **burlington us(100%)**
1. **centrepoint sa(100%)**
1. **charlesandkeith th(100%)/uk(100%)**
1. **coldwatercreek us(100%)**
1. **conforama fr(100%)**
1. **converse au(100%)**
1. **cotton au(100%)**
1. **countryroad (100%)**
1. **davidjones (100%)**
1. **debenhams au(100%)/ca(100%)/ch(100%)/dk(100%)/eu(100%)/no(100%)/nz(100%)/se(100%)/sg(100%)/us(100%)**
1. **destinationmaternity us(100%)**
1. **dwsports uk(100%)**
1. **footaction us(100%)**
1. **footlocker dk(100%)/it(100%)/no(100%)**
1. **harrods (100%)**
1. **hermes at(100%)/ca(100%)/fr(100%)/it(100%)/se(100%)**
1. **hollister cn(100%)**
1. **kmart au(100%)**
1. **lanvin cn(100%)**
1. **lcwaikiki tr(100%)**
1. **lee au(100%)**
1. **lifestylestores in(100%)**
1. **lodenfrey de(100%)**
1. **luckybrand ca(100%)**
1. **luigibertolli br(100%)**
1. **luisaspagnoli fr(100%)/it(100%)/jp(100%)/uk(100%)/us(100%)**
1. **made ch(100%)/es(100%)**
1. **maxandco uk(100%)**
1. **maxfashion ae(100%)/bh(100%)/sa(100%)**
1. **maxmara de(100%)/dk(100%)/fr(100%)/it(100%)/jp(100%)/kr(100%)/pl(100%)/se(100%)/uk(100%)/us(100%)**
1. **michaelkors ca(100%)**
1. **mothercare ae(100%)/kw(100%)/sa(100%)**
1. **okini (100%)**
1. **paige us(100%)**
1. **paulsmith au(100%)/eu(100%)/uk(100%)/us(100%)**
1. **peterhahn de(100%)**
1. **poloralphlauren id(100%)**
1. **popup br(100%)**
1. **prettysecrets in(100%)**
1. **rakuten us(100%)**
1. **ripley cl(100%)**
1. **roots ca(100%)**
1. **saksfifthavenue mo(100%)**
1. **saksoff5th us(100%)**
1. **shein in(100%)**
1. **simons ca(100%)/us(100%)**
1. **skechers us(100%)**
1. **soccer us(100%)**
1. **solebox de(100%)/uk(100%)**
1. **speedo au(100%)/us(100%)**
1. **splashfashions ae(100%)/bh(100%)/sa(100%)**
1. **stefaniamode au(100%)/ca(100%)/dk(100%)/eu(100%)/hk(100%)/it(100%)/jp(100%)/pl(100%)/ru(100%)/se(100%)/tr(100%)/uk(100%)/us(100%)**
1. **studio uk(100%)**
1. **stylebop de(100%)**
1. **suitsupply ae(100%)/kr(100%)**
1. **thenorthface jp(100%)**
1. **theoutnet jp(100%)**
1. **therake au(100%)/cn(100%)/de(100%)/es(100%)/fr(100%)/in(100%)/it(100%)/nl(100%)/se(100%)/uk(100%)/us(100%)**
1. **tods cn(100%)/gr(100%)/pt(100%)**
1. **tommybahama bh(100%)/in(100%)/kr(100%)/ph(100%)/sa(100%)/sg(100%)**
1. **topbrands ru(100%)**
1. **ullapopken de(100%)**
1. **underarmour ru(100%)**
1. **venteprivee de(100%)/it(100%)**
1. **vip cn(100%)**
1. **walmart ca(100%)**
1. **weekendmaxmara bg(100%)/cz(100%)/dk(100%)/eu(100%)/hu(100%)/it(100%)/ro(100%)/se(100%)/uk(100%)**
1. **witchery au(100%)/nz(100%)**
1. **zalandolounge de(100%)**
1. **zegna it(100%)/uk(100%)/us(100%)**
",True,"Broken Crawlers 19, Aug 2020 - 1. **abcmart kr(100%)**
1. **accessorize de(100%)/fr(100%)**
1. **additionelle ca(100%)/us(100%)**
1. **adolfodominguez de(100%)**
1. **aldo eu(100%)**
1. **americaneagle ca(100%)**
1. **ami ca(100%)/ch(100%)/cn(100%)/de(100%)/dk(100%)/fr(100%)/it(100%)/jp(100%)/kr(100%)/li(100%)/mx(100%)/pl(100%)/ru(100%)/se(100%)/uk(100%)/us(100%)**
1. **anthropologie (100%)/de(100%)/fr(100%)/uk(100%)**
1. **asos ae(100%)/au(100%)/ch(100%)/cn(100%)/hk(100%)/id(100%)/my(100%)/nl(100%)/ph(100%)/pl(100%)/ru(100%)/sa(100%)/sg(100%)/th(100%)/vn(100%)**
1. **babyshop ae(100%)/sa(100%)**
1. **browns (100%)/ae(100%)/au(100%)/ca(100%)/cn(100%)/dk(100%)/eu(100%)/hk(100%)/jp(100%)/kr(100%)/no(100%)/pl(100%)/ru(100%)/sa(100%)/se(100%)/us(100%)/za(100%)**
1. **burlington us(100%)**
1. **centrepoint sa(100%)**
1. **charlesandkeith th(100%)/uk(100%)**
1. **coldwatercreek us(100%)**
1. **conforama fr(100%)**
1. **converse au(100%)**
1. **cotton au(100%)**
1. **countryroad (100%)**
1. **davidjones (100%)**
1. **debenhams au(100%)/ca(100%)/ch(100%)/dk(100%)/eu(100%)/no(100%)/nz(100%)/se(100%)/sg(100%)/us(100%)**
1. **destinationmaternity us(100%)**
1. **dwsports uk(100%)**
1. **footaction us(100%)**
1. **footlocker dk(100%)/it(100%)/no(100%)**
1. **harrods (100%)**
1. **hermes at(100%)/ca(100%)/fr(100%)/it(100%)/se(100%)**
1. **hollister cn(100%)**
1. **kmart au(100%)**
1. **lanvin cn(100%)**
1. **lcwaikiki tr(100%)**
1. **lee au(100%)**
1. **lifestylestores in(100%)**
1. **lodenfrey de(100%)**
1. **luckybrand ca(100%)**
1. **luigibertolli br(100%)**
1. **luisaspagnoli fr(100%)/it(100%)/jp(100%)/uk(100%)/us(100%)**
1. **made ch(100%)/es(100%)**
1. **maxandco uk(100%)**
1. **maxfashion ae(100%)/bh(100%)/sa(100%)**
1. **maxmara de(100%)/dk(100%)/fr(100%)/it(100%)/jp(100%)/kr(100%)/pl(100%)/se(100%)/uk(100%)/us(100%)**
1. **michaelkors ca(100%)**
1. **mothercare ae(100%)/kw(100%)/sa(100%)**
1. **okini (100%)**
1. **paige us(100%)**
1. **paulsmith au(100%)/eu(100%)/uk(100%)/us(100%)**
1. **peterhahn de(100%)**
1. **poloralphlauren id(100%)**
1. **popup br(100%)**
1. **prettysecrets in(100%)**
1. **rakuten us(100%)**
1. **ripley cl(100%)**
1. **roots ca(100%)**
1. **saksfifthavenue mo(100%)**
1. **saksoff5th us(100%)**
1. **shein in(100%)**
1. **simons ca(100%)/us(100%)**
1. **skechers us(100%)**
1. **soccer us(100%)**
1. **solebox de(100%)/uk(100%)**
1. **speedo au(100%)/us(100%)**
1. **splashfashions ae(100%)/bh(100%)/sa(100%)**
1. **stefaniamode au(100%)/ca(100%)/dk(100%)/eu(100%)/hk(100%)/it(100%)/jp(100%)/pl(100%)/ru(100%)/se(100%)/tr(100%)/uk(100%)/us(100%)**
1. **studio uk(100%)**
1. **stylebop de(100%)**
1. **suitsupply ae(100%)/kr(100%)**
1. **thenorthface jp(100%)**
1. **theoutnet jp(100%)**
1. **therake au(100%)/cn(100%)/de(100%)/es(100%)/fr(100%)/in(100%)/it(100%)/nl(100%)/se(100%)/uk(100%)/us(100%)**
1. **tods cn(100%)/gr(100%)/pt(100%)**
1. **tommybahama bh(100%)/in(100%)/kr(100%)/ph(100%)/sa(100%)/sg(100%)**
1. **topbrands ru(100%)**
1. **ullapopken de(100%)**
1. **underarmour ru(100%)**
1. **venteprivee de(100%)/it(100%)**
1. **vip cn(100%)**
1. **walmart ca(100%)**
1. **weekendmaxmara bg(100%)/cz(100%)/dk(100%)/eu(100%)/hu(100%)/it(100%)/ro(100%)/se(100%)/uk(100%)**
1. **witchery au(100%)/nz(100%)**
1. **zalandolounge de(100%)**
1. **zegna it(100%)/uk(100%)/us(100%)**
",1,broken crawlers aug abcmart kr accessorize de fr additionelle ca us adolfodominguez de aldo eu americaneagle ca ami ca ch cn de dk fr it jp kr li mx pl ru se uk us anthropologie de fr uk asos ae au ch cn hk id my nl ph pl ru sa sg th vn babyshop ae sa browns ae au ca cn dk eu hk jp kr no pl ru sa se us za burlington us centrepoint sa charlesandkeith th uk coldwatercreek us conforama fr converse au cotton au countryroad davidjones debenhams au ca ch dk eu no nz se sg us destinationmaternity us dwsports uk footaction us footlocker dk it no harrods hermes at ca fr it se hollister cn kmart au lanvin cn lcwaikiki tr lee au lifestylestores in lodenfrey de luckybrand ca luigibertolli br luisaspagnoli fr it jp uk us made ch es maxandco uk maxfashion ae bh sa maxmara de dk fr it jp kr pl se uk us michaelkors ca mothercare ae kw sa okini paige us paulsmith au eu uk us peterhahn de poloralphlauren id popup br prettysecrets in rakuten us ripley cl roots ca saksfifthavenue mo us shein in simons ca us skechers us soccer us solebox de uk speedo au us splashfashions ae bh sa stefaniamode au ca dk eu hk it jp pl ru se tr uk us studio uk stylebop de suitsupply ae kr thenorthface jp theoutnet jp therake au cn de es fr in it nl se uk us tods cn gr pt tommybahama bh in kr ph sa sg topbrands ru ullapopken de underarmour ru venteprivee de it vip cn walmart ca weekendmaxmara bg cz dk eu hu it ro se uk witchery au nz zalandolounge de zegna it uk us ,1
219442,16831258590.0,IssuesEvent,2021-06-18 05:24:56,OPM/ResInsight,https://api.github.com/repos/OPM/ResInsight,closed,Python : Improve API documentation,Documentation Enhancement,"ResInsight generates Python code based on application code. The documentation of these structures is currently difficult to find on api.resinsight.org. Investigate how to improve this documentation.
Suggestions
- try to configure Sphinx to create more documentation on auto-generated code
- combine the code in \GrpcInterface\Python\rips\generated\resinsight_classes.py with other code in folder d:\gitroot\ResInsight\GrpcInterface\Python\rips\
- May have to refactor the way we combine the auto-generated and the manually created code to make sphinx understand it.
## Linting
Introduce Python code linting using black. https://github.com/psf/black
See WebViz for example of how to use black
https://github.com/equinor/webviz-config/blob/master/.github/workflows/webviz-config.yml
**Related issues**
#7214
#7141
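A minimal sketch of what the proposed black check could look like as a GitHub Actions workflow (the workflow name, action versions, and path are assumptions, modeled loosely on the webviz-config workflow linked above — the issue mentions Travis elsewhere, so this is only one option):

```yaml
# Hedged sketch, not ResInsight's actual CI configuration.
name: python-lint
on: [push, pull_request]
jobs:
  black:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
      - run: pip install black
      # --check fails the job without rewriting files
      - run: black --check GrpcInterface/Python
```

Running `black` (without `--check`) locally would reformat the files in place.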
",1.0,"Python : Improve API documentation - ResInsight generates Python code based on application code. The documentation of these structures is currently difficult to find on api.resinsight.org. Investigate how to improve this documentation.
Suggestions
- try to configure Sphinx to create more documentation on auto-generated code
- combine the code in \GrpcInterface\Python\rips\generated\resinsight_classes.py with other code in folder d:\gitroot\ResInsight\GrpcInterface\Python\rips\
- May have to refactor the way we combine the auto-generated and the manually created code to make sphinx understand it.
## Linting
Introduce Python code linting using black. https://github.com/psf/black
See WebViz for example of how to use black
https://github.com/equinor/webviz-config/blob/master/.github/workflows/webviz-config.yml
**Related issues**
#7214
#7141
",0,python improve api documentation resinsight generates python code based on application code the documentation of these structures is currently difficult to find on api resinsight org investigate how to improve this documentation suggestions try to configure sphinx to create more documentation on auto generated code combine the code in grpcinterface python rips generated resinsight classes py with other code in folder d gitroot resinsight grpcinterface python rips may have to refactor the way we combine the auto generated and the manually created code to make sphinx understand it linting introduce python code linting using black see webviz for example of how to use black related issues ,0
20497,3814948358.0,IssuesEvent,2016-03-28 15:49:03,mozilla/pdf.js,https://api.github.com/repos/mozilla/pdf.js,closed,"The ""read with streaming"" unit-test (in network_spec.js) fails on the bots when run using the `unittest` command",1-test,"As testing in PR #7116 shows, the [""read with streaming"" unit-test](https://github.com/mozilla/pdf.js/blob/master/test/unit/network_spec.js#L67) fails on the bots when run using the `unittest` command. *However*, the unit-test pass when run using the `test` command.
This issue thus seems to be identical to https://github.com/mozilla/pdf.js/pull/6209#issuecomment-159606071, which means that we either need to use a locally available PDF file for that test, or change the unit-test framework to be able to deal with linked files.
/cc @brendandahl, @yurydelendik
",1.0,"The ""read with streaming"" unit-test (in network_spec.js) fails on the bots when run using the `unittest` command - As testing in PR #7116 shows, the [""read with streaming"" unit-test](https://github.com/mozilla/pdf.js/blob/master/test/unit/network_spec.js#L67) fails on the bots when run using the `unittest` command. *However*, the unit-test pass when run using the `test` command.
This issue thus seems to be identical to https://github.com/mozilla/pdf.js/pull/6209#issuecomment-159606071, which means that we either need to use a locally available PDF file for that test, or change the unit-test framework to be able to deal with linked files.
/cc @brendandahl, @yurydelendik
",0,the read with streaming unit test in network spec js fails on the bots when run using the unittest command as testing in pr shows the fails on the bots when run using the unittest command however the unit test pass when run using the test command this issue thus seems to be identical to which means that we either need to use a locally available pdf file for that test or change the unit test framework to be able to deal with linked files cc brendandahl yurydelendik ,0
692,9830051833.0,IssuesEvent,2019-06-16 04:40:39,dotnet/corefx,https://api.github.com/repos/dotnet/corefx,closed,"Address ""System.Net.Sockets.SocketException: Address already in use"" on K8S/Linux using HttpClient/TCP",area-System.Net.Http.SocketsHttpHandler bug tenet-compatibility tenet-reliability,"~Assumption: Duplicate of #32027 which was fixed by #32046 - goal: Port it (once confirmed it is truly duplicate).~
This is the HttpClient/TCP spin-off. UdpClient is covered fully by #32027.
# Issue Title
""System.Net.Sockets.SocketException: Address already in use"" on Linux
# General
Our .NET Core (v2.2.0) services are running in an Azure Kubernetes Linux environment. Recently we experienced a lot of ""System.Net.Http.HttpRequestException: Address already in use"" errors while calling dependencies, e.g. Active Directory, CosmosDB and other services. Once the issue started, we kept getting the same errors and had to restart the service to get rid of it. Our HTTP clients use DNS addresses, not specific IPs and ports. The following is the call stack of one example. What can cause such issues and how do we fix it?
```
System.Net.Http.HttpRequestException: Address already in use ---> System.Net.Sockets.SocketException: Address already in use
   at System.Net.Http.ConnectHelper.ConnectAsync(String host, Int32 port, CancellationToken cancellationToken)
   --- End of inner exception stack trace ---
   at System.Net.Http.ConnectHelper.ConnectAsync(String host, Int32 port, CancellationToken cancellationToken)
   at System.Net.Http.HttpConnectionPool.CreateConnectionAsync(HttpRequestMessage request, CancellationToken cancellationToken)
   at System.Net.Http.HttpConnectionPool.WaitForCreatedConnectionAsync(ValueTask`1 creationTask)
   at System.Net.Http.HttpConnectionPool.SendWithRetryAsync(HttpRequestMessage request, Boolean doRequestAuth, CancellationToken cancellationToken)
   at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
   at System.Net.Http.DiagnosticsHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
   at System.Net.Http.HttpClient.FinishSendAsyncBuffered(Task`1 sendTask, HttpRequestMessage request, CancellationTokenSource cts, Boolean disposeCts)
   at Microsoft.IdentityModel.Clients.ActiveDirectory.Internal.Http.HttpClientWrapper.GetResponseAsync()
   at Microsoft.IdentityModel.Clients.ActiveDirectory.Internal.Http.AdalHttpClient.GetResponseAsync[T](Boolean respondToDeviceAuthChallenge)
   at Microsoft.IdentityModel.Clients.ActiveDirectory.Internal.Http.AdalHttpClient.GetResponseAsync[T]()
   at Microsoft.IdentityModel.Clients.ActiveDirectory.Internal.Flows.AcquireTokenHandlerBase.SendHttpMessageAsync(IRequestParameters requestParameters)
   at Microsoft.IdentityModel.Clients.ActiveDirectory.Internal.Flows.AcquireTokenHandlerBase.SendTokenRequestAsync()
   at Microsoft.IdentityModel.Clients.ActiveDirectory.Internal.Flows.AcquireTokenHandlerBase.CheckAndAcquireTokenUsingBrokerAsync()
   at Microsoft.IdentityModel.Clients.ActiveDirectory.Internal.Flows.AcquireTokenHandlerBase.RunAsync()
   at Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext.AcquireTokenForClientCommonAsync(String resource, ClientKey clientKey)
   at Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext.AcquireTokenAsync(String resource, ClientCredential clientCredential)
```
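One quick check for this symptom — a minimal diagnostic sketch, assuming a Linux host such as an AKS node or pod: ""Address already in use"" on an outbound `connect()` usually means the ephemeral port range is exhausted, e.g. by many short-lived connections leaving sockets in TIME_WAIT.

```python
def ephemeral_port_count(path="/proc/sys/net/ipv4/ip_local_port_range"):
    """Return the number of local ports available for outbound connections,
    or None if the range file is not present (non-Linux host)."""
    try:
        with open(path) as f:
            # The file holds two integers: the low and high end of the range.
            low, high = map(int, f.read().split())
    except OSError:
        return None
    return high - low + 1

if __name__ == "__main__":
    print(ephemeral_port_count())
```

If the range is small or the count of TIME_WAIT sockets approaches it, that supports the port-exhaustion explanation; reusing a single long-lived `HttpClient` (so `SocketsHttpHandler` can pool connections) rather than creating one per request is the usual mitigation.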
",True,"Address ""System.Net.Sockets.SocketException: Address already in use"" on K8S/Linux using HttpClient/TCP - ~Assumption: Duplicate of #32027 which was fixed by #32046 - goal: Port it (once confirmed it is truly duplicate).~
This is HttpClient/TCP spin off. UdpClient is covered fully by #32027.
# Issue Title
""System.Net.Sockets.SocketException: Address already in use"" on Linux
# General
Our .NET Core (v2.2.0) services are running in an Azure Kubernetes Linux environment. Recently we experienced a lot of ""System.Net.Http.HttpRequestException: Address already in use"" errors while calling dependencies, e.g. Active Directory, CosmosDB and other services. Once the issue started, we kept getting the same errors and had to restart the service to get rid of it. Our HTTP clients use DNS addresses, not specific IPs and ports. The following is the call stack of one example. What can cause such issues and how do we fix it?
```
System.Net.Http.HttpRequestException: Address already in use --->
System.Net.Sockets.SocketException: Address already in use
at System.Net.Http.ConnectHelper.ConnectAsync(String host, Int32 port, CancellationToken cancellationToken)
--- End of inner exception stack trace ---
at System.Net.Http.ConnectHelper.ConnectAsync(String host, Int32 port, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.CreateConnectionAsync(HttpRequestMessage request, CancellationToken cancellationToken)
at System.Net.Http.HttpConnectionPool.WaitForCreatedConnectionAsync(ValueTask`1 creationTask)
at System.Net.Http.HttpConnectionPool.SendWithRetryAsync(HttpRequestMessage request, Boolean doRequestAuth, CancellationToken cancellationToken)
at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
at System.Net.Http.DiagnosticsHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
at System.Net.Http.HttpClient.FinishSendAsyncBuffered(Task`1 sendTask, HttpRequestMessage request, CancellationTokenSource cts, Boolean disposeCts)
at Microsoft.IdentityModel.Clients.ActiveDirectory.Internal.Http.HttpClientWrapper.GetResponseAsync()
at Microsoft.IdentityModel.Clients.ActiveDirectory.Internal.Http.AdalHttpClient.GetResponseAsync[T](Boolean respondToDeviceAuthChallenge)
at Microsoft.IdentityModel.Clients.ActiveDirectory.Internal.Http.AdalHttpClient.GetResponseAsync[T]()
at Microsoft.IdentityModel.Clients.ActiveDirectory.Internal.Flows.AcquireTokenHandlerBase.SendHttpMessageAsync(IRequestParameters requestParameters)
at Microsoft.IdentityModel.Clients.ActiveDirectory.Internal.Flows.AcquireTokenHandlerBase.SendTokenRequestAsync()
at Microsoft.IdentityModel.Clients.ActiveDirectory.Internal.Flows.AcquireTokenHandlerBase.CheckAndAcquireTokenUsingBrokerAsync()
at Microsoft.IdentityModel.Clients.ActiveDirectory.Internal.Flows.AcquireTokenHandlerBase.RunAsync()
at Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext.AcquireTokenForClientCommonAsync(String resource, ClientKey clientKey)
at Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext.AcquireTokenAsync(String resource, ClientCredential clientCredential)
```
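A common culprit for ""Address already in use"" on Linux clients is ephemeral-port exhaustion: opening a new connection per request (for example, constructing a fresh HttpClient per call instead of sharing one) eventually drains the local port range. A minimal, self-contained sketch of the mechanism (illustrative only, not taken from this report):

```python
import socket
import threading

# Every concurrently live client connection to the same destination must own
# a distinct local ephemeral port; a service that opens a new connection per
# request can therefore drain the ephemeral range and start failing with
# 'Address already in use' on connect().
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('127.0.0.1', 0))
srv.listen(16)
addr = srv.getsockname()

def accept_n(n):
    for _ in range(n):
        conn, _ = srv.accept()
        conn.close()

t = threading.Thread(target=accept_n, args=(5,))
t.start()

clients, used_ports = [], set()
for _ in range(5):
    c = socket.create_connection(addr)
    used_ports.add(c.getsockname()[1])
    clients.append(c)  # hold the sockets open: each one pins a local port
t.join()
for c in clients:
    c.close()
srv.close()

print(len(used_ports))  # 5 connections -> 5 distinct local ports
```

Reusing one pooled, long-lived client keeps connections alive instead of churning through ports.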
",1,address system net sockets socketexception address already in use on linux using httpclient tcp assumption duplicate of which was fixed by goal port it once confirmed it is truly duplicate this is httpclient tcp spin off udpclient is covered fully by issue title system net sockets socketexception address already in use on linux general our net core v services are running on azure kubernettes linux environment recently we experimenced a lot of error system net http httprequestexception address already in use while calling dependencies e g active directory cosmosdb and other services once the issue started we kept getting the same errors and had to restart the service to get rid of it our http clients are using dns address not specific ip and port the following is the call stack on one example what can cause such issues and how to fix it system net http httprequestexception address already in use system net sockets socketexception address already in use n at system net http connecthelper connectasync string host port cancellationtoken cancellationtoken n end of inner exception stack trace n at system net http connecthelper connectasync string host port cancellationtoken cancellationtoken n at system net http httpconnectionpool createconnectionasync httprequestmessage request cancellationtoken cancellationtoken n at system net http httpconnectionpool waitforcreatedconnectionasync valuetask creationtask n at system net http httpconnectionpool sendwithretryasync httprequestmessage request boolean dorequestauth cancellationtoken cancellationtoken n at system net http redirecthandler sendasync httprequestmessage request cancellationtoken cancellationtoken n at system net http diagnosticshandler sendasync httprequestmessage request cancellationtoken cancellationtoken n at system net http httpclient finishsendasyncbuffered task sendtask httprequestmessage request cancellationtokensource cts boolean disposects n at microsoft identitymodel clients activedirectory internal 
http httpclientwrapper getresponseasync n at microsoft identitymodel clients activedirectory internal http adalhttpclient getresponseasync boolean respondtodeviceauthchallenge n at microsoft identitymodel clients activedirectory internal http adalhttpclient getresponseasync n at microsoft identitymodel clients activedirectory internal flows acquiretokenhandlerbase sendhttpmessageasync irequestparameters requestparameters n at microsoft identitymodel clients activedirectory internal flows acquiretokenhandlerbase sendtokenrequestasync n at microsoft identitymodel clients activedirectory internal flows acquiretokenhandlerbase checkandacquiretokenusingbrokerasync n at microsoft identitymodel clients activedirectory internal flows acquiretokenhandlerbase runasync n at microsoft identitymodel clients activedirectory authenticationcontext acquiretokenforclientcommonasync string resource clientkey clientkey n at microsoft identitymodel clients activedirectory authenticationcontext acquiretokenasync string resource clientcredential clientcredential n ,1
2884,29187064143.0,IssuesEvent,2023-05-19 16:19:10,pulumi/pulumi-aws,https://api.github.com/repos/pulumi/pulumi-aws,closed,Created SQS redrive policy shows up in diff,kind/bug impact/reliability customer/feedback bug/diff,"### What happened?
Created a SQS queue and associated dead-letter-queue using the following code:
```
def create_sqs_queue_with_dlq(
        queue_name: str, *,
        message_retention_seconds: int = 3600,
        delay_seconds: int = 60,
        visibility_timeout_seconds: int = 60,
        content_based_deduplication: bool = False,
        fifo_queue: bool = False) -> Queue:
    if fifo_queue:
        queue_name = queue_name + "".fifo""
    queue = Queue(
        queue_name,
        name=queue_name,  # Avoid Pulumi random char suffix.
        message_retention_seconds=message_retention_seconds,
        delay_seconds=delay_seconds,
        visibility_timeout_seconds=visibility_timeout_seconds,
        content_based_deduplication=content_based_deduplication,
        fifo_queue=fifo_queue)
    # Create a Dead-Letter-Queue
    dlq_name: str = f""{queue_name}-DLQ""
    queue_dlq = Queue(
        dlq_name,
        name=dlq_name,  # Avoid Pulumi random char suffix.
        message_retention_seconds=1209600,  # 60 * 60 * 24 * 14 - SQS max message retention is 14 days
        visibility_timeout_seconds=visibility_timeout_seconds,
        fifo_queue=fifo_queue,
        redrive_allow_policy=queue.arn.apply(
            lambda arn: json.dumps({
                ""redrivePermission"": ""byQueue"",
                ""sourceQueueArns"": [arn],
            })
        )
    )
    # Create a redrive policy for the queue to send messages to the DLQ.
    redrive_policy = RedrivePolicy(
        ""redrivePolicy"",
        queue_url=queue.id,
        redrive_policy=queue_dlq.arn.apply(
            lambda arn: json.dumps({
                ""deadLetterTargetArn"": arn,
                ""maxReceiveCount"": 4,
            })
        )
    )
    return queue
```
The AWS resources get created and what shows in the console is fine. But when I run `pulumi up` again, instead of nothing to do, I see the following:
```
Type Name Plan Info
pulumi:pulumi:Stack myproject-development
~ └─ aws:sqs:RedrivePolicy redrivePolicy update [diff: ~redrivePolicy]
Resources:
~ 1 to update
5 unchanged
```
Pulumi does not seem to recognize that the redrive policy was created correctly.
Output of `diff` :
```
$ pulumi preview --diff
Previewing update (project/myproj-compute/myproj-development)
pulumi:pulumi:Stack: (same)
[urn=urn:pulumi:myproj-development::myproj-compute::pulumi:pulumi:Stack::myproj-compute-myproj-development]
~ aws:sqs/redrivePolicy:RedrivePolicy: (update)
[id=https://sqs.us-west-2.amazonaws.com//development-myapp-sqs]
[urn=urn:pulumi:myproj-development::myproj-compute::aws:sqs/redrivePolicy:RedrivePolicy::redrivePolicy]
[provider=urn:pulumi:myproj-development::myproj-compute::pulumi:providers:aws::default_5_18_0::3e6f7b6a-3749-4965-8a22-261e940628a0]
~ redrivePolicy: (json) {
deadLetterTargetArn: ""arn:aws:sqs:us-west-2::development-myapp-sqs-DLQ""
maxReceiveCount : 4
}
Resources:
~ 1 to update
5 unchanged
```
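A frequent cause of this kind of perpetual diff is that the JSON string recorded in state differs byte-for-byte from what the provider reads back (key order, whitespace), even though the two documents are semantically identical. A small illustration of the mismatch (the ARN and formatting choices below are hypothetical, not the provider's actual behavior):

```python
import json

# The policy as the program serializes it (insertion order, default spacing)...
arn = 'arn:aws:sqs:us-west-2:123456789012:myapp-DLQ'  # hypothetical ARN
local = json.dumps({'deadLetterTargetArn': arn, 'maxReceiveCount': 4})
# ...versus the same policy echoed back with different key order and spacing.
remote = json.dumps({'maxReceiveCount': 4, 'deadLetterTargetArn': arn},
                    separators=(',', ':'))

print(local == remote)                          # False: byte-level mismatch
print(json.loads(local) == json.loads(remote))  # True: semantically equal
```

A provider that compares the raw strings will report a change forever; comparing the parsed documents would not.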
### Steps to reproduce
Using the above code snippet to create an AWS SQS queue and DLQ should show the issue.
### Expected Behavior
After running `pulumi up` once and successfully creating the resources, I should not see any further updates when running `pulumi up` or diff subsequently without any modification to code.
### Actual Behavior
Pulumi indicates that the SQS redrive policy has not been created.
### Output of `pulumi about`
_No response_
### Additional context
_No response_
### Contributing
Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).
",True,"Created SQS redrive policy shows up in diff - ### What happened?
Created a SQS queue and associated dead-letter-queue using the following code:
```
def create_sqs_queue_with_dlq(
        queue_name: str, *,
        message_retention_seconds: int = 3600,
        delay_seconds: int = 60,
        visibility_timeout_seconds: int = 60,
        content_based_deduplication: bool = False,
        fifo_queue: bool = False) -> Queue:
    if fifo_queue:
        queue_name = queue_name + "".fifo""
    queue = Queue(
        queue_name,
        name=queue_name,  # Avoid Pulumi random char suffix.
        message_retention_seconds=message_retention_seconds,
        delay_seconds=delay_seconds,
        visibility_timeout_seconds=visibility_timeout_seconds,
        content_based_deduplication=content_based_deduplication,
        fifo_queue=fifo_queue)
    # Create a Dead-Letter-Queue
    dlq_name: str = f""{queue_name}-DLQ""
    queue_dlq = Queue(
        dlq_name,
        name=dlq_name,  # Avoid Pulumi random char suffix.
        message_retention_seconds=1209600,  # 60 * 60 * 24 * 14 - SQS max message retention is 14 days
        visibility_timeout_seconds=visibility_timeout_seconds,
        fifo_queue=fifo_queue,
        redrive_allow_policy=queue.arn.apply(
            lambda arn: json.dumps({
                ""redrivePermission"": ""byQueue"",
                ""sourceQueueArns"": [arn],
            })
        )
    )
    # Create a redrive policy for the queue to send messages to the DLQ.
    redrive_policy = RedrivePolicy(
        ""redrivePolicy"",
        queue_url=queue.id,
        redrive_policy=queue_dlq.arn.apply(
            lambda arn: json.dumps({
                ""deadLetterTargetArn"": arn,
                ""maxReceiveCount"": 4,
            })
        )
    )
    return queue
```
The AWS resources get created and what shows in the console is fine. But when I run `pulumi up` again, instead of nothing to do, I see the following:
```
Type Name Plan Info
pulumi:pulumi:Stack myproject-development
~ └─ aws:sqs:RedrivePolicy redrivePolicy update [diff: ~redrivePolicy]
Resources:
~ 1 to update
5 unchanged
```
Pulumi does not seem to recognize that the redrive policy was created correctly.
Output of `diff` :
```
$ pulumi preview --diff
Previewing update (project/myproj-compute/myproj-development)
pulumi:pulumi:Stack: (same)
[urn=urn:pulumi:myproj-development::myproj-compute::pulumi:pulumi:Stack::myproj-compute-myproj-development]
~ aws:sqs/redrivePolicy:RedrivePolicy: (update)
[id=https://sqs.us-west-2.amazonaws.com//development-myapp-sqs]
[urn=urn:pulumi:myproj-development::myproj-compute::aws:sqs/redrivePolicy:RedrivePolicy::redrivePolicy]
[provider=urn:pulumi:myproj-development::myproj-compute::pulumi:providers:aws::default_5_18_0::3e6f7b6a-3749-4965-8a22-261e940628a0]
~ redrivePolicy: (json) {
deadLetterTargetArn: ""arn:aws:sqs:us-west-2::development-myapp-sqs-DLQ""
maxReceiveCount : 4
}
Resources:
~ 1 to update
5 unchanged
```
### Steps to reproduce
Using the above code snippet to create an AWS SQS queue and DLQ should show the issue.
### Expected Behavior
After running `pulumi up` once and successfully creating the resources, I should not see any further updates when running `pulumi up` or diff subsequently without any modification to code.
### Actual Behavior
Pulumi indicates that the SQS redrive policy has not been created.
### Output of `pulumi about`
_No response_
### Additional context
_No response_
### Contributing
Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).
",1,created sqs redrive policy shows up in diff what happened created a sqs queue and associated dead letter queue using the following code def create sqs queue with dlq queue name str message retention seconds int delay seconds int visibility timeout seconds int content based deduplication bool false fifo queue bool false queue if fifo queue queue name queue name fifo queue queue queue name name queue name avoid pulumi random char suffix message retention seconds message retention seconds delay seconds delay seconds visibility timeout seconds visibility timeout seconds content based deduplication content based deduplication fifo queue fifo queue create a dead letter queue dlq name str f queue name dlq queue dlq queue dlq name name dlq name avoid pulumi random char suffix message retention seconds sqs max message retention is days visibility timeout seconds visibility timeout seconds fifo queue fifo queue redrive allow policy queue arn apply lambda arn json dumps redrivepermission byqueue sourcequeuearns create a redrive policy for the queue to send messages to dlq redrive policy redrivepolicy redrivepolicy queue url queue id redrive policy queue dlq arn apply lambda arn json dumps deadlettertargetarn arn maxreceivecount return queue the aws resources get created and what shows in the console is fine but when i run pulumi up again instead of nothing to do i see the following type name plan info pulumi pulumi stack myproject development └─ aws sqs redrivepolicy redrivepolicy update resources to update unchanged pulumi seems to not recognize that the redrive policy has been created correctly output of diff pulumi preview diff previewing update project myproj compute myproj development pulumi pulumi stack same aws sqs redrivepolicy redrivepolicy update redrivepolicy json deadlettertargetarn arn aws sqs us west development myapp sqs dlq maxreceivecount resources to update unchanged steps to reproduce using the above code snippet to create an aws sqs queue and dll 
should show the issue expected behavior after running pulumi up once and successfully creating the resources i should not see any further updates when running pulumi up or diff subsequently without any modification to code actual behavior pulumi indicates that the sqs redrive policy has not been created output of pulumi about no response additional context no response contributing vote on this issue by adding a 👍 reaction to contribute a fix for this issue leave a comment and link to your pull request if you ve opened one already ,1
1259,14503060156.0,IssuesEvent,2020-12-11 22:01:48,dotnet/roslyn,https://api.github.com/repos/dotnet/roslyn,closed,Unreachable code exception is thrown for incomplete With statement,Area-Compilers Tenet-Reliability,"_This issue has been moved from [a ticket on Developer Community](https://developercommunity2.visualstudio.com/t/Structures-and-Multidimensional-Array/726880)._
---
[regression] [worked-in:Community Preview 16.2.0 (untested)]
What steps will reproduce the problem?
1. Create a structure
Structure MyStruct
    Dim A As Integer
    Dim B As Integer
    Dim C As Integer
End Structure
and create an array
Private MyArr(9, 3, 11) As MyStruct
2.
Trying to assign values
Dim n, p, r As Integer
For n = 0 To 9
    For p = 0 To 3
        For r = 0 To 11
            With MyArr(n, error)
                .A = n
                .B = p
                .C = r
            End With
        Next
    Next
Next
3.
As soon as I try to write , p (after (n)), VS crashes.
4.
The same problem appears in VS Community 16.2.4.
Apparently this happens after upgrading both versions.
Forcing the introduction (commenting the line, for example), writing (n, p, r), and then uncommenting the line, no error appears and the project is created.
---
### Original Comments
#### Feedback Bot on 9/9/2019, 03:51 AM:
We have directed your feedback to the appropriate engineering team for further evaluation. The team will review the feedback and notify you about the next steps.
#### Feedback Bot on 10/31/2019, 07:13 PM:
I have detected that for the last 35 days, this issue didn't have any product team activity and a very small amount of new votes or comments. Based on this, its severity, and affected area, it’s my experience that this issue is very unlikely to be fixed.
#### Wenwen Fan [MSFT] on 12/1/2020, 00:57 AM:
Thank you for taking the time to log this issue! Verified on VS2019 build 16.8.2, we create a VB winform project with the code:
Public Class Form1
    Structure MyStruct
        Dim A As Integer
        Dim B As Integer
        Dim C As Integer
    End Structure
    Private MyArr(9, 3, 11) As MyStruct
    Private Sub Form1_Load(sender As Object, e As EventArgs) Handles MyBase.Load
        Dim n, p, r As Integer
        For n = 0 To 9
            For p = 0 To 3
                For r = 0 To 11
                    With MyArr(n, p, r)
                        .A = n
                        .B = p
                        .C = r
                    End With
                Next
            Next
        Next
    End Sub
End Class
When we write p after n, VS does not crash, but pops up two errors below the menu bar:
“IntroduceVariableCodeRefactoringProvider” encountered an error and has been disabled.
“VisualBasicAddAwaitCodeRefactoringProvider” encountered an error and has been disabled.
Could you please have a try with the latest build, thank you.
#### Feedback Bot on 12/8/2020, 07:33 PM:
We will close this report in 14 days because we don’t have enough information to investigate further. To keep the problem open, please provide the requested details.
---
### Original Solutions
(no solutions)",True,"Unreachable code exception is thrown for incomplete With statement - _This issue has been moved from [a ticket on Developer Community](https://developercommunity2.visualstudio.com/t/Structures-and-Multidimensional-Array/726880)._
---
[regression] [worked-in:Community Preview 16.2.0 (untested)]
What steps will reproduce the problem?
1. Create a structure
Structure MyStruct
    Dim A As Integer
    Dim B As Integer
    Dim C As Integer
End Structure
and create an array
Private MyArr(9, 3, 11) As MyStruct
2.
Trying to assign values
Dim n, p, r As Integer
For n = 0 To 9
    For p = 0 To 3
        For r = 0 To 11
            With MyArr(n, error)
                .A = n
                .B = p
                .C = r
            End With
        Next
    Next
Next
3.
As soon as I try to write , p (after (n)), VS crashes.
4.
The same problem appears in VS Community 16.2.4.
Apparently this happens after upgrading both versions.
Forcing the introduction (commenting the line, for example), writing (n, p, r), and then uncommenting the line, no error appears and the project is created.
---
### Original Comments
#### Feedback Bot on 9/9/2019, 03:51 AM:
We have directed your feedback to the appropriate engineering team for further evaluation. The team will review the feedback and notify you about the next steps.
#### Feedback Bot on 10/31/2019, 07:13 PM:
I have detected that for the last 35 days, this issue didn't have any product team activity and a very small amount of new votes or comments. Based on this, its severity, and affected area, it’s my experience that this issue is very unlikely to be fixed.
#### Wenwen Fan [MSFT] on 12/1/2020, 00:57 AM:
Thank you for taking the time to log this issue! Verified on VS2019 build 16.8.2, we create a VB winform project with the code:
Public Class Form1
    Structure MyStruct
        Dim A As Integer
        Dim B As Integer
        Dim C As Integer
    End Structure
    Private MyArr(9, 3, 11) As MyStruct
    Private Sub Form1_Load(sender As Object, e As EventArgs) Handles MyBase.Load
        Dim n, p, r As Integer
        For n = 0 To 9
            For p = 0 To 3
                For r = 0 To 11
                    With MyArr(n, p, r)
                        .A = n
                        .B = p
                        .C = r
                    End With
                Next
            Next
        Next
    End Sub
End Class
When we write p after n, VS does not crash, but pops up two errors below the menu bar:
“IntroduceVariableCodeRefactoringProvider” encountered an error and has been disabled.
“VisualBasicAddAwaitCodeRefactoringProvider” encountered an error and has been disabled.
Could you please have a try with the latest build, thank you.
#### Feedback Bot on 12/8/2020, 07:33 PM:
We will close this report in 14 days because we don’t have enough information to investigate further. To keep the problem open, please provide the requested details.
---
### Original Solutions
(no solutions)",1,unreachable code exception is thrown for incomplete with statement this issue has been moved from what steps will reproduce the problem create a structure structure mystruct dim a as integer dim b as integer dim c as integer end structure and create an array private myarr as mystruct trying to assign values dim n p r as integer for n to for p to for r to with myarr n error a n b p c r end with next next next as soon as i try to write p after n vs crashes the same problem apears in vs community apparently this happens after upgrading both versions forcing the introdution commenting the line for example and writing n p r and uncommenting the line no error apears and the project is created original comments feedback bot on am we have directed your feedback to the appropriate engineering team for further evaluation the team will review the feedback and notify you about the next steps feedback bot on am thank you for sharing your feedback our teams prioritize action on product issues with broad customer impact see details at feedback bot on pm i have detected that for the last days this issue didn t have any product team activity and a very small amount of new votes or comments nbsp based on this its severity and affected area it’s my experience that this issue is very unlikely to be fixed wenwen fan on am thank you for taking the time to log this issue verified on build we create a vb winform project with the code public class structure mystruct dim a as integer dim b as integer dim c as integer end structure private myarr as mystruct private sub load sender as object e as eventargs handles mybase load dim n p r as integer for n to for p to for r to with myarr n p r a n b p c r end with next next next end sub end class when we write p after n vs not crashed and popup two errors bellow the menu bar “introducevariablecoderefactoringprovider” encountered an error has been disabled “visualbasicaddawaitcoderefactoringprovider” encountered an error and has 
been disabled could you please have a try with the latest build thank you feedback bot on pm we will close this report in days because we don’t have enough information to investigate further to keep the problem open please provide the requested details original solutions no solutions ,1
786820,27694825947.0,IssuesEvent,2023-03-14 00:46:03,open-sauced/insights,https://api.github.com/repos/open-sauced/insights,closed,Bug: Copy link is not permalink,🐛 bug 👀 needs triage high-priority,"### Describe the bug

### Steps to reproduce
The ""copy link"" link should always copy the permalink and not the profile.
### Affected services
opensauced.pizza
### Platforms
_No response_
### Browsers
_No response_
### Environment
_No response_
### Additional context
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
### Contributing Docs
- [ ] I agree to follow this project's Contribution Docs",1.0,"Bug: Copy link is not permalink - ### Describe the bug

### Steps to reproduce
The ""copy link"" link should always copy the permalink and not the profile.
### Affected services
opensauced.pizza
### Platforms
_No response_
### Browsers
_No response_
### Environment
_No response_
### Additional context
_No response_
### Code of Conduct
- [X] I agree to follow this project's Code of Conduct
### Contributing Docs
- [ ] I agree to follow this project's Contribution Docs",0,bug copy link is not permalink describe the bug steps to reproduce this link copy link should always copy the permalink and not the profile affected services opensauced pizza platforms no response browsers no response environment no response additional context no response code of conduct i agree to follow this project s code of conduct contributing docs i agree to follow this project s contribution docs,0
830200,31994506134.0,IssuesEvent,2023-09-21 08:21:58,AdguardTeam/AdguardBrowserExtension,https://api.github.com/repos/AdguardTeam/AdguardBrowserExtension,closed,Firefox extension No Rules apply when turn on the pc,Bug Priority: P4,"### AdGuard Extension version
4.1.57
### Browser version
Firefox 116.0.2
### OS version
Manjaro, Windows 11, MacOS
### What filters do you have enabled?
AdGuard Base filter, AdGuard Chinese filter, AdGuard Tracking Protection filter
### What Stealth Mode options do you have enabled?
Block trackers
### Issue Details
The Firefox extension will not apply any rules from the filters unless I update the filters manually every time I turn on my PC.
### Expected Behavior
Rules are expected to be applied when I turn on the PC and start the browser. The extension should also update the filters by itself.
### Screenshots
Screenshot 1:
### Additional Information
_No response_",1.0,"Firefox extension No Rules apply when turn on the pc - ### AdGuard Extension version
4.1.57
### Browser version
Firefox 116.0.2
### OS version
Manjaro, Windows 11, MacOS
### What filters do you have enabled?
AdGuard Base filter, AdGuard Chinese filter, AdGuard Tracking Protection filter
### What Stealth Mode options do you have enabled?
Block trackers
### Issue Details
The Firefox extension will not apply any rules from the filters unless I update the filters manually every time I turn on my PC.
### Expected Behavior
Rules are expected to be applied when I turn on the PC and start the browser. The extension should also update the filters by itself.
### Screenshots
Screenshot 1:
### Additional Information
_No response_",0,firefox extension no rules apply when turn on the pc adguard extension version browser version firefox os version manjaro windows macos what filters do you have enabled adguard base filter adguard chinese filter adguard tracking protection filter what stealth mode options do you have enabled block trackers issue details the firefox extension will not apply any rules form the filters unless i update the filter manually every time i turn on my pc expected behavior it is expected to have some rules to being applied when i turn on the pc and start up the browser it should also update the filters by itself screenshots screenshot additional information no response ,0
266,5894888783.0,IssuesEvent,2017-05-18 04:19:55,dotnet/corefx,https://api.github.com/repos/dotnet/corefx,closed,Use SetThreadErrorMode instead of SetErrorMode,area-System.IO bug tenet-reliability,"The SetErrorMode Windows API is process-global and thus suffers from a race condition when multiple threads flip the error mode back and forth.
File I/O on Windows should be using SetThreadErrorMode instead (this API is supported on Win7+).
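The hazard can be sketched in a few lines: with a process-global mode, the usual save/set/restore pattern can restore a value that another thread set in the meantime. A toy model of the interleaving (not Windows API code; events force the bad schedule deterministically):

```python
import threading

# Two threads do the classic save/set/restore dance on one process-global
# value. The forced interleaving makes thread B restore A's temporary value
# instead of the original -- the leak a per-thread mode avoids.
mode = 0  # stand-in for the process-wide error mode
e1, e2, e3 = threading.Event(), threading.Event(), threading.Event()

def thread_a():
    global mode
    saved = mode      # saves the original 0
    mode = 1
    e1.set()          # let B run its save/set while A is still inside
    e2.wait()
    mode = saved      # restores 0
    e3.set()

def thread_b():
    global mode
    e1.wait()
    saved = mode      # saves A's temporary 1, not the original 0
    mode = 2
    e2.set()
    e3.wait()
    mode = saved      # restores 1: a foreign mode leaks process-wide

ta = threading.Thread(target=thread_a)
tb = threading.Thread(target=thread_b)
ta.start(); tb.start(); ta.join(); tb.join()
print(mode)  # 1, not the original 0
```

A per-thread mode makes each save/set/restore private to its own thread, so no interleaving can leak a foreign value.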
The full framework calls SetThreadErrorMode on Win7+: https://github.com/Microsoft/referencesource/blob/4fe4349175f4c5091d972a7e56ea12012f1e7170/mscorlib/microsoft/win32/win32native.cs#L1480. Calling SetThreadErrorMode instead of SetErrorMode will both fix the race condition and improve compatibility with full framework.",True,"Use SetThreadErrorMode instead of SetErrorMode - SetErrorMode Windows API is process global and thus it suffers from race condition when multiple threads are flipping the error mode back and forth.
File I/O on Windows should be using SetThreadErrorMode instead (this API is supported on Win7+).
The full framework calls SetThreadErrorMode on Win7+: https://github.com/Microsoft/referencesource/blob/4fe4349175f4c5091d972a7e56ea12012f1e7170/mscorlib/microsoft/win32/win32native.cs#L1480. Calling SetThreadErrorMode instead of SetErrorMode will both fix the race condition and improve compatibility with full framework.",1,use setthreaderrormode instead of seterrormode seterrormode windows api is process global and thus it suffers from race condition when multiple threads are flipping the error mode back and forth file i o on windows should be using setthreaderrormode instead this api is supported on the full framework calls setthreaderrormode on calling setthreaderrormode instead of seterrormode will both fix the race condition and improve compatibility with full framework ,1
2394,25128021917.0,IssuesEvent,2022-11-09 13:14:56,Azure/PSRule.Rules.Azure,https://api.github.com/repos/Azure/PSRule.Rules.Azure,closed, Azure Database for MySQL should have backup configured,rule: mysql pillar: reliability,"# Rule request
## Suggested rule change
Azure Database for MySQL should have backups of the data files and the transaction log.
## Applies to the following
The rule applies to the following:
- Resource type: **[Microsoft.DBforMySQL/servers]**
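For illustration, the intent of the rule could be sketched as a plain predicate over the resource document (the function name and the 7-day threshold are assumptions for the sketch, not the PSRule API; `backupRetentionDays` under `storageProfile` is the relevant ARM property):

```python
# Hypothetical shape of the check: flag Microsoft.DBforMySQL/servers
# resources whose backup retention is unset or below an assumed 7-day floor.
def mysql_backup_configured(resource: dict) -> bool:
    if resource.get('type') != 'Microsoft.DBforMySQL/servers':
        return True  # rule does not apply to other resource types
    profile = resource.get('properties', {}).get('storageProfile', {})
    return profile.get('backupRetentionDays', 0) >= 7

ok = {'type': 'Microsoft.DBforMySQL/servers',
      'properties': {'storageProfile': {'backupRetentionDays': 14}}}
bad = {'type': 'Microsoft.DBforMySQL/servers', 'properties': {}}
print(mysql_backup_configured(ok), mysql_backup_configured(bad))  # True False
```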
## Additional context
Let's use the `Reliability` pillar for this one.
- [Backup and restore in Azure Database for MySQL](https://learn.microsoft.com/azure/mysql/single-server/concepts-backup)
- [Azure template reference](https://learn.microsoft.com/azure/templates/microsoft.dbformysql/servers)",True," Azure Database for MySQL should have backup configured - # Rule request
## Suggested rule change
Azure Database for MySQL should have backups of the data files and the transaction log.
## Applies to the following
The rule applies to the following:
- Resource type: **[Microsoft.DBforMySQL/servers]**
## Additional context
Let's use the `Reliability` pillar for this one.
- [Backup and restore in Azure Database for MySQL](https://learn.microsoft.com/azure/mysql/single-server/concepts-backup)
- [Azure template reference](https://learn.microsoft.com/azure/templates/microsoft.dbformysql/servers)",1, azure database for mysql should have backup configured rule request suggested rule change azure database for mysql should have backups of the data files and the transaction log applies to the following the rule applies to the following resource type additional context lets use the reliability pillar for this one ,1
2151,23762742032.0,IssuesEvent,2022-09-01 10:12:55,adoptium/infrastructure,https://api.github.com/repos/adoptium/infrastructure,closed,Ansible request for AWX to deploy to Windows systems.,ansible reliability,"Our AWX server does not currently have a template for deploying to Windows systems. We should add that and ensure that it is ""safe"" to deploy across all machines. There was one available on the original AWX server, but we need to ensure that the credentials used for connecting to the machine can be kept adequately secure and not published in the logs.",True,"Ansible request for AWX to deploy to Windows systems. - Our AWX server does not currently have a template for deploying to Windows systems. We should add that and ensure that it is ""safe"" to deploy across all machines. There was one available on the original AWX server, but we need to ensure that the credentials used for connecting to the machine can be kept adequately secure and not published in the logs.",1,ansible request for awx to deploy to windows systems our awx server does not currently have a template for deploying to windows systems we should add that and ensure that it is safe to deploy across all machines there was one available on the original awx server but we need to ensure that the credentials used for connecting to the machine can be kept adequately secure and not published in the logs ,1
196080,14792650163.0,IssuesEvent,2021-01-12 14:59:54,elastic/elasticsearch,https://api.github.com/repos/elastic/elasticsearch,closed,FullClusterRestartIT.testDataStreams fails during BWC when started before midnight and finishes after midnight,:Core/Features/Data streams >test-failure Team:Core/Features,"Build: https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+pull-request-bwc/15000/
Build scan: https://gradle-enterprise.elastic.co/s/dv3bwzpn5axfu/tests/:x-pack:qa:full-cluster-restart:v7.11.0%23upgradedClusterTest/org.elasticsearch.xpack.restart.FullClusterRestartIT/testDataStreams#1
The build started on `Dec 17, 2020 11:47:24 PM` and finished `22 min` later with the following exception in the test:
```
org.junit.ComparisonFailure: expected:<.ds-ds-2020.12.1[8]-000001> but was:<.ds-ds-2020.12.1[7]-000001>
```
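The failure is consistent with the backing-index naming convention visible in the message, `.ds-<stream>-<yyyy.MM.dd>-<generation>`: an expected name computed before midnight will not match an index actually created after it. A small sketch (the helper below is illustrative, not the Elasticsearch implementation):

```python
from datetime import datetime, timedelta

# Data stream backing indices embed their creation date; a date captured
# before midnight UTC differs from one captured after, so the names diverge.
def backing_index(stream: str, ts: datetime, generation: int = 1) -> str:
    return f'.ds-{stream}-{ts:%Y.%m.%d}-{generation:06d}'

before = datetime(2020, 12, 17, 23, 47)
after = before + timedelta(minutes=22)  # the 22-minute run crosses midnight
print(backing_index('ds', before))  # .ds-ds-2020.12.17-000001
print(backing_index('ds', after))   # .ds-ds-2020.12.18-000001
```

A fix would make the assertion tolerant of either date (or pin the clock) rather than assuming the test starts and finishes on the same day.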
",1.0,"FullClusterRestartIT.testDataStreams fails during BWC when started before midnight and finishes after midnight - Build: https://elasticsearch-ci.elastic.co/job/elastic+elasticsearch+pull-request-bwc/15000/
Build scan: https://gradle-enterprise.elastic.co/s/dv3bwzpn5axfu/tests/:x-pack:qa:full-cluster-restart:v7.11.0%23upgradedClusterTest/org.elasticsearch.xpack.restart.FullClusterRestartIT/testDataStreams#1
The build started on `Dec 17, 2020 11:47:24 PM` and finished `22 min` later with the following exception in the test:
```
org.junit.ComparisonFailure: expected:<.ds-ds-2020.12.1[8]-000001> but was:<.ds-ds-2020.12.1[7]-000001>
```
",0,fullclusterrestartit testdatastreams fails during bwc when started before midnight and finishes after midnight build build scan the build started on dec pm and finished min later with the following exception in the test org junit comparisonfailure expected but was ,0
254,5735785189.0,IssuesEvent,2017-04-22 01:33:34,Storj/bridge,https://api.github.com/repos/Storj/bridge,closed,Mirrors established to the same nodeID,reliability,"In addition to several shards not having any mirrors (https://github.com/Storj/bridge/issues/386), this upload (359M) had several established mirrors to the same nodeID:
```
~/storj/libstorj-c$ ./src/storj list-mirrors 095ef214fd554287c8d61436 7c17a3afc0b80db6185bb274
Established
-----------
Shard: 0
Hash: ee95db343c18fbf723052cea3309f3b5fa44cf61
storj://82.33.111.116:4004/efca0e3e018e11709f8495308c89c01f8b538306
storj://82.33.111.116:4004/efca0e3e018e11709f8495308c89c01f8b538306
storj://82.33.111.116:4004/efca0e3e018e11709f8495308c89c01f8b538306
storj://73.3.202.189:45659/ef38a4fcb1e60f6f519c57c58a5e8ef9cd9949f4
storj://82.33.111.116:4004/efca0e3e018e11709f8495308c89c01f8b538306
Available
---------
Shard: 0
Hash: ee95db343c18fbf723052cea3309f3b5fa44cf61
storj://95.79.125.120:20233/eebda6a08236b43c05710085c1349cbff73bdb87
storj://78.68.77.92:56446/ee29261a02452695ae402094f9bc100f0cf759b1
storj://86.9.240.168:12503/effeb08c3bf415c63a3583a1f3677c5cf4859b5e
storj://86.13.206.28:30293/ee29448c5d98741b4b97232ae9e3045aff5f1616
storj://73.3.202.189:45659/ef38a4fcb1e60f6f519c57c58a5e8ef9cd9949f4
storj://client014.storj.dk:15010/ef187250a1ef7e2aaa69adfbd34349240a647472
storj://188.242.148.175:6000/ef5d8f452ac2f015f9de3c0a0ab828e2c95ca99e
storj://storj.biliskner.com:6500/ef557a6f3633083cb73c133654d615d57c7f7089
storj://151.252.86.146:4150/ee62b28f8fb47d7c85738e25252b49313ceacbaf
storj://113.154.64.29:4000/eea3049fecb299607ed72ab78662d4a4608039fa
storj://89.190.220.101:4000/ee9b0e4ec3cc74adbfc1c8c30b6e2effd5de19a7
storj://193.40.136.102:4004/efee7655a5b93d154a34bd89657349405a85e55c
storj://144.76.58.177:4011/eff82630f295b987a7d515134e4ed81287b8de6d
storj://188.26.132.0:15093/eda396a3f39ae2496360d252a586126b7136120d
storj://86.9.240.168:12503/effeb08c3bf415c63a3583a1f3677c5cf4859b5e
storj://86.13.206.28:30293/ee29448c5d98741b4b97232ae9e3045aff5f1616
storj://client014.storj.dk:15084/ecabcacdceabdda5ea1fe18c4934fbc0cb4534b9
storj://client016.storj.dk:15081/ee6b24efa912817878e01855b55f9b75fea0e5ea
storj://78.68.77.92:56446/ee29261a02452695ae402094f9bc100f0cf759b1
storj://144.76.58.177:4011/eff82630f295b987a7d515134e4ed81287b8de6d
storj://206.212.243.107:60003/efc435aa46ac4ff161241dc23efb0d9c261058a4
storj://108.49.55.131:4000/edf594cf15591270a33dfeaa9486c376e3012840
storj://46.150.70.134:13997/edd0e509fef6e8e9f7b8c874522970826c84160f
storj://72.220.83.130:4000/ec9f2d3f206a8166ffd30ff84188872e5177d2de
storj://136.61.160.16:9816/ef516bd0181fc1b000a095aa64cd95a1f77881a6
storj://193.106.169.130:4000/ed7555b7bf8398e2dc37eefa01ff916c1a491e1b
storj://stj.no-ip.org:5000/ecbd697e07f3226f91b8f3cdd5aef073ffad183f
storj://client020.storj.dk:15049/ed0c9534530987b76c0f0c4c35590021f8f8a97f
storj://188.242.148.175:6000/ef5d8f452ac2f015f9de3c0a0ab828e2c95ca99e
storj://113.154.64.29:4000/eea3049fecb299607ed72ab78662d4a4608039fa
storj://66.130.26.76:4003/ed05af7cd7218d4baedcbccb963ffb3214001881
storj://92.249.239.222:46790/ec319f45ddf4d6f4285ef288a4c96f41e23ccfcb
storj://83.81.0.192:33150/ed9f895127bf6b41a804fce00868f6e5dde13b73
storj://client014.storj.dk:15010/ef187250a1ef7e2aaa69adfbd34349240a647472
storj://client017.storj.dk:15002/ecc27fc1ac2b6e0752201fd135492b6f84df560a
storj://86.13.206.28:30293/ee29448c5d98741b4b97232ae9e3045aff5f1616
storj://151.252.86.146:4150/ee62b28f8fb47d7c85738e25252b49313ceacbaf
storj://188.242.148.175:6000/ef5d8f452ac2f015f9de3c0a0ab828e2c95ca99e
storj://193.40.136.102:4004/efee7655a5b93d154a34bd89657349405a85e55c
storj://136.61.160.16:9816/ef516bd0181fc1b000a095aa64cd95a1f77881a6
storj://113.154.64.29:4000/eea3049fecb299607ed72ab78662d4a4608039fa
storj://86.9.240.168:12503/effeb08c3bf415c63a3583a1f3677c5cf4859b5e
storj://78.68.77.92:56446/ee29261a02452695ae402094f9bc100f0cf759b1
storj://client016.storj.dk:15081/ee6b24efa912817878e01855b55f9b75fea0e5ea
storj://client014.storj.dk:15010/ef187250a1ef7e2aaa69adfbd34349240a647472
storj://82.47.185.143:45003/eef9d8ffa899caf5c097d109ef6fbb6ebd49edf4
storj://82.47.185.143:50843/ef38e3114c553c7014bef808ce4091db7a72f07a
storj://uberfuturo.dynu.com:4010/ef116519a9acd5362c428c510747b437d4a6bf0c
storj://89.190.220.101:4000/ee9b0e4ec3cc74adbfc1c8c30b6e2effd5de19a7
storj://95.79.125.120:20233/eebda6a08236b43c05710085c1349cbff73bdb87
storj://tamriel.ca:50000/ee5bfd5b6b4e7cbb51d70dcb8a1774d46e844219
storj://client014.storj.dk:15010/ef187250a1ef7e2aaa69adfbd34349240a647472
storj://86.13.206.28:30293/ee29448c5d98741b4b97232ae9e3045aff5f1616
storj://144.76.58.177:4011/eff82630f295b987a7d515134e4ed81287b8de6d
storj://tamriel.ca:50000/ee5bfd5b6b4e7cbb51d70dcb8a1774d46e844219
storj://136.61.160.16:9816/ef516bd0181fc1b000a095aa64cd95a1f77881a6
storj://206.212.243.107:60003/efc435aa46ac4ff161241dc23efb0d9c261058a4
storj://86.9.240.168:12503/effeb08c3bf415c63a3583a1f3677c5cf4859b5e
storj://storj.biliskner.com:6500/ef557a6f3633083cb73c133654d615d57c7f7089
storj://uberfuturo.dynu.com:4010/ef116519a9acd5362c428c510747b437d4a6bf0c
storj://188.242.148.175:6000/ef5d8f452ac2f015f9de3c0a0ab828e2c95ca99e
storj://113.154.64.29:4000/eea3049fecb299607ed72ab78662d4a4608039fa
storj://82.47.185.143:50843/ef38e3114c553c7014bef808ce4091db7a72f07a
storj://78.68.77.92:56446/ee29261a02452695ae402094f9bc100f0cf759b1
storj://193.40.136.102:4004/efee7655a5b93d154a34bd89657349405a85e55c
storj://95.79.125.120:20233/eebda6a08236b43c05710085c1349cbff73bdb87
storj://client016.storj.dk:15081/ee6b24efa912817878e01855b55f9b75fea0e5ea
storj://82.47.185.143:45003/eef9d8ffa899caf5c097d109ef6fbb6ebd49edf4
storj://89.190.220.101:4000/ee9b0e4ec3cc74adbfc1c8c30b6e2effd5de19a7
storj://70.176.140.120:6745/eff95db6ea525c8c27fd29fd320033bf7eb0501a
```",True,"Mirrors established to the same nodeID - In addition to several shards not having any mirrors (https://github.com/Storj/bridge/issues/386), this upload (359M) had several established mirrors to the same nodeID:
```
~/storj/libstorj-c$ ./src/storj list-mirrors 095ef214fd554287c8d61436 7c17a3afc0b80db6185bb274
Established
-----------
Shard: 0
Hash: ee95db343c18fbf723052cea3309f3b5fa44cf61
storj://82.33.111.116:4004/efca0e3e018e11709f8495308c89c01f8b538306
storj://82.33.111.116:4004/efca0e3e018e11709f8495308c89c01f8b538306
storj://82.33.111.116:4004/efca0e3e018e11709f8495308c89c01f8b538306
storj://73.3.202.189:45659/ef38a4fcb1e60f6f519c57c58a5e8ef9cd9949f4
storj://82.33.111.116:4004/efca0e3e018e11709f8495308c89c01f8b538306
Available
---------
Shard: 0
Hash: ee95db343c18fbf723052cea3309f3b5fa44cf61
storj://95.79.125.120:20233/eebda6a08236b43c05710085c1349cbff73bdb87
storj://78.68.77.92:56446/ee29261a02452695ae402094f9bc100f0cf759b1
storj://86.9.240.168:12503/effeb08c3bf415c63a3583a1f3677c5cf4859b5e
storj://86.13.206.28:30293/ee29448c5d98741b4b97232ae9e3045aff5f1616
storj://73.3.202.189:45659/ef38a4fcb1e60f6f519c57c58a5e8ef9cd9949f4
storj://client014.storj.dk:15010/ef187250a1ef7e2aaa69adfbd34349240a647472
storj://188.242.148.175:6000/ef5d8f452ac2f015f9de3c0a0ab828e2c95ca99e
storj://storj.biliskner.com:6500/ef557a6f3633083cb73c133654d615d57c7f7089
storj://151.252.86.146:4150/ee62b28f8fb47d7c85738e25252b49313ceacbaf
storj://113.154.64.29:4000/eea3049fecb299607ed72ab78662d4a4608039fa
storj://89.190.220.101:4000/ee9b0e4ec3cc74adbfc1c8c30b6e2effd5de19a7
storj://193.40.136.102:4004/efee7655a5b93d154a34bd89657349405a85e55c
storj://144.76.58.177:4011/eff82630f295b987a7d515134e4ed81287b8de6d
storj://188.26.132.0:15093/eda396a3f39ae2496360d252a586126b7136120d
storj://86.9.240.168:12503/effeb08c3bf415c63a3583a1f3677c5cf4859b5e
storj://86.13.206.28:30293/ee29448c5d98741b4b97232ae9e3045aff5f1616
storj://client014.storj.dk:15084/ecabcacdceabdda5ea1fe18c4934fbc0cb4534b9
storj://client016.storj.dk:15081/ee6b24efa912817878e01855b55f9b75fea0e5ea
storj://78.68.77.92:56446/ee29261a02452695ae402094f9bc100f0cf759b1
storj://144.76.58.177:4011/eff82630f295b987a7d515134e4ed81287b8de6d
storj://206.212.243.107:60003/efc435aa46ac4ff161241dc23efb0d9c261058a4
storj://108.49.55.131:4000/edf594cf15591270a33dfeaa9486c376e3012840
storj://46.150.70.134:13997/edd0e509fef6e8e9f7b8c874522970826c84160f
storj://72.220.83.130:4000/ec9f2d3f206a8166ffd30ff84188872e5177d2de
storj://136.61.160.16:9816/ef516bd0181fc1b000a095aa64cd95a1f77881a6
storj://193.106.169.130:4000/ed7555b7bf8398e2dc37eefa01ff916c1a491e1b
storj://stj.no-ip.org:5000/ecbd697e07f3226f91b8f3cdd5aef073ffad183f
storj://client020.storj.dk:15049/ed0c9534530987b76c0f0c4c35590021f8f8a97f
storj://188.242.148.175:6000/ef5d8f452ac2f015f9de3c0a0ab828e2c95ca99e
storj://113.154.64.29:4000/eea3049fecb299607ed72ab78662d4a4608039fa
storj://66.130.26.76:4003/ed05af7cd7218d4baedcbccb963ffb3214001881
storj://92.249.239.222:46790/ec319f45ddf4d6f4285ef288a4c96f41e23ccfcb
storj://83.81.0.192:33150/ed9f895127bf6b41a804fce00868f6e5dde13b73
storj://client014.storj.dk:15010/ef187250a1ef7e2aaa69adfbd34349240a647472
storj://client017.storj.dk:15002/ecc27fc1ac2b6e0752201fd135492b6f84df560a
storj://86.13.206.28:30293/ee29448c5d98741b4b97232ae9e3045aff5f1616
storj://151.252.86.146:4150/ee62b28f8fb47d7c85738e25252b49313ceacbaf
storj://188.242.148.175:6000/ef5d8f452ac2f015f9de3c0a0ab828e2c95ca99e
storj://193.40.136.102:4004/efee7655a5b93d154a34bd89657349405a85e55c
storj://136.61.160.16:9816/ef516bd0181fc1b000a095aa64cd95a1f77881a6
storj://113.154.64.29:4000/eea3049fecb299607ed72ab78662d4a4608039fa
storj://86.9.240.168:12503/effeb08c3bf415c63a3583a1f3677c5cf4859b5e
storj://78.68.77.92:56446/ee29261a02452695ae402094f9bc100f0cf759b1
storj://client016.storj.dk:15081/ee6b24efa912817878e01855b55f9b75fea0e5ea
storj://client014.storj.dk:15010/ef187250a1ef7e2aaa69adfbd34349240a647472
storj://82.47.185.143:45003/eef9d8ffa899caf5c097d109ef6fbb6ebd49edf4
storj://82.47.185.143:50843/ef38e3114c553c7014bef808ce4091db7a72f07a
storj://uberfuturo.dynu.com:4010/ef116519a9acd5362c428c510747b437d4a6bf0c
storj://89.190.220.101:4000/ee9b0e4ec3cc74adbfc1c8c30b6e2effd5de19a7
storj://95.79.125.120:20233/eebda6a08236b43c05710085c1349cbff73bdb87
storj://tamriel.ca:50000/ee5bfd5b6b4e7cbb51d70dcb8a1774d46e844219
storj://client014.storj.dk:15010/ef187250a1ef7e2aaa69adfbd34349240a647472
storj://86.13.206.28:30293/ee29448c5d98741b4b97232ae9e3045aff5f1616
storj://144.76.58.177:4011/eff82630f295b987a7d515134e4ed81287b8de6d
storj://tamriel.ca:50000/ee5bfd5b6b4e7cbb51d70dcb8a1774d46e844219
storj://136.61.160.16:9816/ef516bd0181fc1b000a095aa64cd95a1f77881a6
storj://206.212.243.107:60003/efc435aa46ac4ff161241dc23efb0d9c261058a4
storj://86.9.240.168:12503/effeb08c3bf415c63a3583a1f3677c5cf4859b5e
storj://storj.biliskner.com:6500/ef557a6f3633083cb73c133654d615d57c7f7089
storj://uberfuturo.dynu.com:4010/ef116519a9acd5362c428c510747b437d4a6bf0c
storj://188.242.148.175:6000/ef5d8f452ac2f015f9de3c0a0ab828e2c95ca99e
storj://113.154.64.29:4000/eea3049fecb299607ed72ab78662d4a4608039fa
storj://82.47.185.143:50843/ef38e3114c553c7014bef808ce4091db7a72f07a
storj://78.68.77.92:56446/ee29261a02452695ae402094f9bc100f0cf759b1
storj://193.40.136.102:4004/efee7655a5b93d154a34bd89657349405a85e55c
storj://95.79.125.120:20233/eebda6a08236b43c05710085c1349cbff73bdb87
storj://client016.storj.dk:15081/ee6b24efa912817878e01855b55f9b75fea0e5ea
storj://82.47.185.143:45003/eef9d8ffa899caf5c097d109ef6fbb6ebd49edf4
storj://89.190.220.101:4000/ee9b0e4ec3cc74adbfc1c8c30b6e2effd5de19a7
storj://70.176.140.120:6745/eff95db6ea525c8c27fd29fd320033bf7eb0501a
```",1,mirrors established to the same nodeid in addition to several shards not having any mirrors this upload had several established mirrors to the same nodeid storj libstorj c src storj list mirrors established shard hash storj storj storj storj storj available shard hash storj storj storj storj storj storj storj dk storj storj storj biliskner com storj storj storj storj storj storj storj storj storj storj dk storj storj dk storj storj storj storj storj storj storj storj storj stj no ip org storj storj dk storj storj storj storj storj storj storj dk storj storj dk storj storj storj storj storj storj storj storj storj storj dk storj storj dk storj storj storj uberfuturo dynu com storj storj storj tamriel ca storj storj dk storj storj storj tamriel ca storj storj storj storj storj biliskner com storj uberfuturo dynu com storj storj storj storj storj storj storj storj dk storj storj storj ,1
45560,18760657207.0,IssuesEvent,2021-11-05 16:04:32,PipedreamHQ/pipedream,https://api.github.com/repos/PipedreamHQ/pipedream,closed,[FEATURE] delimited file import into a SQL DW table,enhancement sql service,"I’d love the ability to import delimited (CSV/TSV/PSV/etc) files into the data warehouse as 'reference data' tables to join against.
I picture it looking like:
1) select delimited file in a file chooser, and pick a destination table name.
2) if that name already exists, offer the choice of a) replacing all content, or b) merging with the existing content
I would imagine having limits on the total number of cells (rowXcolumn) for the various plans. For a free plan, being able to store up to some reasonable multiple of 100,000 cells seems ideal.
As a later enhancement, the feature could be expanded to allow the import of newline-delimited JSON objects, and linking (or pulling in) data from Google Sheets etc.
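One possible shape of steps 1 and 2, sketched against SQLite as a stand-in for the warehouse (names are illustrative, all columns are stored as TEXT, and "merge" is simplified here to an append):

```python
import csv
import sqlite3

def import_delimited(conn, path, table, mode="replace", delimiter=","):
    """Load a delimited file into a table.

    mode="replace" drops any existing content first (option 2a);
    mode="merge" appends to what is already there (simplified option 2b).
    Columns are taken from the header row.
    """
    with open(path, newline="") as f:
        rows = list(csv.reader(f, delimiter=delimiter))
    header, data = rows[0], rows[1:]
    cols = ", ".join(f'"{c}" TEXT' for c in header)
    conn.execute(f'CREATE TABLE IF NOT EXISTS "{table}" ({cols})')
    if mode == "replace":
        conn.execute(f'DELETE FROM "{table}"')
    conn.executemany(
        f'INSERT INTO "{table}" VALUES ({", ".join("?" for _ in header)})',
        data,
    )
    conn.commit()
```

A cell quota per plan would then just be a check of `len(header) * len(data)` against the plan limit before inserting.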
",1.0,"[FEATURE] delimited file import into a SQL DW table - I’d love the ability to import delimited (CSV/TSV/PSV/etc) files into the data warehouse as 'reference data' tables to join against.
I picture it looking like:
1) select delimited file in a file chooser, and pick a destination table name.
2) if that name already exists, offer the choice of a) replacing all content, or b) merging with the existing content
I would imagine having limits on the total number of cells (rowXcolumn) for the various plans. For a free plan, being able to store up to some reasonable multiple of 100,000 cells seems ideal.
As a later enhancement, the feature could be expanded to allow the import of newline-delimited JSON objects, and linking (or pulling in) data from Google Sheets etc.
",0, delimited file import into a sql dw table i’d love the ability to import delimited csv tsv psv etc files into the data warehouse as reference data tables to join against i picture it looking like select delimited file in a file chooser and pick a destination table name if that name already exists offer the choice of a replacing all content or b merging with the existing content i would imagine having limits on the total number of cells rowxcolumn for the various plans for a free plan being able to store up to some reasonable multiple of cells seems ideal as a later enhancement the feature could be expanded to allow the import of newline delimited json objects and linking or pulling in data from google sheets etc ,0
1404,15867315704.0,IssuesEvent,2021-04-08 16:45:11,emmamei/cdkey,https://api.github.com/repos/emmamei/cdkey,opened,Separate generic animation code from pose code,reliabilityfix,Separate code equally suitable for being collapsed or posed from special code for each.,True,Separate generic animation code from pose code - Separate code equally suitable for being collapsed or posed from special code for each.,1,separate generic animation code from pose code separate code equally suitable for being collapsed or posed from special code for each ,1
2740,27359720671.0,IssuesEvent,2023-02-27 15:07:39,adoptium/infrastructure,https://api.github.com/repos/adoptium/infrastructure,closed,Nagios: Add Solaris Hosts Into Nagios,Nagios reliability,"Currently the solaris build and test hosts are not being monitored in nagios, due to issues relating to connectivity, particularly to the siteox machines. New nagios checks have been written to allow for the non standard connection port, and a first host has been added. This now needs to be applied to the other hosts.",True,"Nagios: Add Solaris Hosts Into Nagios - Currently the solaris build and test hosts are not being monitored in nagios, due to issues relating to connectivity, particularly to the siteox machines. New nagios checks have been written to allow for the non standard connection port, and a first host has been added. This now needs to be applied to the other hosts.",1,nagios add solaris hosts into nagios currently the solaris build and test hosts are not being monitored in nagios due to issues relating to connectivity particularly to the siteox machines new nagios checks have been written to allow for the non standard connection port and a first host has been added this now needs to be applied to the other hosts ,1
528,8335812531.0,IssuesEvent,2018-09-28 04:43:10,dotnet/corefx,https://api.github.com/repos/dotnet/corefx,closed,SocketHttpHandler set up with MaximumConnectionsPerServer could deadlock on concurrent request cancellation,area-System.Net.Http.SocketsHttpHandler bug tenet-reliability,"@karelz @stephentoub
After the deadlock hits, the process has to be restarted. If it continues to be used, the visible symptoms are the inability to communicate with a certain endpoint, and the process may eventually run out of available threads.
Repro project: [DeadlockInSocketsHandler](https://github.com/baal2000/DeadlockInSocketsHandler)
Tested in Windows on SDK 2.1.301
Compile the console app and run. It would produce output similar to:
```
Running the test...
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
Deadlock detected: 2 requests are not completed
Finished the test. Press any key to exit.
```
The deadlock is caused by a race condition, meaning it strikes after a random number of test repetitions on each new application run. The constant values `MaximumConnectionsPerServer` and `MaxRequestCount` can be modified to increase or decrease the probability of the deadlock, but `MaxRequestCount` must be higher than `MaximumConnectionsPerServer` to force some requests into the `ConnectionWaiter` queue. The current values `1` and `2` are the lowest possible; they still reliably reproduce the issue and produce a clean thread picture.
One may then attach to the running process or dump it to investigate the threads.
There will be 2 deadlocked threads, referred to here as ""A"" and ""B"".
**Thread A**
```
System.Private.CoreLib.dll!System.Threading.SpinWait.SpinOnce(int sleep1Threshold)
System.Private.CoreLib.dll!System.Threading.CancellationTokenSource.WaitForCallbackToComplete(long id)
System.Net.Http.dll!System.Net.Http.HttpConnectionPool.DecrementConnectionCount()
System.Net.Http.dll!System.Net.Http.HttpConnection.Dispose(bool disposing)
System.Net.Http.dll!System.Net.Http.HttpConnection.RegisterCancellation.AnonymousMethod__65_0(object s)
System.Private.CoreLib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state)
System.Private.CoreLib.dll!System.Threading.CancellationTokenSource.ExecuteCallbackHandlers(bool throwOnFirstException)
System.Private.CoreLib.dll!System.Threading.CancellationTokenSource.ExecuteCallbackHandlers(bool throwOnFirstException)
DeadlockInSocketsHandler.dll!DeadlockInSocketsHandler.Program.DeadlockTestCore.AnonymousMethod__0() Line 83
System.Private.CoreLib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state)
System.Private.CoreLib.dll!System.Threading.Tasks.Task.ExecuteWithThreadLocal(ref System.Threading.Tasks.Task currentTaskSlot)
System.Private.CoreLib.dll!System.Threading.ThreadPoolWorkQueue.Dispatch()
```
**Thread B**
```
System.Net.Http.dll!System.Net.Http.HttpConnectionPool.GetConnectionAsync.AnonymousMethod__38_0(object s)
System.Private.CoreLib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state)
System.Private.CoreLib.dll!System.Threading.CancellationTokenSource.ExecuteCallbackHandlers(bool throwOnFirstException)
System.Private.CoreLib.dll!System.Threading.CancellationTokenSource.ExecuteCallbackHandlers(bool throwOnFirstException)
DeadlockInSocketsHandler.dll!DeadlockInSocketsHandler.Program.DeadlockTestCore.AnonymousMethod__0() Line 83
System.Private.CoreLib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state)
System.Private.CoreLib.dll!System.Threading.Tasks.Task.ExecuteWithThreadLocal(ref System.Threading.Tasks.Task currentTaskSlot)
System.Private.CoreLib.dll!System.Threading.ThreadPoolWorkQueue.Dispatch()
```
**Explanation**
**Thread A**
1. HttpConnectionPool.DecrementConnectionCount() [entered](https://github.com/dotnet/corefx/blob/ac50832f493f6a554caaeaa3db9426a18b9e9c12/src/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/HttpConnectionPool.cs#L785) `lock(SyncObj)`
2. Spin-waits in CancellationTokenSource.WaitForCallbackToComplete for Thread B to complete HttpConnectionPool.GetConnectionAsync.[AnonymousMethod__38_0](https://github.com/dotnet/corefx/blob/ac50832f493f6a554caaeaa3db9426a18b9e9c12/src/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/HttpConnectionPool.cs#L282) callback
**Thread B**
1. HttpConnectionPool.GetConnectionAsync.[AnonymousMethod__38_0](https://github.com/dotnet/corefx/blob/ac50832f493f6a554caaeaa3db9426a18b9e9c12/src/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/HttpConnectionPool.cs#L282) callback waits to enter lock([SyncObj](https://github.com/dotnet/corefx/blob/ac50832f493f6a554caaeaa3db9426a18b9e9c12/src/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/HttpConnectionPool.cs#L284)) that is held by Thread A
2. SyncObj can never be released by Thread A because it is going to spin-wait infinitely unless Thread B makes progress.
**Conclusion**
Both threads **cannot move**, which confirms the deadlock.
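The two-thread cycle can be modeled in a few lines. This is an illustrative sketch with pure-Python stand-ins, not the actual SocketsHttpHandler internals, and the waits are bounded so the demo terminates instead of actually deadlocking:

```python
import threading

pool_lock = threading.Lock()        # plays the role of SyncObj
a_holds_lock = threading.Event()
callback_done = threading.Event()
result = {}

def thread_a():
    # DecrementConnectionCount: take the pool lock, then wait for the
    # cancellation callback to complete (WaitForCallbackToComplete).
    with pool_lock:
        a_holds_lock.set()
        callback_done.wait(timeout=0.5)   # the real spin-wait is unbounded

def thread_b():
    # GetConnectionAsync cancellation callback: needs the same pool lock.
    a_holds_lock.wait()
    result["acquired"] = pool_lock.acquire(timeout=0.2)
    if result["acquired"]:
        pool_lock.release()
    callback_done.set()

a = threading.Thread(target=thread_a)
b = threading.Thread(target=thread_b)
a.start(); b.start()
a.join(); b.join()
assert result["acquired"] is False   # B cannot enter the lock while A waits on B
```

With unbounded waits, neither thread would ever return, which is exactly the hang observed above.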
**Workarounds**
1. Cancel requests to the same endpoint serially in the application: queue the cancellations and process them sequentially on a single worker thread, or synchronize the cancelling threads with a lock.
2. If possible, do not set the MaxConnectionsPerServer property.",True,"SocketHttpHandler set up with MaximumConnectionsPerServer could deadlock on concurrent request cancellation - @karelz @stephentoub
After the deadlock hits, the process has to be restarted. If it continues to be used, the visible symptoms are the inability to communicate with a certain endpoint, and the process may eventually run out of available threads.
Repro project: [DeadlockInSocketsHandler](https://github.com/baal2000/DeadlockInSocketsHandler)
Tested in Windows on SDK 2.1.301
Compile the console app and run. It would produce output similar to:
```
Running the test...
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
No deadlocks detected: all requests completed.
Deadlock detected: 2 requests are not completed
Finished the test. Press any key to exit.
```
The deadlock is caused by a race condition, meaning it strikes after a random number of test repetitions on each new application run. The constant values `MaximumConnectionsPerServer` and `MaxRequestCount` can be modified to increase or decrease the probability of the deadlock, but `MaxRequestCount` must be higher than `MaximumConnectionsPerServer` to force some requests into the `ConnectionWaiter` queue. The current values `1` and `2` are the lowest possible; they still reliably reproduce the issue and produce a clean thread picture.
One may then attach to the running process or dump it to investigate the threads.
There will be 2 deadlocked threads, referred to here as ""A"" and ""B"".
**Thread A**
```
System.Private.CoreLib.dll!System.Threading.SpinWait.SpinOnce(int sleep1Threshold)
System.Private.CoreLib.dll!System.Threading.CancellationTokenSource.WaitForCallbackToComplete(long id)
System.Net.Http.dll!System.Net.Http.HttpConnectionPool.DecrementConnectionCount()
System.Net.Http.dll!System.Net.Http.HttpConnection.Dispose(bool disposing)
System.Net.Http.dll!System.Net.Http.HttpConnection.RegisterCancellation.AnonymousMethod__65_0(object s)
System.Private.CoreLib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state)
System.Private.CoreLib.dll!System.Threading.CancellationTokenSource.ExecuteCallbackHandlers(bool throwOnFirstException)
System.Private.CoreLib.dll!System.Threading.CancellationTokenSource.ExecuteCallbackHandlers(bool throwOnFirstException)
DeadlockInSocketsHandler.dll!DeadlockInSocketsHandler.Program.DeadlockTestCore.AnonymousMethod__0() Line 83
System.Private.CoreLib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state)
System.Private.CoreLib.dll!System.Threading.Tasks.Task.ExecuteWithThreadLocal(ref System.Threading.Tasks.Task currentTaskSlot)
System.Private.CoreLib.dll!System.Threading.ThreadPoolWorkQueue.Dispatch()
```
**Thread B**
```
System.Net.Http.dll!System.Net.Http.HttpConnectionPool.GetConnectionAsync.AnonymousMethod__38_0(object s)
System.Private.CoreLib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state)
System.Private.CoreLib.dll!System.Threading.CancellationTokenSource.ExecuteCallbackHandlers(bool throwOnFirstException)
System.Private.CoreLib.dll!System.Threading.CancellationTokenSource.ExecuteCallbackHandlers(bool throwOnFirstException)
DeadlockInSocketsHandler.dll!DeadlockInSocketsHandler.Program.DeadlockTestCore.AnonymousMethod__0() Line 83
System.Private.CoreLib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state)
System.Private.CoreLib.dll!System.Threading.Tasks.Task.ExecuteWithThreadLocal(ref System.Threading.Tasks.Task currentTaskSlot)
System.Private.CoreLib.dll!System.Threading.ThreadPoolWorkQueue.Dispatch()
```
**Explanation**
**Thread A**
1. HttpConnectionPool.DecrementConnectionCount() [entered](https://github.com/dotnet/corefx/blob/ac50832f493f6a554caaeaa3db9426a18b9e9c12/src/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/HttpConnectionPool.cs#L785) `lock(SyncObj)`
2. Spin-waits in CancellationTokenSource.WaitForCallbackToComplete for Thread B to complete HttpConnectionPool.GetConnectionAsync.[AnonymousMethod__38_0](https://github.com/dotnet/corefx/blob/ac50832f493f6a554caaeaa3db9426a18b9e9c12/src/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/HttpConnectionPool.cs#L282) callback
**Thread B**
1. HttpConnectionPool.GetConnectionAsync.[AnonymousMethod__38_0](https://github.com/dotnet/corefx/blob/ac50832f493f6a554caaeaa3db9426a18b9e9c12/src/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/HttpConnectionPool.cs#L282) callback waits to enter lock([SyncObj](https://github.com/dotnet/corefx/blob/ac50832f493f6a554caaeaa3db9426a18b9e9c12/src/System.Net.Http/src/System/Net/Http/SocketsHttpHandler/HttpConnectionPool.cs#L284)) that is held by Thread A
2. SyncObj can never be released by Thread A because it is going to spin-wait infinitely unless Thread B makes progress.
**Conclusion**
Both threads **cannot move**, which confirms the deadlock.
**Workarounds**
1. Cancel requests to the same endpoint serially in the application: queue the cancellations and process them sequentially on a single worker thread, or synchronize the cancelling threads with a lock.
2. If possible, oo not set MaxConnectionsPerServer property.",1,sockethttphandler set up with maximumconnectionsperserver could deadlock on concurrent request cancellation karelz stephentoub after the deadlock hits the process has to be restarted if continued to be used the visible symptoms are the inability to communicate with a certain endpoint the process may eventually run out of available threads repro project tested in windows on sdk compile the console app and run it would produce output similar to running the test no deadlocks detected all requests completed no deadlocks detected all requests completed no deadlocks detected all requests completed no deadlocks detected all requests completed no deadlocks detected all requests completed no deadlocks detected all requests completed no deadlocks detected all requests completed no deadlocks detected all requests completed no deadlocks detected all requests completed deadlock detected requests are not completed finished the test press any key to exit the deadlock is caused by a race condition meaning it would strike after a random count of the test repetitions on each new application run the constant values maximumconnectionsperserver and maxrequestcount can be modified to increase decrease probability of the deadlock but maxrequestcount must be higher than maximumconnectionsperserver to force some requests into connectionwaiter queue the current values and are the lowest possible they still reliably reproduce the issue and produce clean threads picture one may then attach to the running process or dump it to investigate the threads there would be deadlocked threads for example named a and b thread a system private corelib dll system threading spinwait spinonce int system private corelib dll system threading cancellationtokensource waitforcallbacktocomplete long id system net http dll system net http httpconnectionpool decrementconnectioncount system net http dll system net http httpconnection dispose bool 
disposing system net http dll system net http httpconnection registercancellation anonymousmethod object s system private corelib dll system threading executioncontext runinternal system threading executioncontext executioncontext system threading contextcallback callback object state system private corelib dll system threading cancellationtokensource executecallbackhandlers bool throwonfirstexception system private corelib dll system threading cancellationtokensource executecallbackhandlers bool throwonfirstexception deadlockinsocketshandler dll deadlockinsocketshandler program deadlocktestcore anonymousmethod line system private corelib dll system threading executioncontext runinternal system threading executioncontext executioncontext system threading contextcallback callback object state system private corelib dll system threading tasks task executewiththreadlocal ref system threading tasks task currenttaskslot system private corelib dll system threading threadpoolworkqueue dispatch thread b system net http dll system net http httpconnectionpool getconnectionasync anonymousmethod object s system private corelib dll system threading executioncontext runinternal system threading executioncontext executioncontext system threading contextcallback callback object state system private corelib dll system threading cancellationtokensource executecallbackhandlers bool throwonfirstexception system private corelib dll system threading cancellationtokensource executecallbackhandlers bool throwonfirstexception deadlockinsocketshandler dll deadlockinsocketshandler program deadlocktestcore anonymousmethod line system private corelib dll system threading executioncontext runinternal system threading executioncontext executioncontext system threading contextcallback callback object state system private corelib dll system threading tasks task executewiththreadlocal ref system threading tasks task currenttaskslot system private corelib dll system threading threadpoolworkqueue 
dispatch explanation thread a httpconnectionpool decrementconnectioncount lock syncobj spin waits in cancellationtokensource waitforcallbacktocomplete for thread b to complete httpconnectionpool getconnectionasync callback thread b httpconnectionpool getconnectionasync callback waits to enter lock that is held by thread a syncobj can never be released thread a because it is going to spin wait infinitely unless thread b makes progress conclusion both threads cannot move that confirms the deadlock workarounds cancel the requests to the same endpoint serially by the application the request cancellation could be queued and then processed sequentially on a single worker thread or the cancellation threads could be synchronized by a lock if possible do not set maxconnectionsperserver property ,1
1441,16110292955.0,IssuesEvent,2021-04-27 20:11:24,Azure/azure-sdk-for-java,https://api.github.com/repos/Azure/azure-sdk-for-java,closed,Investigate Event Hubs Service Stress test failures,Azure.Core Event Hubs amqp tenet-reliability,The service team has a set of stress tests that they run which we added T2 support in. Investigate failures in the T2 library suite.,True,Investigate Event Hubs Service Stress test failures - The service team has a set of stress tests that they run which we added T2 support in. Investigate failures in the T2 library suite.,1,investigate event hubs service stress test failures the service team has a set of stress tests that they run which we added support in investigate failures in the library suite ,1
1746,19411746274.0,IssuesEvent,2021-12-20 10:21:09,kata-containers/tests,https://api.github.com/repos/kata-containers/tests,closed,fedora docker registry issue failing jobs,unreliable,"From the jobs on https://github.com/kata-containers/kata-containers/pull/3297:
```
+ docker build --build-arg http_proxy= --build-arg https_proxy= -t image-builder-osbuilder /tmp/jenkins/workspace/kata-containers-2.0-ubuntu-20.04-PR/go/src/github.com/kata-containers/kata-containers/tools/osbuilder/image-builder
Sending build context to Docker daemon 24.06kB
Step 1/4 : ARG IMAGE_REGISTRY=registry.fedoraproject.org
Step 2/4 : FROM ${IMAGE_REGISTRY}/fedora:34
34: Pulling from fedora
manifest for registry.fedoraproject.org/fedora:34 not found: manifest unknown: manifest unknown
make: *** [Makefile:130: /tmp/jenkins/workspace/kata-containers-2.0-ubuntu-20.04-PR/go/src/github.com/kata-containers/kata-containers/tools/osbuilder/kata-containers.img] Error 1
[install_kata_image.sh:50] ERROR: sudo -E USE_DOCKER=1 DISTRO=ubuntu make -e image
[install_kata.sh:22] ERROR: .ci/install_kata_image.sh
[jenkins_job_build.sh:172] ERROR: ci/setup.sh
Build step 'Execute shell' marked build as failure
```
And:
```
Step 1/4 : ARG IMAGE_REGISTRY=registry.fedoraproject.org
Step 2/4 : FROM ${IMAGE_REGISTRY}/fedora:34
WARNING: ⚠️ Failed to pull manifest by the resolved digest. This registry does not
appear to conform to the distribution registry specification; falling back to
pull by tag. This fallback is DEPRECATED, and will be removed in a future
release. Please contact admins of https://registry.fedoraproject.org. ⚠️
34: Pulling from fedora
manifest for registry.fedoraproject.org/fedora:34 not found: manifest unknown: manifest unknown
Makefile:130: recipe for target '/tmp/jenkins/workspace/kata-containers-2.0-ubuntu-PR-x86_64-clh-crio-kata-repo/go/src/github.com/kata-containers/kata-containers/tools/osbuilder/kata-containers.img' failed
make: *** [/tmp/jenkins/workspace/kata-containers-2.0-ubuntu-PR-x86_64-clh-crio-kata-repo/go/src/github.com/kata-containers/kata-containers/tools/osbuilder/kata-containers.img] Error 1
[install_kata_image.sh:50] ERROR: sudo -E USE_DOCKER=1 DISTRO=ubuntu make -e image
[install_kata.sh:22] ERROR: .ci/install_kata_image.sh
[jenkins_job_build.sh:172] ERROR: ci/setup.sh
```
```",True,"fedora docker registry issue failing jobs - From the jobs on https://github.com/kata-containers/kata-containers/pull/3297:
```
+ docker build --build-arg http_proxy= --build-arg https_proxy= -t image-builder-osbuilder /tmp/jenkins/workspace/kata-containers-2.0-ubuntu-20.04-PR/go/src/github.com/kata-containers/kata-containers/tools/osbuilder/image-builder
Sending build context to Docker daemon 24.06kB
Step 1/4 : ARG IMAGE_REGISTRY=registry.fedoraproject.org
Step 2/4 : FROM ${IMAGE_REGISTRY}/fedora:34
34: Pulling from fedora
manifest for registry.fedoraproject.org/fedora:34 not found: manifest unknown: manifest unknown
make: *** [Makefile:130: /tmp/jenkins/workspace/kata-containers-2.0-ubuntu-20.04-PR/go/src/github.com/kata-containers/kata-containers/tools/osbuilder/kata-containers.img] Error 1
[install_kata_image.sh:50] ERROR: sudo -E USE_DOCKER=1 DISTRO=ubuntu make -e image
[install_kata.sh:22] ERROR: .ci/install_kata_image.sh
[jenkins_job_build.sh:172] ERROR: ci/setup.sh
Build step 'Execute shell' marked build as failure
```
And:
```
Step 1/4 : ARG IMAGE_REGISTRY=registry.fedoraproject.org
Step 2/4 : FROM ${IMAGE_REGISTRY}/fedora:34
WARNING: ⚠️ Failed to pull manifest by the resolved digest. This registry does not
appear to conform to the distribution registry specification; falling back to
pull by tag. This fallback is DEPRECATED, and will be removed in a future
release. Please contact admins of https://registry.fedoraproject.org. ⚠️
34: Pulling from fedora
manifest for registry.fedoraproject.org/fedora:34 not found: manifest unknown: manifest unknown
Makefile:130: recipe for target '/tmp/jenkins/workspace/kata-containers-2.0-ubuntu-PR-x86_64-clh-crio-kata-repo/go/src/github.com/kata-containers/kata-containers/tools/osbuilder/kata-containers.img' failed
make: *** [/tmp/jenkins/workspace/kata-containers-2.0-ubuntu-PR-x86_64-clh-crio-kata-repo/go/src/github.com/kata-containers/kata-containers/tools/osbuilder/kata-containers.img] Error 1
[install_kata_image.sh:50] ERROR: sudo -E USE_DOCKER=1 DISTRO=ubuntu make -e image
[install_kata.sh:22] ERROR: .ci/install_kata_image.sh
[jenkins_job_build.sh:172] ERROR: ci/setup.sh
```
```",1,fedora docker registry issue failing jobs from the jobs on docker build build arg http proxy build arg https proxy t image builder osbuilder tmp jenkins workspace kata containers ubuntu pr go src github com kata containers kata containers tools osbuilder image builder sending build context to docker daemon step arg image registry registry fedoraproject org step from image registry fedora pulling from fedora manifest for registry fedoraproject org fedora not found manifest unknown manifest unknown make error error sudo e use docker distro ubuntu make e image error ci install kata image sh error ci setup sh build step execute shell marked build as failure and step arg image registry registry fedoraproject org step from image registry fedora warning ⚠️ failed to pull manifest by the resolved digest this registry does not appear to conform to the distribution registry specification falling back to pull by tag this fallback is deprecated and will be removed in a future release please contact admins of ⚠️ pulling from fedora manifest for registry fedoraproject org fedora not found manifest unknown manifest unknown makefile recipe for target tmp jenkins workspace kata containers ubuntu pr clh crio kata repo go src github com kata containers kata containers tools osbuilder kata containers img failed make error error sudo e use docker distro ubuntu make e image error ci install kata image sh error ci setup sh ,1
1969,22293540797.0,IssuesEvent,2022-06-12 18:13:35,jina-ai/jina,https://api.github.com/repos/jina-ai/jina,closed,Improve K8s readiness check,epic/reliability,Investigate if we can improve the Readiness check in Kubernetes. Ideas: Use gRPC instead of TCP or use the container health checks or use the Prometheus metrics,True,Improve K8s readiness check - Investigate if we can improve the Readiness check in Kubernetes. Ideas: Use gRPC instead of TCP or use the container health checks or use the Prometheus metrics,1,improve readiness check investigate if we can improve the readiness check in kubernetes ideas use grpc instead of tcp or use the container health checks or use the prometheus metrics,1
2243,24439496390.0,IssuesEvent,2022-10-06 13:44:37,Azure/PSRule.Rules.Azure,https://api.github.com/repos/Azure/PSRule.Rules.Azure,closed,Enable purge protection for App Configuration stores,rule: app-configuration pillar: reliability,"# Rule request
## Suggested rule change
App Configuration supports purge protection to extend the protection provided by soft-delete. Purge protection limits data loss caused by accidental and malicious purges of deleted configuration stores by enforcing a mandatory retention interval.
This feature only applies to Standard SKU configuration stores. Free configuration stores should be ignored by this rule.
This is enabled by setting the `properties.enablePurgeProtection` property to `true`.
## Applies to the following
The rule applies to the following:
- Resource type: **Microsoft.AppConfiguration/configurationStores**
## Additional context
[Azure deployment reference](https://learn.microsoft.com/azure/templates/microsoft.appconfiguration/configurationstores)
[Purge protection](https://learn.microsoft.com/azure/azure-app-configuration/concept-soft-delete#purge-protection)
Related rules include:
- https://azure.github.io/PSRule.Rules.Azure/en/rules/Azure.KeyVault.PurgeProtect/
",True,"Enable purge protection for App Configuration stores - # Rule request
## Suggested rule change
App Configuration supports purge protection to extend the protection provided by soft-delete. Purge protection limits data loss caused by accidental and malicious purges of deleted configuration stores by enforcing a mandatory retention interval.
This feature only applies to Standard SKU configuration stores. Free configuration stores should be ignored by this rule.
This is enabled by setting the `properties.enablePurgeProtection` property to `true`.
## Applies to the following
The rule applies to the following:
- Resource type: **Microsoft.AppConfiguration/configurationStores**
## Additional context
[Azure deployment reference](https://learn.microsoft.com/azure/templates/microsoft.appconfiguration/configurationstores)
[Purge protection](https://learn.microsoft.com/azure/azure-app-configuration/concept-soft-delete#purge-protection)
Related rules include:
- https://azure.github.io/PSRule.Rules.Azure/en/rules/Azure.KeyVault.PurgeProtect/
",1,enable purge protection for app configuration stores rule request suggested rule change app configuration supports purge protection to extend the protection provided by soft delete purge protection limits data loss causes by accidental and malicious purges of deleted configuration stores by enforcing an mandatory retention interval this feature only applies to standard sku configuration stores free configuration stores should be ignored by this rule this is enabled by setting the properties enablepurgeprotection property to true applies to the following the rule applies to the following resource type microsoft appconfiguration configurationstores additional context related rules include ,1
294,6041238425.0,IssuesEvent,2017-06-10 22:12:29,UofSSpaceTeam/roveberrypy,https://api.github.com/repos/UofSSpaceTeam/roveberrypy,closed,DriveProcess fails to handle None messages from WebServer,Bug Drive Reliability,"See log below:
```root : INFO Enabled modules:
root : INFO ('DriveProcess', 'USBServer', 'WebServer')
root : INFO Registering process subscribers...
root : INFO STARTING: ['DriveProcess', 'USBServer', 'WebServer']
root : INFO WATCHDOG: Monitoring for hanging RoverRrocess instances
StateManager : DEBUG Watchdog: Timer 0 Watching {}
Web Templates Loaded From: ['./WebUI/views']
StateManager : DEBUG Watchdog: Timer 0 Watching {'WebServer': True, 'DriveProcess': True}
USBServer : DEBUG b'\x02\x08$wheelRM{\xd8\x03'
DriveProcess : DEBUG 0
Exception in thread Thread-1:
Traceback (most recent call last):
File ""/usr/lib/python3.4/threading.py"", line 920, in _bootstrap_inner
self.run()
File ""/home/pi/roveberrypy/roverprocess/RoverProcess.py"", line 59, in run
getattr(self._parent, ""on_"" + message.key)(message.data)
File ""/home/pi/roveberrypy/roverprocess/DriveProcess.py"", line 51, in on_joystick2
y_axis = (y_axis * 40000/2)
TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'
```
This is run on the dev branch on the real rover. Can be fixed by explicitly checking for None messages (is this a hack?).",True,"DriveProcess fails to handle None messages from WebServer - See log below:
```root : INFO Enabled modules:
root : INFO ('DriveProcess', 'USBServer', 'WebServer')
root : INFO Registering process subscribers...
root : INFO STARTING: ['DriveProcess', 'USBServer', 'WebServer']
root : INFO WATCHDOG: Monitoring for hanging RoverRrocess instances
StateManager : DEBUG Watchdog: Timer 0 Watching {}
Web Templates Loaded From: ['./WebUI/views']
StateManager : DEBUG Watchdog: Timer 0 Watching {'WebServer': True, 'DriveProcess': True}
USBServer : DEBUG b'\x02\x08$wheelRM{\xd8\x03'
DriveProcess : DEBUG 0
Exception in thread Thread-1:
Traceback (most recent call last):
File ""/usr/lib/python3.4/threading.py"", line 920, in _bootstrap_inner
self.run()
File ""/home/pi/roveberrypy/roverprocess/RoverProcess.py"", line 59, in run
getattr(self._parent, ""on_"" + message.key)(message.data)
File ""/home/pi/roveberrypy/roverprocess/DriveProcess.py"", line 51, in on_joystick2
y_axis = (y_axis * 40000/2)
TypeError: unsupported operand type(s) for *: 'NoneType' and 'int'
```
This is run on the dev branch on the real rover. Can be fixed by explicitly checking for None messages (is this a hack?).",1,driveprocess fails to handle none messages from webserver see log below root info enabled modules root info driveprocess usbserver webserver root info registering process subscribers root info starting root info watchdog monitoring for hanging roverrrocess instances statemanager debug watchdog timer watching web templates loaded from statemanager debug watchdog timer watching webserver true driveprocess true usbserver debug b wheelrm driveprocess debug exception in thread thread traceback most recent call last file usr lib threading py line in bootstrap inner self run file home pi roveberrypy roverprocess roverprocess py line in run getattr self parent on message key message data file home pi roveberrypy roverprocess driveprocess py line in on y axis y axis typeerror unsupported operand type s for nonetype and int this is run on the dev branch on the real rover can be fixed by explicitly checking for none messages is this a hack ,1
132,4140853928.0,IssuesEvent,2016-06-14 00:53:54,dotnet/coreclr,https://api.github.com/repos/dotnet/coreclr,closed,Exit Code 139 on random CLI xplat runs,blocking-release Linux reliability,"Lately I've been seeing more and more ""Expected command to pass but it did not. Exit Code: 139"" issues on CLI runs.
In the past, when we've seen ""Exit Code 139"", it means ""segmentation fault"" and usually in crossgened managed code.
I don't have a solid repro yet.
Opening this issue to track all the places we've seen it. Maybe from logging these failures, and looking through the logs we will be able to determine what is the exact cause of the Exit Code 139.
Here are the first logs:
http://dotnet-ci.cloudapp.net/job/dotnet_cli/job/rel_1.0.0-preview2/job/debug_debian8.2_x64_prtest/97/testReport/junit/Microsoft.DotNet.Tests/PackagedCommandTests/TestProjectDependencyIsNotAvailableThroughDriver/
```
Expected command to pass but it did not. File Name: /mnt/resource/j/workspace/dotnet_cli/rel_1.0.0-preview2/debug_debian8.2_x64_prtest/artifacts/debian.8-x64/stage2/dotnet Arguments: --verbose build --build-profile ""/mnt/resource/j/workspace/dotnet_cli/rel_1.0.0-preview2/debug_debian8.2_x64_prtest/test/dotnet.Tests/bin/Debug/netcoreapp1.0/TestAssets/TestProjects/AppWithDirectDependency/project.json"" Exit Code: 139 StdOut: StdErr:
```
https://mseng.visualstudio.com/dotnetcore/_build?favDefinitionId=3598&_a=summary&buildId=3052704
```
2016-06-09T17:36:40.6997100Z Microsoft.DotNet.Tools.Builder.Tests.BuildPerformanceTest.IncrementalSkipAllNoDependenciesInGraph_TwoTargetGraphLarge [FAIL]
2016-06-09T17:36:40.7002620Z Expected command to pass but it did not.
2016-06-09T17:36:40.7008200Z File Name: /opt/code/artifacts/centos.7-x64/stage2/dotnet
2016-06-09T17:36:40.7016130Z Arguments: --verbose build --no-dependencies ""/opt/code/test/Performance/bin/Release/netcoreapp1.0/IncrementalSkipAllNoDependenciesInGraph_TwoTargetGraphLarge/PerformanceTestProjects/TwoTargetGraphLarge/TwoTargetLargeP4"" --framework netcoreapp1.0
2016-06-09T17:36:40.7022120Z Exit Code: 139
2016-06-09T17:36:40.7026560Z StdOut:
2016-06-09T17:36:40.7030930Z
2016-06-09T17:36:40.7036240Z StdErr:
```
https://mseng.visualstudio.com/dotnetcore/_build?favDefinitionId=3603&_a=summary&buildId=3053398
```
2016-06-09T21:08:52.706Z: Executing - /Users/SHAREDADMIN/vsoagent/_work/6/s/artifacts/osx.10.11-x64/stage2/dotnet --verbose build --build-profile ""/Users/SHAREDADMIN/vsoagent/_work/6/s/artifacts/osx.10.11-x64/tests/artifacts/5c513543-b685-4730-88ad-47d72b6ddccb/e2etestroot/project.json"" -o ""/Users/SHAREDADMIN/vsoagent/_work/6/s/artifacts/osx.10.11-x64/tests/artifacts/5c513543-b685-4730-88ad-47d72b6ddccb/e2etestroot/test space/bin"" --framework netcoreapp1.0
2016-06-09T21:08:52.830Z: Telemetry is: Enabled
2016-06-09T21:08:53.682Z: Microsoft.DotNet.Tests.EndToEnd.EndToEndTest.TestDotnetBuild [FAIL]
2016-06-09T21:08:53.682Z: Expected command to pass but it did not.
2016-06-09T21:08:53.682Z: File Name: /Users/SHAREDADMIN/vsoagent/_work/6/s/artifacts/osx.10.11-x64/stage2/dotnet
2016-06-09T21:08:53.682Z: Arguments: --verbose build --build-profile ""/Users/SHAREDADMIN/vsoagent/_work/6/s/artifacts/osx.10.11-x64/tests/artifacts/5c513543-b685-4730-88ad-47d72b6ddccb/e2etestroot/project.json"" -o ""/Users/SHAREDADMIN/vsoagent/_work/6/s/artifacts/osx.10.11-x64/tests/artifacts/5c513543-b685-4730-88ad-47d72b6ddccb/e2etestroot/test space/bin"" --framework netcoreapp1.0
2016-06-09T21:08:53.682Z: Exit Code: 138
```
Technically that one is exit code **138**, but this is OSX so that could be the difference.
https://mseng.visualstudio.com/dotnetcore/_build?favDefinitionId=3605&_a=summary&buildId=3052708
```
2016-06-09T17:35:55.829Z: Build failed: Microsoft.DotNet.Cli.Build.Framework.BuildFailureException: Command failed with exit code 139: /opt/code/artifacts/ubuntu.14.04-x64/stage2/dotnet ""build"" ""--configuration"" ""Release""
2016-06-09T17:35:55.829Z:
2016-06-09T17:35:55.829Z: at Microsoft.DotNet.Cli.Build.TestTargets.BuildTests(BuildTargetContext c)
2016-06-09T17:35:55.829Z: at Microsoft.DotNet.Cli.Build.Framework.BuildContext.ExecTarget(BuildTarget target)
```
@brthor @piotrpMSFT @gkhanna79 @sergiy-k @JohnChen0 @adityamandaleeka @joshfree ",True,"Exit Code 139 on random CLI xplat runs - Lately I've been seeing more and more ""Expected command to pass but it did not. Exit Code: 139"" issues on CLI runs.
In the past, when we've seen ""Exit Code 139"", it means ""segmentation fault"" and usually in crossgened managed code.
I don't have a solid repro yet.
Opening this issue to track all the places we've seen it. Maybe from logging these failures, and looking through the logs we will be able to determine what is the exact cause of the Exit Code 139.
Here are the first logs:
http://dotnet-ci.cloudapp.net/job/dotnet_cli/job/rel_1.0.0-preview2/job/debug_debian8.2_x64_prtest/97/testReport/junit/Microsoft.DotNet.Tests/PackagedCommandTests/TestProjectDependencyIsNotAvailableThroughDriver/
```
Expected command to pass but it did not. File Name: /mnt/resource/j/workspace/dotnet_cli/rel_1.0.0-preview2/debug_debian8.2_x64_prtest/artifacts/debian.8-x64/stage2/dotnet Arguments: --verbose build --build-profile ""/mnt/resource/j/workspace/dotnet_cli/rel_1.0.0-preview2/debug_debian8.2_x64_prtest/test/dotnet.Tests/bin/Debug/netcoreapp1.0/TestAssets/TestProjects/AppWithDirectDependency/project.json"" Exit Code: 139 StdOut: StdErr:
```
https://mseng.visualstudio.com/dotnetcore/_build?favDefinitionId=3598&_a=summary&buildId=3052704
```
2016-06-09T17:36:40.6997100Z Microsoft.DotNet.Tools.Builder.Tests.BuildPerformanceTest.IncrementalSkipAllNoDependenciesInGraph_TwoTargetGraphLarge [FAIL]
2016-06-09T17:36:40.7002620Z Expected command to pass but it did not.
2016-06-09T17:36:40.7008200Z File Name: /opt/code/artifacts/centos.7-x64/stage2/dotnet
2016-06-09T17:36:40.7016130Z Arguments: --verbose build --no-dependencies ""/opt/code/test/Performance/bin/Release/netcoreapp1.0/IncrementalSkipAllNoDependenciesInGraph_TwoTargetGraphLarge/PerformanceTestProjects/TwoTargetGraphLarge/TwoTargetLargeP4"" --framework netcoreapp1.0
2016-06-09T17:36:40.7022120Z Exit Code: 139
2016-06-09T17:36:40.7026560Z StdOut:
2016-06-09T17:36:40.7030930Z
2016-06-09T17:36:40.7036240Z StdErr:
```
https://mseng.visualstudio.com/dotnetcore/_build?favDefinitionId=3603&_a=summary&buildId=3053398
```
2016-06-09T21:08:52.706Z: Executing - /Users/SHAREDADMIN/vsoagent/_work/6/s/artifacts/osx.10.11-x64/stage2/dotnet --verbose build --build-profile ""/Users/SHAREDADMIN/vsoagent/_work/6/s/artifacts/osx.10.11-x64/tests/artifacts/5c513543-b685-4730-88ad-47d72b6ddccb/e2etestroot/project.json"" -o ""/Users/SHAREDADMIN/vsoagent/_work/6/s/artifacts/osx.10.11-x64/tests/artifacts/5c513543-b685-4730-88ad-47d72b6ddccb/e2etestroot/test space/bin"" --framework netcoreapp1.0
2016-06-09T21:08:52.830Z: Telemetry is: Enabled
2016-06-09T21:08:53.682Z: Microsoft.DotNet.Tests.EndToEnd.EndToEndTest.TestDotnetBuild [FAIL]
2016-06-09T21:08:53.682Z: Expected command to pass but it did not.
2016-06-09T21:08:53.682Z: File Name: /Users/SHAREDADMIN/vsoagent/_work/6/s/artifacts/osx.10.11-x64/stage2/dotnet
2016-06-09T21:08:53.682Z: Arguments: --verbose build --build-profile ""/Users/SHAREDADMIN/vsoagent/_work/6/s/artifacts/osx.10.11-x64/tests/artifacts/5c513543-b685-4730-88ad-47d72b6ddccb/e2etestroot/project.json"" -o ""/Users/SHAREDADMIN/vsoagent/_work/6/s/artifacts/osx.10.11-x64/tests/artifacts/5c513543-b685-4730-88ad-47d72b6ddccb/e2etestroot/test space/bin"" --framework netcoreapp1.0
2016-06-09T21:08:53.682Z: Exit Code: 138
```
Technically that one is exit code **138**, but this is OSX so that could be the difference.
https://mseng.visualstudio.com/dotnetcore/_build?favDefinitionId=3605&_a=summary&buildId=3052708
```
2016-06-09T17:35:55.829Z: Build failed: Microsoft.DotNet.Cli.Build.Framework.BuildFailureException: Command failed with exit code 139: /opt/code/artifacts/ubuntu.14.04-x64/stage2/dotnet ""build"" ""--configuration"" ""Release""
2016-06-09T17:35:55.829Z:
2016-06-09T17:35:55.829Z: at Microsoft.DotNet.Cli.Build.TestTargets.BuildTests(BuildTargetContext c)
2016-06-09T17:35:55.829Z: at Microsoft.DotNet.Cli.Build.Framework.BuildContext.ExecTarget(BuildTarget target)
```
@brthor @piotrpMSFT @gkhanna79 @sergiy-k @JohnChen0 @adityamandaleeka @joshfree ",1,exit code on random cli xplat runs lately i ve been seeing more and more expected command to pass but it did not exit code issues on cli runs in the past when we ve seen exit code it means segmentation fault and usually in crossgened managed code i don t have a solid repro yet opening this issue to track all the places we ve seen it maybe from logging these failures and looking through the logs we will be able to determine what is the exact cause of the exit code here are the first logs expected command to pass but it did not file name mnt resource j workspace dotnet cli rel debug prtest artifacts debian dotnet arguments verbose build build profile mnt resource j workspace dotnet cli rel debug prtest test dotnet tests bin debug testassets testprojects appwithdirectdependency project json exit code stdout stderr microsoft dotnet tools builder tests buildperformancetest incrementalskipallnodependenciesingraph twotargetgraphlarge expected command to pass but it did not file name opt code artifacts centos dotnet arguments verbose build no dependencies opt code test performance bin release incrementalskipallnodependenciesingraph twotargetgraphlarge performancetestprojects twotargetgraphlarge framework exit code stdout stderr executing users sharedadmin vsoagent work s artifacts osx dotnet verbose build build profile users sharedadmin vsoagent work s artifacts osx tests artifacts project json o users sharedadmin vsoagent work s artifacts osx tests artifacts test space bin framework telemetry is enabled microsoft dotnet tests endtoend endtoendtest testdotnetbuild expected command to pass but it did not file name users sharedadmin vsoagent work s artifacts osx dotnet arguments verbose build build profile users sharedadmin vsoagent work s artifacts osx tests artifacts project json o users sharedadmin vsoagent work s artifacts osx tests artifacts test space bin framework exit code technically 
that one is exit code but this is osx so that could be the difference build failed microsoft dotnet cli build framework buildfailureexception command failed with exit code opt code artifacts ubuntu dotnet build configuration release at microsoft dotnet cli build testtargets buildtests buildtargetcontext c at microsoft dotnet cli build framework buildcontext exectarget buildtarget target brthor piotrpmsft sergiy k adityamandaleeka joshfree ,1
128078,5048334738.0,IssuesEvent,2016-12-20 12:33:38,TASVideos/BizHawk,https://api.github.com/repos/TASVideos/BizHawk,closed,snes BSX support,Assigned-zeromus auto-migrated Core-BSNES Priority-Low Type-Enhancement,"```
i'm not sure BSX roms can be detected. if so, detect it.
we may need to put them in the gamedb.
otherwise, we need a special rom load option.
bsnes 0.87 requires specially selected load options.
```
Original issue reported on code.google.com by `zero...@zeromus.org` on 20 Apr 2014 at 11:15
",1.0,"snes BSX support - ```
i'm not sure BSX roms can be detected. if so, detect it.
we may need to put them in the gamedb.
otherwise, we need a special rom load option.
bsnes 0.87 requires specially selected load options.
```
Original issue reported on code.google.com by `zero...@zeromus.org` on 20 Apr 2014 at 11:15
",0,snes bsx support i m not sure bsx roms can be detected if so detect it we may need to put them in the gamedb otherwise we need a special rom load option bsnes requires specially selected load options original issue reported on code google com by zero zeromus org on apr at ,0
2453,25478258691.0,IssuesEvent,2022-11-25 16:43:51,gitpod-io/gitpod,https://api.github.com/repos/gitpod-io/gitpod,closed,Epic: Establish Workspace Success Rate (IDE),meta: stale operations: observability team: IDE type: epic aspect: reliability,"### Context
IDE team should establish a workspace success rate according to [internal RFC](https://www.notion.so/gitpod/Workspace-Success-Rate-ef464969f5f54b83b7df43828d1ec5a4#29aa3385045446a1b8b16101f229a68c)
### Value
Improve visibility in reliability issues for users trying to access the workspace using different IDE interfaces.
### Acceptance Criteria
- [ ] All SLIs are defined and documented
- [ ] All SLOs are defined and overview available in one place
- [ ] All alerts on SLOs are created
- [ ] Grafana dashboards are available for IDE clients and proxies
### Plan
#### Engineering
- [x] https://github.com/gitpod-io/gitpod/issues/11134
- [ ] rate limiting
- [ ] RED observability for grpc endpoint
- [ ] reporting from non-js clients, i.e. golang and java
#### SLIs/SLOs
> Connection errors mean any error to IDE services from any protocol (http, web-socket, Gitpod API, IDE backends, etc) as a ratio per client session or a throughput in proxies.
- [ ] Connection errors in a browser (supervisor-frontend/VS Code Browser)
- [ ] [SLI definition](link)
- [ ] [SLO overview](link)
- [ ] [alert](link)
- [ ] [dashboard](link)
- [ ] Connection errors in VS Code Desktop extension
- [ ] [SLI definition](link)
- [ ] [SLO overview](link)
- [ ] [alert](link)
- [ ] [dashboard](link)
- [ ] Connection errors in JB Gateway Gitpod Plugin
- [ ] [SLI definition](link)
- [ ] [SLO overview](link)
- [ ] [alert](link)
- [ ] [dashboard](link)
- [ ] Connection errors in ws-proxy
- [ ] [SLI definition](link)
- [ ] [SLO overview](link)
- [ ] [alert](link)
- [ ] [dashboard](link)
- [ ] Connection errors in IDE proxy
- [ ] [SLI definition](link)
- [ ] [SLO overview](link)
- [ ] [alert](link)
- [ ] [dashboard](link)
- [ ] Connection errors in OpenVSX proxy
- [ ] [SLI definition](link)
- [ ] [SLO overview](link)
- [x] [alert](https://github.com/gitpod-io/gitpod/blob/bcade930fd0e20a90edefee4119d9b1cf579e1fc/operations/observability/mixins/IDE/rules/components/openvsx-proxy/alerts.libsonnet#L10)
- [x] [dashboard](https://grafana.gitpod.io/d/HNOvmGpxgd/openvsx-proxy?orgId=1&refresh=5s)
- [ ] Connection errors in SSH Gateway
- [ ] [SLI definition](link)
- [ ] [SLO overview](link)
- [ ] [alert](link)
- [x] [dashboard](https://grafana.gitpod.io/d/3oan1Zr7k/ssh-gateway-overview?orgId=1&refresh=30s)",True,"Epic: Establish Workspace Success Rate (IDE) - ### Context
IDE team should establish a workspace success rate according to [internal RFC](https://www.notion.so/gitpod/Workspace-Success-Rate-ef464969f5f54b83b7df43828d1ec5a4#29aa3385045446a1b8b16101f229a68c)
### Value
Improve visibility in reliability issues for users trying to access the workspace using different IDE interfaces.
### Acceptance Criteria
- [ ] All SLIs are defined and documented
- [ ] All SLOs are defined and overview available in one place
- [ ] All alerts on SLOs are created
- [ ] Grafana dashboards are available for IDE clients and proxies
### Plan
#### Engineering
- [x] https://github.com/gitpod-io/gitpod/issues/11134
- [ ] rate limiting
- [ ] RED observability for grpc endpoint
- [ ] reporting from non-js clients, i.e. golang and java
#### SLIs/SLOs
> Connection errors mean any error to IDE services from any protocol (http, web-socket, Gitpod API, IDE backends, etc) as a ratio per client session or a throughput in proxies.
- [ ] Connection errors in a browser (supervisor-frontend/VS Code Browser)
- [ ] [SLI definition](link)
- [ ] [SLO overview](link)
- [ ] [alert](link)
- [ ] [dashboard](link)
- [ ] Connection errors in VS Code Desktop extension
- [ ] [SLI definition](link)
- [ ] [SLO overview](link)
- [ ] [alert](link)
- [ ] [dashboard](link)
- [ ] Connection errors in JB Gateway Gitpod Plugin
- [ ] [SLI definition](link)
- [ ] [SLO overview](link)
- [ ] [alert](link)
- [ ] [dashboard](link)
- [ ] Connection errors in ws-proxy
- [ ] [SLI definition](link)
- [ ] [SLO overview](link)
- [ ] [alert](link)
- [ ] [dashboard](link)
- [ ] Connection errors in IDE proxy
- [ ] [SLI definition](link)
- [ ] [SLO overview](link)
- [ ] [alert](link)
- [ ] [dashboard](link)
- [ ] Connection errors in OpenVSX proxy
- [ ] [SLI definition](link)
- [ ] [SLO overview](link)
- [x] [alert](https://github.com/gitpod-io/gitpod/blob/bcade930fd0e20a90edefee4119d9b1cf579e1fc/operations/observability/mixins/IDE/rules/components/openvsx-proxy/alerts.libsonnet#L10)
- [x] [dashboard](https://grafana.gitpod.io/d/HNOvmGpxgd/openvsx-proxy?orgId=1&refresh=5s)
- [ ] Connection errors in SSH Gateway
- [ ] [SLI definition](link)
- [ ] [SLO overview](link)
- [ ] [alert](link)
- [x] [dashboard](https://grafana.gitpod.io/d/3oan1Zr7k/ssh-gateway-overview?orgId=1&refresh=30s)",1,epic establish workspace success rate ide context ide team should establish a workspace success rate according to value improve visibility in reliability issues for users trying to access the workspace using different ide interfaces acceptance criteria all slis are defined and documented all slos are defined and overview available in one place all alerts on slos are created grafana dashboards are available for ide clients and proxies plan engineering rate limiting red observability for grpc endpoint reporting from non js clients i e golang and java slis slos connection errors mean any error to ide services from any protocol http web socket gitpod api ide backends etc as a ratio per client session or a throughput in proxies connection errors in a browser supervisor frontend vs code browser link link link link connection errors in vs code desktop extension link link link link connection errors in jb gateway gitpod plugin link link link link connection errors in ws proxy link link link link connection errors in ide proxy link link link link connection errors in openvsx proxy link link connection errors in ssh gateway link link link ,1
730,10149683327.0,IssuesEvent,2019-08-05 15:44:41,pulumi/pulumi,https://api.github.com/repos/pulumi/pulumi,closed,Implement checking progress reporting,area/tools impact/reliability impact/usability kind/enhancement,"Now that we sometimes access services over the Internet as part of resource validation (e.g., see pulumi/coconut#115, checking that AMIs exist), the validation step can consume perceptible time.
As a result, we should consider some form of progress reporting. Even just a single CLI line that says something like `Validating: `, and rewrites itself as it walks through each resource (``, ``, etc), would at least tell the user what might be taking a long time. This will also be important if we do analyzers (see pulumi/coconut#119).
In addition to this, we may want to have an ""offline"" mode. If you're on an airplane and want to validate that a resource graph is valid, it kind of stinks that you can't do that anymore.",True,"Implement checking progress reporting - Now that we sometimes access services over the Internet as part of resource validation (e.g., see pulumi/coconut#115, checking that AMIs exist), the validation step can consume perceptible time.
As a result, we should consider some form of progress reporting. Even just a single CLI line that says something like `Validating: `, and rewrites itself as it walks through each resource (``, ``, etc), would at least tell the user what might be taking a long time. This will also be important if we do analyzers (see pulumi/coconut#119).
In addition to this, we may want to have an ""offline"" mode. If you're on an airplane and want to validate that a resource graph is valid, it kind of stinks that you can't do that anymore.",1,implement checking progress reporting now that we sometimes access services over the internet as part of resource validation e g see pulumi coconut checking that amis exist the validation step can consume perceptible time as a result we should consider some form of progress reporting even just a single cli line that says something like validating and rewrites itself as it walks through each resource etc would at least tell the user what might be taking a long time this will also be important if we do analyzers see pulumi coconut in addition to this we may want to have an offline mode if you re on an airplane and want to validate that a resource graph is valid it kind of stinks that you can t do that anymore ,1
681168,23299414375.0,IssuesEvent,2022-08-07 04:37:26,AlexanderDefuria/FRC-Scouting,https://api.github.com/repos/AlexanderDefuria/FRC-Scouting,opened,Optimize Match Data Loading,Bug High Priority Back End,Currently matchdata is taking a long long time to load. Investigate.,1.0,Optimize Match Data Loading - Currently matchdata is taking a long long time to load. Investigate.,0,optimize match data loading currently matchdata is taking a long long time to load investigate ,0
1222,14085994559.0,IssuesEvent,2020-11-05 02:29:24,dotnet/roslyn,https://api.github.com/repos/dotnet/roslyn,closed,Visual Studio 2019 keeps restarting after exception in Inline Rename application,Area-IDE Bug Developer Community Tenet-Reliability help wanted,"_This issue has been moved from [a ticket on Developer Community](https://developercommunity.visualstudio.com/content/problem/1114309/visual-studio-2019-keeps-restarting.html)._
---
[regression] [worked-in:VS 2019 Preview 3]
Visual Studio 2019 preview 4 keeps restarting after renaming multiple files/locations.
My solution has 40 projects and when I rename a file, which will rename the use everywhere restarted everytime the Visual Studio.
---
### Original Comments
#### Feedback Bot on 7/15/2020, 11:50 PM:
We have directed your feedback to the appropriate engineering team for further evaluation. The team will review the feedback and notify you about the next steps.
#### Sunny Song [MSFT] on 7/16/2020, 02:53 AM:
Dear customer, Thanks for your feedback!
We are investigating this issue now, but we cannot able reproduce the issue on VS 2019 Dev16.7 preview 4
In order to help you solve the problem, could you please try the solution:
Repair your VS then reopen your solution
If this doesn’t solve your issue, please provide following information to us:
Could you please provide the screenshot about the issue
The detailed reproduce steps for your issue
Please provide us the project that you met issue if convenient
#### Sunny Song [MSFT] on 7/26/2020, 08:02 PM:
Dear customer, we haven’t gotten your reply yet, does the issue still reproduce?
#### Eddy Nakamura [MSFT] on 7/27/2020, 02:22 AM:
Hi Sunny, after the update (preview 4 to preview 5), continues restarting, but in a smaller amount.
What I saw so far: normally, when you are renaming a file, which will rename the class name as well and that class is used in many places (more than 30), it restarts. When I check VS after the restart, everything is updated.
#### Feedback Bot on 7/27/2020, 02:59 AM:
We have directed your feedback to the appropriate engineering team for further evaluation. The team will review the feedback and notify you about the next steps.
---
### Original Solutions
(no solutions)",True,"Visual Studio 2019 keeps restarting after exception in Inline Rename application - _This issue has been moved from [a ticket on Developer Community](https://developercommunity.visualstudio.com/content/problem/1114309/visual-studio-2019-keeps-restarting.html)._
---
[regression] [worked-in:VS 2019 Preview 3]
Visual Studio 2019 preview 4 keeps restarting after renaming multiple files/locations.
My solution has 40 projects and when I rename a file, which will rename the use everywhere restarted everytime the Visual Studio.
---
### Original Comments
#### Feedback Bot on 7/15/2020, 11:50 PM:
We have directed your feedback to the appropriate engineering team for further evaluation. The team will review the feedback and notify you about the next steps.
#### Sunny Song [MSFT] on 7/16/2020, 02:53 AM:
Dear customer, Thanks for your feedback!
We are investigating this issue now, but we cannot able reproduce the issue on VS 2019 Dev16.7 preview 4
In order to help you solve the problem, could you please try the solution:
Repair your VS then reopen your solution
If this doesn’t solve your issue, please provide following information to us:
Could you please provide the screenshot about the issue
The detailed reproduce steps for your issue
Please provide us the project that you met issue if convenient
#### Sunny Song [MSFT] on 7/26/2020, 08:02 PM:
Dear customer, we haven’t gotten your reply yet, does the issue still reproduce?
#### Eddy Nakamura [MSFT] on 7/27/2020, 02:22 AM:
Hi Sunny, after the update (preview 4 to preview 5), continues restarting, but in a smaller amount.
What I saw so far: normally, when you are renaming a file, which will rename the class name as well and that class is used in many places (more than 30), it restarts. When I check VS after the restart, everything is updated.
#### Feedback Bot on 7/27/2020, 02:59 AM:
We have directed your feedback to the appropriate engineering team for further evaluation. The team will review the feedback and notify you about the next steps.
---
### Original Solutions
(no solutions)",1,visual studio keeps restarting after exception in inline rename application this issue has been moved from visual studio preview keeps restarting after renaming multiple files locations my solution has projects and when i rename a file which will rename the use everywhere restarted everytime the visual studio original comments feedback bot on pm we have directed your feedback to the appropriate engineering team for further evaluation the team will review the feedback and notify you about the next steps sunny song on am dear customer thanks for your feedback we are investigating this issue now but we cannot able reproduce the issue on vs preview in order to help you solve the problem could you please try the solution repair your vs then reopen your solution if this doesn’t solve your issue please provide following information to us could you please provide the screenshot about the issue the detailed reproduce steps for your issue please provide us the project that you met issue if convenient sunny song on pm dear customer we haven’t gotten your reply yet does the issue still reproduce eddy nakamura on am hi sunny after the update preview to preview continues restarting but in a smaller amount what i saw so far normally when you are renaming a file which will rename the class name as well and that class is used in many places more than it restarts when i check vs after the restart everything is updated a target blank href feedback bot on am we have directed your feedback to the appropriate engineering team for further evaluation the team will review the feedback and notify you about the next steps feedback bot on pm thank you for sharing your feedback our teams prioritize action on product issues with broad customer impact see details at original solutions no solutions ,1
2222,24280593012.0,IssuesEvent,2022-09-28 17:02:13,vectordotdev/vector,https://api.github.com/repos/vectordotdev/vector,closed,chore(buffers): emit a better error than a panic when disk_v2 hits an error during write/flush,domain: buffers domain: reliability,"Right now, the writer will straight up panic, which is _technically_ correct and all we can do, but it does look very bad/ugly. We should go in and emit a more useful error, either by doing so directly and then killing the component gracefully, or by returning an error that can propagate back up the caller chain to be emitted somewhere else that would also stop the component.",True,"chore(buffers): emit a better error than a panic when disk_v2 hits an error during write/flush - Right now, the writer will straight up panic, which is _technically_ correct and all we can do, but it does look very bad/ugly. We should go in and emit a more useful error, either by doing so directly and then killing the component gracefully, or by returning an error that can propagate back up the caller chain to be emitted somewhere else that would also stop the component.",1,chore buffers emit a better error than a panic when disk hits an error during write flush right now the writer will straight up panic which is technically correct and all we can do but it does look very bad ugly we should go in and emit a more useful error either by doing so directly and then killing the component gracefully or by returning an error that can propagate back up the caller chain to be emitted somewhere else that would also stop the component ,1
2471,25601638005.0,IssuesEvent,2022-12-01 20:47:35,openforcefield/openff-toolkit,https://api.github.com/repos/openforcefield/openff-toolkit,opened,Atomic numbers not validated in `Atom` constructor,bug reliability,"```python3
>>> from openff.toolkit.topology.molecule import Atom
>>> Atom(-1, 0, False)
Atom(name=, atomic number=-1)
>>> Atom(-5/2, 0, False)
Atom(name=, atomic number=-2.5)
>>> import math
>>> Atom(math.pi, 0, False)
Atom(name=, atomic number=3.141592653589793)
```
with some fun side effects ...
```python3
>>> from openff.toolkit import Molecule
>>> from rdkit import Chem
>>> [
... atom.atomic_number
... for atom in Molecule.from_rdkit(Chem.MolFromSmiles(""*C"")).atoms
... ]
[0, 6, 1, 1, 1]",True,"Atomic numbers not validated in `Atom` constructor - ```python3
>>> from openff.toolkit.topology.molecule import Atom
>>> Atom(-1, 0, False)
Atom(name=, atomic number=-1)
>>> Atom(-5/2, 0, False)
Atom(name=, atomic number=-2.5)
>>> import math
>>> Atom(math.pi, 0, False)
Atom(name=, atomic number=3.141592653589793)
```
with some fun side effects ...
```python3
>>> from openff.toolkit import Molecule
>>> from rdkit import Chem
>>> [
... atom.atomic_number
... for atom in Molecule.from_rdkit(Chem.MolFromSmiles(""*C"")).atoms
... ]
[0, 6, 1, 1, 1]",1,atomic numbers not validated in atom constructor from openff toolkit topology molecule import atom atom false atom name atomic number atom false atom name atomic number import math atom math pi false atom name atomic number with some fun side effects from openff toolkit import molecule from rdkit import chem atom atomic number for atom in molecule from rdkit chem molfromsmiles c atoms ,1
21621,4729123991.0,IssuesEvent,2016-10-18 17:49:26,kubernetes/kubernetes,https://api.github.com/repos/kubernetes/kubernetes,opened,kubectl --v= is underdocumented,component/kubectl kind/documentation team/ux,"_(Comes from user)_
The most important thing for cloud support is insight, logs, verbosity. The description of the `--v` argument reads:
`--v=0: log level for V logs`
- What are the different log levels?
- What level is most useful to see all API calls?
- What is ""V""?
- What would be a useful standard reply to a customer? ""Please re-run this command with --v=9 and send me the output""
With gcloud I normally ask for the output of `--log-http --verbosity=debug`.
@kubernetes/kubectl @ymqytw @pwittrock ",1.0,"kubectl --v= is underdocumented - _(Comes from user)_
The most important thing for cloud support is insight, logs, verbosity. The description of the `--v` argument reads:
`--v=0: log level for V logs`
- What are the different log levels?
- What level is most useful to see all API calls?
- What is ""V""?
- What would be a useful standard reply to a customer? ""Please re-run this command with --v=9 and send me the output""
With gcloud I normally ask for the output of `--log-http --verbosity=debug`.
@kubernetes/kubectl @ymqytw @pwittrock ",0,kubectl v is underdocumented comes from user the most important thing for cloud support is insight logs verbosity the description of the v argument reads v log level for v logs what are the different log levels what level is most useful to see all api calls what is v what would be a useful standard reply to a customer please re run this command with v and send me the output with gcloud i normally ask for the output of log http verbosity debug kubernetes kubectl ymqytw pwittrock ,0
2166,23875475970.0,IssuesEvent,2022-09-07 18:36:28,gitpod-io/gitpod,https://api.github.com/repos/gitpod-io/gitpod,closed,Investigate limit core dump file size dump inside workspace,type: improvement team: IDE aspect: reliability,"If a program crashes and generates a core dump file it's possible that if the generated file is too big, it will force the workspace to stop, [see slack thread](https://gitpod.slack.com/archives/C03ND7D4NKX/p1658865272867739?thread_ts=1658584968.171359&cid=C03ND7D4NKX), right now the file size is set to unlimited
Here's a [related thread](https://discord.com/channels/816244985187008514/816246578594840586/1002242027121025034) in discord where a user reported this file being generated when running brew
",True,"Investigate limit core dump file size dump inside workspace - If a program crashes and generates a core dump file it's possible that if the generated file is too big, it will force the workspace to stop, [see slack thread](https://gitpod.slack.com/archives/C03ND7D4NKX/p1658865272867739?thread_ts=1658584968.171359&cid=C03ND7D4NKX), right now the file size is set to unlimited
Here's a [related thread](https://discord.com/channels/816244985187008514/816246578594840586/1002242027121025034) in discord where a user reported this file being generated when running brew
",1,investigate limit core dump file size dump inside workspace if a program crashes and generates a core dump file it s possible that if the generated file is too big it will force the workspace to stop right now the file size is set to unlimited here s a in discord where a user reported this file being generated when running brew ,1
619215,19519270344.0,IssuesEvent,2021-12-29 15:28:34,sahar-avsh/ZahraAtrvash-SWE573,https://api.github.com/repos/sahar-avsh/ZahraAtrvash-SWE573,closed,Friends page,Improvement S.O.S Priority Backend Frontend Milestone,"@sahar-avsh
- [x] Add a **Friend request button** to each user page
- [x] A **notification of this request** shall be sent to the user
- [x] If user accepts the friend request **both** of them will be _following each other_
- [x] We shall see both of them in a **friends page** of themselves(No separated follower and following page anymore)
- [x] **Link** profile editing _HTML and CSS_ page to it",1.0,"Friends page - @sahar-avsh
- [x] Add a **Friend request button** to each user page
- [x] A **notification of this request** shall be sent to the user
- [x] If user accepts the friend request **both** of them will be _following each other_
- [x] We shall see both of them in a **friends page** of themselves(No separated follower and following page anymore)
- [x] **Link** profile editing _HTML and CSS_ page to it",0,friends page sahar avsh add a friend request button to each user page a notification of this request shall be sent to the user if user accepts the friend request both of them will be following each other we shall see both of them in a friends page of themselves no separated follower and following page anymore link profile editing html and css page to it,0
751907,26264308256.0,IssuesEvent,2023-01-06 10:58:44,feast-dev/feast,https://api.github.com/repos/feast-dev/feast,opened,Get Online Features through Redis getting out of index ,kind/bug priority/p2,"## Expected Behavior
Need to get the required features according to below code
from pprint import pprint
from feast import FeatureStore
feature_vector = fs.get_online_features(
features=[
'driver_stats:conv_rate',
'driver_stats:acc_rate',
'driver_stats:avg_daily_trips'
],
entity_rows=[{""driver"": 50999}]
).to_dict()
pprint(feature_vector)
## Current Behavior
getting out of index error below you can see my error
IndexError Traceback (most recent call last)
Input In [20], in ()
1 from pprint import pprint
2 from feast import FeatureStore
----> 4 feature_vector = fs.get_online_features(
5 features=[
6 'driver_stats:conv_rate',
7 'driver_stats:acc_rate',
8 'driver_stats:avg_daily_trips'
9 ],
10 entity_rows=[{""driver"": 50999}]
11 ).to_dict()
13 pprint(feature_vector)
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/feast/usage.py:269, in log_exceptions_and_usage..decorator..wrapper(*args, **kwargs)
266 ctx.attributes.update(attrs)
268 try:
--> 269 return func(*args, **kwargs)
270 except Exception:
271 if ctx.exception:
272 # exception was already recorded
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/feast/feature_store.py:1175, in FeatureStore.get_online_features(self, features, entity_rows, full_feature_names)
1172 except KeyError as e:
1173 raise ValueError(""All entity_rows must have the same keys."") from e
-> 1175 return self._get_online_features(
1176 features=features,
1177 entity_values=columnar,
1178 full_feature_names=full_feature_names,
1179 native_entity_values=True,
1180 )
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/feast/feature_store.py:1309, in FeatureStore._get_online_features(self, features, entity_values, full_feature_names, native_entity_values)
1304 table_entity_values, idxs = self._get_unique_entities(
1305 table, join_key_values, entity_name_to_join_key_map,
1306 )
1308 # Fetch feature data for the minimum set of Entities.
-> 1309 feature_data = self._read_from_online_store(
1310 table_entity_values, provider, requested_features, table,
1311 )
1313 # Populate the result_rows with the Features from the OnlineStore inplace.
1314 self._populate_response_from_feature_data(
1315 feature_data,
1316 idxs,
(...)
1320 table,
1321 )
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/feast/feature_store.py:1517, in FeatureStore._read_from_online_store(self, entity_rows, provider, requested_features, table)
1511 entity_key_protos = [
1512 EntityKeyProto(join_keys=row.keys(), entity_values=row.values())
1513 for row in entity_rows
1514 ]
1516 # Fetch data for Entities.
-> 1517 read_rows = provider.online_read(
1518 config=self.config,
1519 table=table,
1520 entity_keys=entity_key_protos,
1521 requested_features=requested_features,
1522 )
1524 # Each row is a set of features for a given entity key. We only need to convert
1525 # the data to Protobuf once.
1526 row_ts_proto = Timestamp()
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/feast_azure_provider/azure_provider.py:90, in AzureProvider.online_read(self, config, table, entity_keys, requested_features)
88 result = []
89 if self.online_store:
---> 90 result = self.online_store.online_read(config, table, entity_keys, requested_features)
91 return result
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/feast/usage.py:280, in log_exceptions_and_usage..decorator..wrapper(*args, **kwargs)
277 ctx.traceback = _trace_to_log(traceback)
279 if traceback:
--> 280 raise exc.with_traceback(traceback)
282 raise exc
283 finally:
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/feast/usage.py:269, in log_exceptions_and_usage..decorator..wrapper(*args, **kwargs)
266 ctx.attributes.update(attrs)
268 try:
--> 269 return func(*args, **kwargs)
270 except Exception:
271 if ctx.exception:
272 # exception was already recorded
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/feast/infra/online_stores/redis.py:246, in RedisOnlineStore.online_read(self, config, table, entity_keys, requested_features)
243 online_store_config = config.online_store
244 assert isinstance(online_store_config, RedisOnlineStoreConfig)
--> 246 client = self._get_client(online_store_config)
247 feature_view = table.name
248 project = config.project
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/feast/infra/online_stores/redis.py:166, in RedisOnlineStore._get_client(self, online_store_config)
164 self._client = RedisCluster(**kwargs)
165 else:
--> 166 kwargs[""host""] = startup_nodes[0][""host""]
167 kwargs[""port""] = startup_nodes[0][""port""]
168 self._client = Redis(**kwargs)
IndexError: list index out of range
## Steps to reproduce
### Specifications
- Version:
- Platform: Azure Machine Learning Work Space (Redis)
- Subsystem: Linux
## Possible Solution
Not enough documentation on Materialize section needed detailed code and documentation with enough Time Delta scenarios. ",1.0,"Get Online Features through Redis getting out of index - ## Expected Behavior
Need to get the required features according to below code
from pprint import pprint
from feast import FeatureStore
feature_vector = fs.get_online_features(
features=[
'driver_stats:conv_rate',
'driver_stats:acc_rate',
'driver_stats:avg_daily_trips'
],
entity_rows=[{""driver"": 50999}]
).to_dict()
pprint(feature_vector)
## Current Behavior
getting out of index error below you can see my error
IndexError Traceback (most recent call last)
Input In [20], in ()
1 from pprint import pprint
2 from feast import FeatureStore
----> 4 feature_vector = fs.get_online_features(
5 features=[
6 'driver_stats:conv_rate',
7 'driver_stats:acc_rate',
8 'driver_stats:avg_daily_trips'
9 ],
10 entity_rows=[{""driver"": 50999}]
11 ).to_dict()
13 pprint(feature_vector)
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/feast/usage.py:269, in log_exceptions_and_usage..decorator..wrapper(*args, **kwargs)
266 ctx.attributes.update(attrs)
268 try:
--> 269 return func(*args, **kwargs)
270 except Exception:
271 if ctx.exception:
272 # exception was already recorded
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/feast/feature_store.py:1175, in FeatureStore.get_online_features(self, features, entity_rows, full_feature_names)
1172 except KeyError as e:
1173 raise ValueError(""All entity_rows must have the same keys."") from e
-> 1175 return self._get_online_features(
1176 features=features,
1177 entity_values=columnar,
1178 full_feature_names=full_feature_names,
1179 native_entity_values=True,
1180 )
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/feast/feature_store.py:1309, in FeatureStore._get_online_features(self, features, entity_values, full_feature_names, native_entity_values)
1304 table_entity_values, idxs = self._get_unique_entities(
1305 table, join_key_values, entity_name_to_join_key_map,
1306 )
1308 # Fetch feature data for the minimum set of Entities.
-> 1309 feature_data = self._read_from_online_store(
1310 table_entity_values, provider, requested_features, table,
1311 )
1313 # Populate the result_rows with the Features from the OnlineStore inplace.
1314 self._populate_response_from_feature_data(
1315 feature_data,
1316 idxs,
(...)
1320 table,
1321 )
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/feast/feature_store.py:1517, in FeatureStore._read_from_online_store(self, entity_rows, provider, requested_features, table)
1511 entity_key_protos = [
1512 EntityKeyProto(join_keys=row.keys(), entity_values=row.values())
1513 for row in entity_rows
1514 ]
1516 # Fetch data for Entities.
-> 1517 read_rows = provider.online_read(
1518 config=self.config,
1519 table=table,
1520 entity_keys=entity_key_protos,
1521 requested_features=requested_features,
1522 )
1524 # Each row is a set of features for a given entity key. We only need to convert
1525 # the data to Protobuf once.
1526 row_ts_proto = Timestamp()
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/feast_azure_provider/azure_provider.py:90, in AzureProvider.online_read(self, config, table, entity_keys, requested_features)
88 result = []
89 if self.online_store:
---> 90 result = self.online_store.online_read(config, table, entity_keys, requested_features)
91 return result
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/feast/usage.py:280, in log_exceptions_and_usage..decorator..wrapper(*args, **kwargs)
277 ctx.traceback = _trace_to_log(traceback)
279 if traceback:
--> 280 raise exc.with_traceback(traceback)
282 raise exc
283 finally:
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/feast/usage.py:269, in log_exceptions_and_usage..decorator..wrapper(*args, **kwargs)
266 ctx.attributes.update(attrs)
268 try:
--> 269 return func(*args, **kwargs)
270 except Exception:
271 if ctx.exception:
272 # exception was already recorded
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/feast/infra/online_stores/redis.py:246, in RedisOnlineStore.online_read(self, config, table, entity_keys, requested_features)
243 online_store_config = config.online_store
244 assert isinstance(online_store_config, RedisOnlineStoreConfig)
--> 246 client = self._get_client(online_store_config)
247 feature_view = table.name
248 project = config.project
File /anaconda/envs/azureml_py310_sdkv2/lib/python3.10/site-packages/feast/infra/online_stores/redis.py:166, in RedisOnlineStore._get_client(self, online_store_config)
164 self._client = RedisCluster(**kwargs)
165 else:
--> 166 kwargs[""host""] = startup_nodes[0][""host""]
167 kwargs[""port""] = startup_nodes[0][""port""]
168 self._client = Redis(**kwargs)
IndexError: list index out of range
## Steps to reproduce
### Specifications
- Version:
- Platform: Azure Machine Learning Work Space (Redis)
- Subsystem: Linux
## Possible Solution
Not enough documentation on Materialize section needed detailed code and documentation with enough Time Delta scenarios. ",0,get online features through redis getting out of index expected behavior need to get the required features according to below code from pprint import pprint from feast import featurestore feature vector fs get online features features driver stats conv rate driver stats acc rate driver stats avg daily trips entity rows to dict pprint feature vector current behavior getting out of index error below you can see my error indexerror traceback most recent call last input in in from pprint import pprint from feast import featurestore feature vector fs get online features features driver stats conv rate driver stats acc rate driver stats avg daily trips entity rows to dict pprint feature vector file anaconda envs azureml lib site packages feast usage py in log exceptions and usage decorator wrapper args kwargs ctx attributes update attrs try return func args kwargs except exception if ctx exception exception was already recorded file anaconda envs azureml lib site packages feast feature store py in featurestore get online features self features entity rows full feature names except keyerror as e raise valueerror all entity rows must have the same keys from e return self get online features features features entity values columnar full feature names full feature names native entity values true file anaconda envs azureml lib site packages feast feature store py in featurestore get online features self features entity values full feature names native entity values table entity values idxs self get unique entities table join key values entity name to join key map fetch feature data for the minimum set of entities feature data self read from online store table entity values provider requested features table populate the result rows with the features from the onlinestore inplace self populate response from feature data feature data idxs table file 
anaconda envs azureml lib site packages feast feature store py in featurestore read from online store self entity rows provider requested features table entity key protos entitykeyproto join keys row keys entity values row values for row in entity rows fetch data for entities read rows provider online read config self config table table entity keys entity key protos requested features requested features each row is a set of features for a given entity key we only need to convert the data to protobuf once row ts proto timestamp file anaconda envs azureml lib site packages feast azure provider azure provider py in azureprovider online read self config table entity keys requested features result if self online store result self online store online read config table entity keys requested features return result file anaconda envs azureml lib site packages feast usage py in log exceptions and usage decorator wrapper args kwargs ctx traceback trace to log traceback if traceback raise exc with traceback traceback raise exc finally file anaconda envs azureml lib site packages feast usage py in log exceptions and usage decorator wrapper args kwargs ctx attributes update attrs try return func args kwargs except exception if ctx exception exception was already recorded file anaconda envs azureml lib site packages feast infra online stores redis py in redisonlinestore online read self config table entity keys requested features online store config config online store assert isinstance online store config redisonlinestoreconfig client self get client online store config feature view table name project config project file anaconda envs azureml lib site packages feast infra online stores redis py in redisonlinestore get client self online store config self client rediscluster kwargs else kwargs startup nodes kwargs startup nodes self client redis kwargs indexerror list index out of range steps to reproduce specifications version platform azure machine learning work space redis 
subsystem linux possible solution not enough documentation on materialize section needed detailed code and documentation with enough time delta scenarios ,0
956,11798001686.0,IssuesEvent,2020-03-18 13:42:56,dotnet/roslyn,https://api.github.com/repos/dotnet/roslyn,closed,Roslyn OOP repeatedly crashing,Area-IDE Bug Tenet-Reliability,"I don't have a specific repro, but it often seems to be Find References doing this:
```
Application: ServiceHub.RoslynCodeAnalysisService32.exe
Framework Version: v4.0.30319
Description: The application requested process termination through System.Environment.FailFast(string message).
Message: System.InvalidOperationException: SqlConnection was not properly closed
Stack:
at System.Environment.FailFast(System.String, System.Exception)
at Microsoft.CodeAnalysis.FailFast.OnFatalException(System.Exception)
at Microsoft.CodeAnalysis.ErrorReporting.WatsonReporter.ReportFatal(System.Exception)
at Microsoft.CodeAnalysis.ErrorReporting.FatalError.Report(System.Exception, System.Action`1)
at Microsoft.CodeAnalysis.SQLite.Interop.SqlConnection.Finalize()
```
",True,"Roslyn OOP repeatedly crashing - I don't have a specific repro, but it often seems to be Find References doing this:
```
Application: ServiceHub.RoslynCodeAnalysisService32.exe
Framework Version: v4.0.30319
Description: The application requested process termination through System.Environment.FailFast(string message).
Message: System.InvalidOperationException: SqlConnection was not properly closed
Stack:
at System.Environment.FailFast(System.String, System.Exception)
at Microsoft.CodeAnalysis.FailFast.OnFatalException(System.Exception)
at Microsoft.CodeAnalysis.ErrorReporting.WatsonReporter.ReportFatal(System.Exception)
at Microsoft.CodeAnalysis.ErrorReporting.FatalError.Report(System.Exception, System.Action`1)
at Microsoft.CodeAnalysis.SQLite.Interop.SqlConnection.Finalize()
```
",1,roslyn oop repeatedly crashing i don t have a specific repro but it often seems to be find references doing this application servicehub exe framework version description the application requested process termination through system environment failfast string message message system invalidoperationexception sqlconnection was not properly closed stack at system environment failfast system string system exception at microsoft codeanalysis failfast onfatalexception system exception at microsoft codeanalysis errorreporting watsonreporter reportfatal system exception at microsoft codeanalysis errorreporting fatalerror report system exception system action at microsoft codeanalysis sqlite interop sqlconnection finalize ,1
1678,18451170001.0,IssuesEvent,2021-10-15 10:50:08,ipfs-shipyard/nft.storage,https://api.github.com/repos/ipfs-shipyard/nft.storage,closed,Migrate NFT.Storage to Postgres,kind/enhancement P0 reliability-performance-sprint,"The base PR for this migration is https://github.com/ipfs-shipyard/nft.storage/pull/263
## Tasks
- [x] SQL Schema
- [x] Endpoints using postgres client
- [x] normalize cidv1 https://github.com/web3-storage/web3-schema/pull/4
- [x] Add docs and scripts for local database setup to run tests https://github.com/ipfs-shipyard/nft.storage/pull/425
- [x] Add deals schema and logic https://github.com/ipfs-shipyard/nft.storage/pull/418
- [x] #454
- [x] #459
- [x] #461 https://github.com/ipfs-shipyard/nft.storage/pull/496
- \+ setup foreign table materialized views refresh
- [x] #473 https://github.com/ipfs-shipyard/nft.storage/pull/491
- [x] metrics will query postgres directly from `/metrics` https://github.com/ipfs-shipyard/nft.storage/pull/495
- [x] handle DB errors properly, define what to return and what to catch and send to sentry. https://github.com/ipfs-shipyard/nft.storage/pull/510
- [x] pinning services api needs to send errors to sentry, because there we don't use the normal throw error flow. https://github.com/ipfs-shipyard/nft.storage/pull/512
- [x] add uploads deleted_at, update select to account for deleted_at and change the `ON CONFLICT` clause in `upload_fn` https://github.com/ipfs-shipyard/nft.storage/pull/551
- [x] Don't delete auth_keys; use `deleted_at`, because uploads need auth_key.id https://github.com/ipfs-shipyard/nft.storage/pull/539
- [x] Define max number of items returned in the list endpoints (nft list and pins list) https://github.com/ipfs-shipyard/nft.storage/pull/493
- [x] Improve the db ts types setup, where we actually set up with docker, run the schema and get the types.
- [x] #374 https://github.com/ipfs-shipyard/nft.storage/pull/485
- [x] Add new enum value for `service_type` `IpfsCluster2` https://github.com/ipfs-shipyard/nft.storage/pull/494 / https://github.com/ipfs-shipyard/nft.storage/pull/509
- [x] Move v1 endpoints to root https://github.com/ipfs-shipyard/nft.storage/pull/534
## Diagram
https://dbdiagram.io/d/615d7356940c4c4eec87e2be",True,"Migrate NFT.Storage to Postgres - The base PR for this migration is https://github.com/ipfs-shipyard/nft.storage/pull/263
## Tasks
- [x] SQL Schema
- [x] Endpoints using postgres client
- [x] normalize cidv1 https://github.com/web3-storage/web3-schema/pull/4
- [x] Add docs and scripts for local database setup to run tests https://github.com/ipfs-shipyard/nft.storage/pull/425
- [x] Add deals schema and logic https://github.com/ipfs-shipyard/nft.storage/pull/418
- [x] #454
- [x] #459
- [x] #461 https://github.com/ipfs-shipyard/nft.storage/pull/496
- \+ setup foreign table materialized views refresh
- [x] #473 https://github.com/ipfs-shipyard/nft.storage/pull/491
- [x] metrics will query postgres directly from `/metrics` https://github.com/ipfs-shipyard/nft.storage/pull/495
- [x] handle DB errors properly, define what to return and what to catch and send to sentry. https://github.com/ipfs-shipyard/nft.storage/pull/510
- [x] pinning services api needs to send errors to sentry, because there we don't use the normal throw error flow. https://github.com/ipfs-shipyard/nft.storage/pull/512
- [x] add uploads deleted_at, update select to account for deleted_at and change the `ON CONFLICT` clause in `upload_fn` https://github.com/ipfs-shipyard/nft.storage/pull/551
- [x] Don't delete auth_keys; use `deleted_at`, because uploads need auth_key.id https://github.com/ipfs-shipyard/nft.storage/pull/539
- [x] Define max number of items returned in the list endpoints (nft list and pins list) https://github.com/ipfs-shipyard/nft.storage/pull/493
- [x] Improve the db ts types setup, where we actually set up with docker, run the schema and get the types.
- [x] #374 https://github.com/ipfs-shipyard/nft.storage/pull/485
- [x] Add new enum value for `service_type` `IpfsCluster2` https://github.com/ipfs-shipyard/nft.storage/pull/494 / https://github.com/ipfs-shipyard/nft.storage/pull/509
- [x] Move v1 endpoints to root https://github.com/ipfs-shipyard/nft.storage/pull/534
## Diagram
https://dbdiagram.io/d/615d7356940c4c4eec87e2be",1,migrate nft storage to postgres the base pr for this migration is tasks sql schema endpoints using postgres client normalize add docs and scripts for local database setup to run tests add deals schema and logic setup foreign table materialized views refresh metrics will query postgres directly from metrics handle db errors properly define what to return and what to catch and send to sentry pinning services api needs to send error to sentry because there we dont use the normal throw error flow add uploads deleted at update select to account for deleted at and change the on conflict clause in upload fn dont delete auth keys use deleted at because uploads needs auth key id define max number of item returned in the list endpoints nft list and pins list improve the db ts types setup where we actually setup with docker run the schema and get the types add new enum value for service type move endpoints to root diagram img width alt screenshot at src ,1
167655,13038719960.0,IssuesEvent,2020-07-28 15:37:05,mozilla-mobile/firefox-ios,https://api.github.com/repos/mozilla-mobile/firefox-ios,opened,[XCUITest] New test for the search button,Test-Automation eng:ui-test,"There is a new search button added to the bottom bar. It would be nice to have a test to check its functionality.
New UI:

",2.0,"[XCUITest] New test for the search button - There is a new search button added to the bottom bar. It would be nice to have a test to check its functionality.
New UI:

",0, new test for the search button there is a new search button added to the bottom bar it would be nice to have a test to check its functionality new ui ,0
310376,26713169097.0,IssuesEvent,2023-01-28 06:01:06,andrew-johnson-4/L1IR,https://api.github.com/repos/andrew-johnson-4/L1IR,closed,Add optimizations for unary combinators,JIT reference tests optimization,"`+ - * / % ^ == != < <= > >= ...` should all be recognized by the JIT sweep, even in the reference implementation. JIT in the reference implementation is a compiler phase that can be skipped for equivalence testing.
for example
```
let $""+""(x, y) = literal x y;
let $""-""(x, y) = match (x, y) {
(literal xs, literal) => xs,
(literal '0' xs, literal '0' ys) => xs - ys,
};
```
These function signatures should each get a JIT trampoline for the case of unary input.
* `+` ✓
* `-` ✓
* `*` ✓
* `/` ✓
* `%` ✓
* `^` (pow, not xor) ✓ (no single instruction)
* `==` ✓
* `!=` ✓
* `<` ✓
* `<=` ✓
* `>` ✓
* `>=` ✓
* `pos`
* `neg`
* `abs`
* `not`
* `&&`
* `||`
* `xor`
* `lshift`
* `rshift`
* `Binary as Unary`
* `Decimal as Unary`
* `Hexadecimal as Unary`
* `Unary as Binary`
* `Unary as Decimal`
* `Unary as Hexadecimal`",1.0,"Add optimizations for unary combinators - `+ - * / % ^ == != < <= > >= ...` should all be recognized by the JIT sweep, even in the reference implementation. JIT in the reference implementation is a compiler phase that can be skipped for equivalence testing.
for example
```
let $""+""(x, y) = literal x y;
let $""-""(x, y) = match (x, y) {
(literal xs, literal) => xs,
(literal '0' xs, literal '0' ys) => xs - ys,
};
```
These function signatures should each get a JIT trampoline for the case of unary input.
* `+` ✓
* `-` ✓
* `*` ✓
* `/` ✓
* `%` ✓
* `^` (pow, not xor) ✓ (no single instruction)
* `==` ✓
* `!=` ✓
* `<` ✓
* `<=` ✓
* `>` ✓
* `>=` ✓
* `pos`
* `neg`
* `abs`
* `not`
* `&&`
* `||`
* `xor`
* `lshift`
* `rshift`
* `Binary as Unary`
* `Decimal as Unary`
* `Hexadecimal as Unary`
* `Unary as Binary`
* `Unary as Decimal`
* `Unary as Hexadecimal`",0,add optimizations for unary combinators should all be recognized by the jit sweep even in the reference implementation jit in the reference implementation is a compiler phase that can be skipped for equivalence testing for example let x y literal x y let x y match x y literal xs literal xs literal xs literal ys xs ys these function signatures should each get a jit trampoline for the case of unary input ✓ ✓ ✓ ✓ ✓ pow not xor ✓ no single instruction ✓ ✓ ✓ ✓ ✓ ✓ pos neg abs not xor lshift rshift binary as unary decimal as unary hexadecimal as unary unary as binary unary as decimal unary as hexadecimal ,0
1539,16832972781.0,IssuesEvent,2021-06-18 08:12:53,emmamei/cdkey,https://api.github.com/repos/emmamei/cdkey,closed,"Variable ""name"" needs more specificity",reliabilityfix,"The code uses `name` for a variety of purposes, including:
- User name
- Configuration setting name
- Menu selection name
These need to be distinguished and separated for reliability and maintainability.",True,"Variable ""name"" needs more specificity - The code uses `name` for a variety of purposes, including:
- User name
- Configuration setting name
- Menu selection name
These need to be distinguished and separated for reliability and maintainability.",1,variable name needs more specificity the code uses name for a variety of purposes including user name configuration setting name menu selection name these need to be distinguished and separated for reliability and maintainability ,1
48896,6123129688.0,IssuesEvent,2017-06-23 03:03:53,HelgLeshiy/SimpleLab,https://api.github.com/repos/HelgLeshiy/SimpleLab,closed,Create a font renderer,3 - Doing... designing graphics,"We should move away from files that store fonts as PNG and instead render them to a Surface when the application loads. This would let users choose their own fonts.",1.0,"Create a font renderer - We should move away from files that store fonts as PNG and instead render them to a Surface when the application loads. This would let users choose their own fonts.",0,create a font renderer we should move away from files that store fonts as png and instead render them to a surface when the application loads this would let users choose their own fonts ,0
146,4274437997.0,IssuesEvent,2016-07-13 20:31:15,dotnet/corefx,https://api.github.com/repos/dotnet/corefx,opened,Deadlock in WinHttpResponseStream,System.Net tenet-reliability,"While investigating #9785, I saw one System.Net.Http test run deadlock in some cancellation token registration management code.
One thread was unregistering from a cancellation token:
```
00 ntdll!ZwDelayExecution
01 KERNELBASE!SleepEx
02 CoreCLR!Thread::UserSleep
03 CoreCLR!ThreadNative::Sleep
04 System_Private_CoreLib_ni!System.Threading.Thread.Sleep(Int32)
05 System_Private_CoreLib_ni!System.Threading.SpinWait.SpinOnce()
06 System_Private_CoreLib_ni!System.Threading.CancellationTokenSource.WaitForCallbackToComplete(System.Threading.CancellationCallbackInfo)
07 system_net_http!System.Net.Http.WinHttpRequestState.DisposeCtrReadFromResponseStream()
08 system_net_http!System.Net.Http.WinHttpRequestCallback.OnRequestReadComplete(System.Net.Http.WinHttpRequestState, UInt32)
09 system_net_http!System.Net.Http.WinHttpRequestCallback.RequestCallback(IntPtr, System.Net.Http.WinHttpRequestState, UInt32, IntPtr, UInt32)
0a system_net_http!System.Net.Http.WinHttpRequestCallback.WinHttpCallback(IntPtr, IntPtr, UInt32, IntPtr, UInt32)
0b system_console!DomainBoundILStubClass.IL_STUB_ReversePInvoke(Int64, Int64, Int32, Int64, Int32)
0c CoreCLR!UMThunkStub
0d winhttp!HTTP_REQUEST_HANDLE_OBJECT::_SafeAppCallback
0e winhttp!HTTP_REQUEST_HANDLE_OBJECT::_ControlledAppCallback
0f winhttp!HTTP_REQUEST_HANDLE_OBJECT::IndicateCompletionStatusCommon
10 winhttp!HTTP_REQUEST_HANDLE_OBJECT::IndicateCompletionStatusInline
11 winhttp!WinHttpReadData
12 system_console!DomainBoundILStubClass.IL_STUB_PInvoke(SafeWinHttpHandle, IntPtr, UInt32, IntPtr)
13 system_net_http!System.Net.Http.WinHttpResponseStream+<>c__DisplayClass17_0.b__0(System.Threading.Tasks.Task`1)
14 System_Private_CoreLib_ni!System.Threading.Tasks.Task.Execute()
15 System_Private_CoreLib_ni!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
16 System_Private_CoreLib_ni!System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef)
17 System_Private_CoreLib_ni!System.Threading.Tasks.Task.ExecuteEntry(Boolean)
18 System_Private_CoreLib_ni!System.Threading.ThreadPoolWorkQueue.Dispatch()
19 CoreCLR!CallDescrWorkerInternal
1a CoreCLR!MethodDescCallSite::CallTargetWorker
1b CoreCLR!MethodDescCallSite::Call_RetBool
1c CoreCLR!QueueUserWorkItemManagedCallback
1d CoreCLR!ManagedThreadBase_DispatchInner
1e CoreCLR!ManagedThreadBase_DispatchMiddle
1f CoreCLR!ManagedThreadBase_DispatchOuter
20 CoreCLR!ManagedThreadBase_FullTransitionWithAD
21 CoreCLR!ManagedThreadBase::ThreadPool
22 CoreCLR!ManagedPerAppDomainTPCount::DispatchWorkItem
23 CoreCLR!ThreadpoolMgr::ExecuteWorkRequest
24 CoreCLR!ThreadpoolMgr::WorkerThreadStart
25 CoreCLR!Thread::intermediateThreadProc
26 KERNEL32!BaseThreadInitThunk
27 ntdll!RtlUserThreadStart
```
So, `CancellationTokenSource.WaitForCallbackToComplete` is waiting for an already-executing registered callback. This appears to be running on the following stack:
```
00 ntdll!ZwWaitForMultipleObjects
01 KERNELBASE!WaitForMultipleObjectsEx
02 CoreCLR!WaitForMultipleObjectsEx_SO_TOLERANT
03 CoreCLR!Thread::DoAppropriateAptStateWait
04 CoreCLR!Thread::DoAppropriateWaitWorker
05 CoreCLR!Thread::DoAppropriateWait
06 CoreCLR!CLREventBase::WaitEx
07 CoreCLR!CLREventBase::Wait
08 CoreCLR!AwareLock::EnterEpilogHelper
09 CoreCLR!AwareLock::EnterEpilog
0a CoreCLR!SyncBlock::EnterMonitor
0b CoreCLR!ObjHeader::EnterObjMonitor
0c CoreCLR!Object::EnterObjMonitor
0d CoreCLR!JITutil_MonEnterWorker
0e system_net_http!System.Net.Http.WinHttpResponseStream.CancelPendingResponseStreamReadOperation()
0f system_net_http!System.Net.Http.WinHttpResponseStream+<>c.b__17_1(System.Object)
10 System_Private_CoreLib_ni!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
11 System_Private_CoreLib_ni!System.Threading.CancellationTokenSource.ExecuteCallbackHandlers(Boolean)
12 System_Private_CoreLib_ni!System.Threading.CancellationTokenSource.Cancel()
13 system_net_http_functional_tests!System.Net.Http.Functional.Tests.ResponseStreamTest+d__5.MoveNext()
14 System_Private_CoreLib_ni!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
15 System_Private_CoreLib_ni!System.Runtime.CompilerServices.AsyncMethodBuilderCore+MoveNextRunner.RunWithDefaultContext()
16 xunit_execution_dotnet!Xunit.Sdk.AsyncTestSyncContext+<>c__DisplayClass7_0.b__1(System.Object)
17 xunit_execution_dotnet!Xunit.Sdk.MaxConcurrencySyncContext.RunOnSyncContext(System.Threading.SendOrPostCallback, System.Object)
18 System_Private_CoreLib_ni!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
19 xunit_execution_dotnet!Xunit.Sdk.MaxConcurrencySyncContext.WorkerThreadProc()
1a xunit_execution_dotnet!Xunit.Sdk.XunitWorkerThread+<>c.b__5_0(System.Object)
1b System_Private_CoreLib_ni!System.Threading.Tasks.Task.Execute()
1c System_Private_CoreLib_ni!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
1d System_Private_CoreLib_ni!System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef)
1e System_Private_CoreLib_ni!System.Threading.Tasks.Task.ExecuteEntry(Boolean)
1f System_Private_CoreLib_ni!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
20 CoreCLR!CallDescrWorkerInternal
21 CoreCLR!MethodDescCallSite::CallTargetWorker
22 CoreCLR!MethodDescCallSite::Call
23 CoreCLR!ThreadNative::KickOffThread_Worker
24 CoreCLR!ManagedThreadBase_DispatchInner
25 CoreCLR!ManagedThreadBase_DispatchMiddle
26 CoreCLR!ManagedThreadBase_DispatchOuter
27 CoreCLR!ManagedThreadBase_FullTransitionWithAD
28 CoreCLR!ManagedThreadBase::KickOff
29 CoreCLR!ThreadNative::KickOffThread
2a CoreCLR!Thread::intermediateThreadProc
2b KERNEL32!BaseThreadInitThunk
2c ntdll!RtlUserThreadStart
```
`WinHttpResponseStream.CancelPendingResponseStreamReadOperation` is trying to acquire a lock, which is already held by the first thread - the one waiting for the cancellation callback to complete. So, classic deadlock.
I can't reproduce this reliably, but I found it while running System.Net.Http.Functional.Tests under WinDBG, which may alter timing enough to make this more likely to repro.
",True,"Deadlock in WinHttpResponseStream - While investigating #9785, I saw one System.Net.Http test run deadlock in some cancellation token registration management code.
One thread was unregistering from a cancellation token:
```
00 ntdll!ZwDelayExecution
01 KERNELBASE!SleepEx
02 CoreCLR!Thread::UserSleep
03 CoreCLR!ThreadNative::Sleep
04 System_Private_CoreLib_ni!System.Threading.Thread.Sleep(Int32)
05 System_Private_CoreLib_ni!System.Threading.SpinWait.SpinOnce()
06 System_Private_CoreLib_ni!System.Threading.CancellationTokenSource.WaitForCallbackToComplete(System.Threading.CancellationCallbackInfo)
07 system_net_http!System.Net.Http.WinHttpRequestState.DisposeCtrReadFromResponseStream()
08 system_net_http!System.Net.Http.WinHttpRequestCallback.OnRequestReadComplete(System.Net.Http.WinHttpRequestState, UInt32)
09 system_net_http!System.Net.Http.WinHttpRequestCallback.RequestCallback(IntPtr, System.Net.Http.WinHttpRequestState, UInt32, IntPtr, UInt32)
0a system_net_http!System.Net.Http.WinHttpRequestCallback.WinHttpCallback(IntPtr, IntPtr, UInt32, IntPtr, UInt32)
0b system_console!DomainBoundILStubClass.IL_STUB_ReversePInvoke(Int64, Int64, Int32, Int64, Int32)
0c CoreCLR!UMThunkStub
0d winhttp!HTTP_REQUEST_HANDLE_OBJECT::_SafeAppCallback
0e winhttp!HTTP_REQUEST_HANDLE_OBJECT::_ControlledAppCallback
0f winhttp!HTTP_REQUEST_HANDLE_OBJECT::IndicateCompletionStatusCommon
10 winhttp!HTTP_REQUEST_HANDLE_OBJECT::IndicateCompletionStatusInline
11 winhttp!WinHttpReadData
12 system_console!DomainBoundILStubClass.IL_STUB_PInvoke(SafeWinHttpHandle, IntPtr, UInt32, IntPtr)
13 system_net_http!System.Net.Http.WinHttpResponseStream+<>c__DisplayClass17_0.b__0(System.Threading.Tasks.Task`1)
14 System_Private_CoreLib_ni!System.Threading.Tasks.Task.Execute()
15 System_Private_CoreLib_ni!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
16 System_Private_CoreLib_ni!System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef)
17 System_Private_CoreLib_ni!System.Threading.Tasks.Task.ExecuteEntry(Boolean)
18 System_Private_CoreLib_ni!System.Threading.ThreadPoolWorkQueue.Dispatch()
19 CoreCLR!CallDescrWorkerInternal
1a CoreCLR!MethodDescCallSite::CallTargetWorker
1b CoreCLR!MethodDescCallSite::Call_RetBool
1c CoreCLR!QueueUserWorkItemManagedCallback
1d CoreCLR!ManagedThreadBase_DispatchInner
1e CoreCLR!ManagedThreadBase_DispatchMiddle
1f CoreCLR!ManagedThreadBase_DispatchOuter
20 CoreCLR!ManagedThreadBase_FullTransitionWithAD
21 CoreCLR!ManagedThreadBase::ThreadPool
22 CoreCLR!ManagedPerAppDomainTPCount::DispatchWorkItem
23 CoreCLR!ThreadpoolMgr::ExecuteWorkRequest
24 CoreCLR!ThreadpoolMgr::WorkerThreadStart
25 CoreCLR!Thread::intermediateThreadProc
26 KERNEL32!BaseThreadInitThunk
27 ntdll!RtlUserThreadStart
```
So, `CancellationTokenSource.WaitForCallbackToComplete` is waiting for an already-executing registered callback. This appears to be running on the following stack:
```
00 ntdll!ZwWaitForMultipleObjects
01 KERNELBASE!WaitForMultipleObjectsEx
02 CoreCLR!WaitForMultipleObjectsEx_SO_TOLERANT
03 CoreCLR!Thread::DoAppropriateAptStateWait
04 CoreCLR!Thread::DoAppropriateWaitWorker
05 CoreCLR!Thread::DoAppropriateWait
06 CoreCLR!CLREventBase::WaitEx
07 CoreCLR!CLREventBase::Wait
08 CoreCLR!AwareLock::EnterEpilogHelper
09 CoreCLR!AwareLock::EnterEpilog
0a CoreCLR!SyncBlock::EnterMonitor
0b CoreCLR!ObjHeader::EnterObjMonitor
0c CoreCLR!Object::EnterObjMonitor
0d CoreCLR!JITutil_MonEnterWorker
0e system_net_http!System.Net.Http.WinHttpResponseStream.CancelPendingResponseStreamReadOperation()
0f system_net_http!System.Net.Http.WinHttpResponseStream+<>c.b__17_1(System.Object)
10 System_Private_CoreLib_ni!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
11 System_Private_CoreLib_ni!System.Threading.CancellationTokenSource.ExecuteCallbackHandlers(Boolean)
12 System_Private_CoreLib_ni!System.Threading.CancellationTokenSource.Cancel()
13 system_net_http_functional_tests!System.Net.Http.Functional.Tests.ResponseStreamTest+d__5.MoveNext()
14 System_Private_CoreLib_ni!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
15 System_Private_CoreLib_ni!System.Runtime.CompilerServices.AsyncMethodBuilderCore+MoveNextRunner.RunWithDefaultContext()
16 xunit_execution_dotnet!Xunit.Sdk.AsyncTestSyncContext+<>c__DisplayClass7_0.b__1(System.Object)
17 xunit_execution_dotnet!Xunit.Sdk.MaxConcurrencySyncContext.RunOnSyncContext(System.Threading.SendOrPostCallback, System.Object)
18 System_Private_CoreLib_ni!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
19 xunit_execution_dotnet!Xunit.Sdk.MaxConcurrencySyncContext.WorkerThreadProc()
1a xunit_execution_dotnet!Xunit.Sdk.XunitWorkerThread+<>c.b__5_0(System.Object)
1b System_Private_CoreLib_ni!System.Threading.Tasks.Task.Execute()
1c System_Private_CoreLib_ni!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
1d System_Private_CoreLib_ni!System.Threading.Tasks.Task.ExecuteWithThreadLocal(System.Threading.Tasks.Task ByRef)
1e System_Private_CoreLib_ni!System.Threading.Tasks.Task.ExecuteEntry(Boolean)
1f System_Private_CoreLib_ni!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
20 CoreCLR!CallDescrWorkerInternal
21 CoreCLR!MethodDescCallSite::CallTargetWorker
22 CoreCLR!MethodDescCallSite::Call
23 CoreCLR!ThreadNative::KickOffThread_Worker
24 CoreCLR!ManagedThreadBase_DispatchInner
25 CoreCLR!ManagedThreadBase_DispatchMiddle
26 CoreCLR!ManagedThreadBase_DispatchOuter
27 CoreCLR!ManagedThreadBase_FullTransitionWithAD
28 CoreCLR!ManagedThreadBase::KickOff
29 CoreCLR!ThreadNative::KickOffThread
2a CoreCLR!Thread::intermediateThreadProc
2b KERNEL32!BaseThreadInitThunk
2c ntdll!RtlUserThreadStart
```
`WinHttpResponseStream.CancelPendingResponseStreamReadOperation` is trying to acquire a lock, which is already held by the first thread - the one waiting for the cancellation callback to complete. So, classic deadlock.
I can't reproduce this reliably, but I found it while running System.Net.Http.Functional.Tests under WinDBG, which may alter timing enough to make this more likely to repro.
",1,deadlock in winhttpresponsestream while investigating i saw one system net http test run deadlock in some cancellation token registration management code one thread was unregistering from a cancellation token ntdll zwdelayexecution kernelbase sleepex coreclr thread usersleep coreclr threadnative sleep system private corelib ni system threading thread sleep system private corelib ni system threading spinwait spinonce system private corelib ni system threading cancellationtokensource waitforcallbacktocomplete system threading cancellationcallbackinfo system net http system net http winhttprequeststate disposectrreadfromresponsestream system net http system net http winhttprequestcallback onrequestreadcomplete system net http winhttprequeststate system net http system net http winhttprequestcallback requestcallback intptr system net http winhttprequeststate intptr system net http system net http winhttprequestcallback winhttpcallback intptr intptr intptr system console domainboundilstubclass il stub reversepinvoke coreclr umthunkstub winhttp http request handle object safeappcallback winhttp http request handle object controlledappcallback winhttp http request handle object indicatecompletionstatuscommon winhttp http request handle object indicatecompletionstatusinline winhttp winhttpreaddata system console domainboundilstubclass il stub pinvoke safewinhttphandle intptr intptr system net http system net http winhttpresponsestream c b system threading tasks task system private corelib ni system threading tasks task execute system private corelib ni system threading executioncontext run system threading executioncontext system threading contextcallback system object system private corelib ni system threading tasks task executewiththreadlocal system threading tasks task byref system private corelib ni system threading tasks task executeentry boolean system private corelib ni system threading threadpoolworkqueue dispatch coreclr calldescrworkerinternal coreclr 
methoddesccallsite calltargetworker coreclr methoddesccallsite call retbool coreclr queueuserworkitemmanagedcallback coreclr managedthreadbase dispatchinner coreclr managedthreadbase dispatchmiddle coreclr managedthreadbase dispatchouter coreclr managedthreadbase fulltransitionwithad coreclr managedthreadbase threadpool coreclr managedperappdomaintpcount dispatchworkitem coreclr threadpoolmgr executeworkrequest coreclr threadpoolmgr workerthreadstart coreclr thread intermediatethreadproc basethreadinitthunk ntdll rtluserthreadstart so cancellationtokensource waitforcallbacktocomplete is waiting for an already executing registered callback this appears to be running on the following stack ntdll zwwaitformultipleobjects kernelbase waitformultipleobjectsex coreclr waitformultipleobjectsex so tolerant coreclr thread doappropriateaptstatewait coreclr thread doappropriatewaitworker coreclr thread doappropriatewait coreclr clreventbase waitex coreclr clreventbase wait coreclr awarelock enterepiloghelper coreclr awarelock enterepilog coreclr syncblock entermonitor coreclr objheader enterobjmonitor coreclr object enterobjmonitor coreclr jitutil monenterworker system net http system net http winhttpresponsestream cancelpendingresponsestreamreadoperation system net http system net http winhttpresponsestream c b system object system private corelib ni system threading executioncontext run system threading executioncontext system threading contextcallback system object system private corelib ni system threading cancellationtokensource executecallbackhandlers boolean system private corelib ni system threading cancellationtokensource cancel system net http functional tests system net http functional tests responsestreamtest d movenext system private corelib ni system threading executioncontext run system threading executioncontext system threading contextcallback system object system private corelib ni system runtime compilerservices asyncmethodbuildercore movenextrunner 
runwithdefaultcontext xunit execution dotnet xunit sdk asynctestsynccontext c b system object xunit execution dotnet xunit sdk maxconcurrencysynccontext runonsynccontext system threading sendorpostcallback system object system private corelib ni system threading executioncontext run system threading executioncontext system threading contextcallback system object xunit execution dotnet xunit sdk maxconcurrencysynccontext workerthreadproc xunit execution dotnet xunit sdk xunitworkerthread c b system object system private corelib ni system threading tasks task execute system private corelib ni system threading executioncontext run system threading executioncontext system threading contextcallback system object system private corelib ni system threading tasks task executewiththreadlocal system threading tasks task byref system private corelib ni system threading tasks task executeentry boolean system private corelib ni system threading executioncontext run system threading executioncontext system threading contextcallback system object coreclr calldescrworkerinternal coreclr methoddesccallsite calltargetworker coreclr methoddesccallsite call coreclr threadnative kickoffthread worker coreclr managedthreadbase dispatchinner coreclr managedthreadbase dispatchmiddle coreclr managedthreadbase dispatchouter coreclr managedthreadbase fulltransitionwithad coreclr managedthreadbase kickoff coreclr threadnative kickoffthread coreclr thread intermediatethreadproc basethreadinitthunk ntdll rtluserthreadstart winhttpresponsestream cancelpendingresponsestreamreadoperation is trying to acquire a lock which is already held by the first thread the one waiting for the cancellation callback to complete so classic deadlock i can t reproduce this reliably but i found it while running system net http functional tests under windbg which may alter timing enough to make this more likely to repro ,1
124,4108655903.0,IssuesEvent,2016-06-06 16:49:46,dotnet/coreclr,https://api.github.com/repos/dotnet/coreclr,opened,SIGABRT_ASSERT_libcoreclr.so!VirtualFree,bug GC reliability,"**The notes in this bug refer to the Ubuntu.14.04 dump [rc3-24131-01_001F](https://rapreqs.blob.core.windows.net/sschaab/BodyPart_826333fd-2d98-42db-8555-4d56d6c189ff?sv=2015-04-05&sr=b&sig=4Q1DOMWTz8yM8DZdYs6mJRCfNYrczUJInJW8C9K%2FDY0%3D&st=2016-06-03T22%3A02%3A31Z&se=2017-06-03T22%3A02%3A31Z&sp=r). Other dumps are available if needed.**
**This issue seems likely to be related to issue [#5188](https://github.com/dotnet/coreclr/issues/5188) however with a different failure pattern on CHK builds where we assert on VirtualFree**
STOP_REASON:
SIGABRT
FAULT_SYMBOL:
libcoreclr.so!VirtualFree
FAILURE_HASH:
SIGABRT_libcoreclr.so!VirtualFree
FAULT_STACK:
libc.so.6!__GI_raise
libc.so.6!__GI_abort
libcoreclr.so!UNKNOWN
libcoreclr.so!sigtrap_handler(int, siginfo_t*, void*)
libclrjit.so!sigtrap_handler(int, siginfo_t*, void*)
libpthread.so.0!???
libcoreclr.so!DBG_DebugBreak
libcoreclr.so!DebugBreak
libcoreclr.so!VirtualFree
libcoreclr.so!EEVirtualFree(void*, unsigned long, unsigned int)
libcoreclr.so!CExecutionEngine::ClrVirtualFree(void*, unsigned long, unsigned int)
libcoreclr.so!non-virtual thunk to CExecutionEngine::ClrVirtualFree(void*, unsigned long, unsigned int)
libcoreclr.so!ClrVirtualFree(void*, unsigned long, unsigned int)
libcoreclr.so!GCToOSInterface::VirtualDecommit(void*, unsigned long)
libcoreclr.so!WKS::gc_heap::decommit_heap_segment_pages(WKS::heap_segment*, unsigned long)
libcoreclr.so!WKS::gc_heap::decommit_ephemeral_segment_pages()
libcoreclr.so!WKS::gc_heap::gc1()
libcoreclr.so!WKS::gc_heap::garbage_collect(int)
libcoreclr.so!WKS::GCHeap::GarbageCollectGeneration(unsigned int, WKS::gc_reason)
libcoreclr.so!WKS::gc_heap::trigger_full_compact_gc(WKS::gc_reason, oom_reason*)
libcoreclr.so!WKS::gc_heap::allocate_small(int, unsigned long, alloc_context*, int)
libcoreclr.so!WKS::gc_heap::try_allocate_more_space(alloc_context*, unsigned long, int)
libcoreclr.so!WKS::gc_heap::allocate_more_space(alloc_context*, unsigned long, int)
libcoreclr.so!WKS::gc_heap::allocate(unsigned long, alloc_context*)
libcoreclr.so!WKS::GCHeap::Alloc(alloc_context*, unsigned long, unsigned int)
libcoreclr.so!Alloc(unsigned long, int, int)
libcoreclr.so!FastAllocatePrimitiveArray(MethodTable*, unsigned int, int)
libcoreclr.so!JIT_NewArr1(CORINFO_CLASS_STRUCT_*, long)
libcoreclr.so!JIT_NewArr1VC_MP_FastPortable(CORINFO_CLASS_STRUCT_*, long)
System.Threading.Tasks.Dataflow.dll!System.Threading.Tasks.SingleProducerSingleConsumerQueue`1+Segment[[System.Int32, System.Private.CoreLib]]..ctor(Int32)
System.Threading.Tasks.Dataflow.dll!System.Threading.Tasks.SingleProducerSingleConsumerQueue`1[[System.Int32, System.Private.CoreLib]]..ctor()
System.Threading.Tasks.Dataflow.dll!System.Threading.Tasks.Dataflow.Internal.SourceCore`1[[System.Int32, System.Private.CoreLib]]..ctor(System.Threading.Tasks.Dataflow.ISourceBlock`1, System.Threading.Tasks.Dataflow.DataflowBlockOptions, System.Action`1>, System.Action`2,Int32>, System.Func`4,Int32,System.Collections.Generic.IList`1,Int32>)
**Looking at the source line from frame 8, it appears we get MAP_FAILED back from mmap() and assert with the message ""mmap() returned an abnormal value.\n"".**
(lldb) fr s 8
frame #8: 0x00007fe6b7267c40 libcoreclr.so`::VirtualFree(lpAddress=0x00007fe621e2d000, dwSize=1617920, dwFreeType=16384) + 2192 at virtual.cpp:1844
(lldb) fr v
(LPVOID) lpAddress = 0x00007fe621e2d000
(SIZE_T) dwSize = 1617920
(DWORD) dwFreeType = 16384
(BOOL) bRetVal = YES
(CorUnix::CPalThread *) pthrCurrent = 0x00007fe5e0014170
(UINT_PTR) StartBoundary = 140626387718144
(SIZE_T) MemSize = 1617920
(PCMI) pUnCommittedMem = 0x00000000010c1bf0
(PAL_EnterHolder) __holder = (m_fEntered = NO, m_palError = 0)
#if MMAP_DOESNOT_ALLOW_REMAP
    // if no double mapping is supported,
    // just mprotect the memory with no access
    if (mprotect((LPVOID)StartBoundary, MemSize, PROT_NONE) == 0)
#else // MMAP_DOESNOT_ALLOW_REMAP
    // Explicitly calling mmap instead of mprotect here makes it
    // that much more clear to the operating system that we no
    // longer need these pages.
#if RESERVE_FROM_BACKING_FILE
    if ( mmap( (LPVOID)StartBoundary, MemSize, PROT_NONE,
               MAP_FIXED | MAP_PRIVATE, gBackingFile,
               (char *) StartBoundary - (char *) gBackingBaseAddress ) !=
         MAP_FAILED )
#else // RESERVE_FROM_BACKING_FILE
    if ( mmap( (LPVOID)StartBoundary, MemSize, PROT_NONE,
               MAP_FIXED | MAP_ANON | MAP_PRIVATE, -1, 0 ) != MAP_FAILED )
#endif // RESERVE_FROM_BACKING_FILE
#endif // MMAP_DOESNOT_ALLOW_REMAP
    {
#if (MMAP_ANON_IGNORES_PROTECTION && !MMAP_DOESNOT_ALLOW_REMAP)
        if (mprotect((LPVOID) StartBoundary, MemSize, PROT_NONE) != 0)
        {
            ASSERT(""mprotect failed to protect the region!\n"");
            pthrCurrent->SetLastError(ERROR_INTERNAL_ERROR);
            munmap((LPVOID) StartBoundary, MemSize);
            bRetVal = FALSE;
            goto VirtualFreeExit;
        }
#endif // MMAP_ANON_IGNORES_PROTECTION && !MMAP_DOESNOT_ALLOW_REMAP
        SIZE_T index = 0;
        SIZE_T nNumOfPagesToChange = 0;
        /* We can now commit this memory by calling VirtualAlloc().*/
        index = (StartBoundary - pUnCommittedMem->startBoundary) / VIRTUAL_PAGE_SIZE;
        nNumOfPagesToChange = MemSize / VIRTUAL_PAGE_SIZE;
        VIRTUALSetAllocState( MEM_RESERVE, index,
                              nNumOfPagesToChange, pUnCommittedMem );
#if MMAP_DOESNOT_ALLOW_REMAP
        VIRTUALSetDirtyPages( 1, index,
                              nNumOfPagesToChange, pUnCommittedMem );
#endif // MMAP_DOESNOT_ALLOW_REMAP
        goto VirtualFreeExit;
    }
    else
    {
        ASSERT( ""mmap() returned an abnormal value.\n"" );
        bRetVal = FALSE;
        pthrCurrent->SetLastError( ERROR_INTERNAL_ERROR );
        goto VirtualFreeExit;
    }
}",True,"SIGABRT_ASSERT_libcoreclr.so!VirtualFree - **The notes in this bug refer to the Ubuntu.14.04 dump [rc3-24131-01_001F](https://rapreqs.blob.core.windows.net/sschaab/BodyPart_826333fd-2d98-42db-8555-4d56d6c189ff?sv=2015-04-05&sr=b&sig=4Q1DOMWTz8yM8DZdYs6mJRCfNYrczUJInJW8C9K%2FDY0%3D&st=2016-06-03T22%3A02%3A31Z&se=2017-06-03T22%3A02%3A31Z&sp=r). Other dumps are available if needed.**
**This issue seems likely to be related to issue [#5188](https://github.com/dotnet/coreclr/issues/5188) however with a different failure pattern on CHK builds where we assert on VirtualFree**
STOP_REASON:
SIGABRT
FAULT_SYMBOL:
libcoreclr.so!VirtualFree
FAILURE_HASH:
SIGABRT_libcoreclr.so!VirtualFree
FAULT_STACK:
libc.so.6!__GI_raise
libc.so.6!__GI_abort
libcoreclr.so!UNKNOWN
libcoreclr.so!sigtrap_handler(int, siginfo_t*, void*)
libclrjit.so!sigtrap_handler(int, siginfo_t*, void*)
libpthread.so.0!???
libcoreclr.so!DBG_DebugBreak
libcoreclr.so!DebugBreak
libcoreclr.so!VirtualFree
libcoreclr.so!EEVirtualFree(void*, unsigned long, unsigned int)
libcoreclr.so!CExecutionEngine::ClrVirtualFree(void*, unsigned long, unsigned int)
libcoreclr.so!non-virtual thunk to CExecutionEngine::ClrVirtualFree(void*, unsigned long, unsigned int)
libcoreclr.so!ClrVirtualFree(void*, unsigned long, unsigned int)
libcoreclr.so!GCToOSInterface::VirtualDecommit(void*, unsigned long)
libcoreclr.so!WKS::gc_heap::decommit_heap_segment_pages(WKS::heap_segment*, unsigned long)
libcoreclr.so!WKS::gc_heap::decommit_ephemeral_segment_pages()
libcoreclr.so!WKS::gc_heap::gc1()
libcoreclr.so!WKS::gc_heap::garbage_collect(int)
libcoreclr.so!WKS::GCHeap::GarbageCollectGeneration(unsigned int, WKS::gc_reason)
libcoreclr.so!WKS::gc_heap::trigger_full_compact_gc(WKS::gc_reason, oom_reason*)
libcoreclr.so!WKS::gc_heap::allocate_small(int, unsigned long, alloc_context*, int)
libcoreclr.so!WKS::gc_heap::try_allocate_more_space(alloc_context*, unsigned long, int)
libcoreclr.so!WKS::gc_heap::allocate_more_space(alloc_context*, unsigned long, int)
libcoreclr.so!WKS::gc_heap::allocate(unsigned long, alloc_context*)
libcoreclr.so!WKS::GCHeap::Alloc(alloc_context*, unsigned long, unsigned int)
libcoreclr.so!Alloc(unsigned long, int, int)
libcoreclr.so!FastAllocatePrimitiveArray(MethodTable*, unsigned int, int)
libcoreclr.so!JIT_NewArr1(CORINFO_CLASS_STRUCT_*, long)
libcoreclr.so!JIT_NewArr1VC_MP_FastPortable(CORINFO_CLASS_STRUCT_*, long)
System.Threading.Tasks.Dataflow.dll!System.Threading.Tasks.SingleProducerSingleConsumerQueue`1+Segment[[System.Int32, System.Private.CoreLib]]..ctor(Int32)
System.Threading.Tasks.Dataflow.dll!System.Threading.Tasks.SingleProducerSingleConsumerQueue`1[[System.Int32, System.Private.CoreLib]]..ctor()
System.Threading.Tasks.Dataflow.dll!System.Threading.Tasks.Dataflow.Internal.SourceCore`1[[System.Int32, System.Private.CoreLib]]..ctor(System.Threading.Tasks.Dataflow.ISourceBlock`1, System.Threading.Tasks.Dataflow.DataflowBlockOptions, System.Action`1>, System.Action`2,Int32>, System.Func`4,Int32,System.Collections.Generic.IList`1,Int32>)
**Looking at the source line from frame 8 it looks like we get back MAP_FAILED from mmap() and assert with the message ""mmap() returned an abnormal value.\n""**
(lldb) fr s 8
frame #8: 0x00007fe6b7267c40 libcoreclr.so`::VirtualFree(lpAddress=0x00007fe621e2d000, dwSize=1617920, dwFreeType=16384) + 2192 at virtual.cpp:1844
(lldb) fr v
(LPVOID) lpAddress = 0x00007fe621e2d000
(SIZE_T) dwSize = 1617920
(DWORD) dwFreeType = 16384
(BOOL) bRetVal = YES
(CorUnix::CPalThread *) pthrCurrent = 0x00007fe5e0014170
(UINT_PTR) StartBoundary = 140626387718144
(SIZE_T) MemSize = 1617920
(PCMI) pUnCommittedMem = 0x00000000010c1bf0
(PAL_EnterHolder) __holder = (m_fEntered = NO, m_palError = 0)
#if MMAP_DOESNOT_ALLOW_REMAP
// if no double mapping is supported,
// just mprotect the memory with no access
if (mprotect((LPVOID)StartBoundary, MemSize, PROT_NONE) == 0)
#else // MMAP_DOESNOT_ALLOW_REMAP
// Explicitly calling mmap instead of mprotect here makes it
// that much more clear to the operating system that we no
// longer need these pages.
#if RESERVE_FROM_BACKING_FILE
if ( mmap( (LPVOID)StartBoundary, MemSize, PROT_NONE,
MAP_FIXED | MAP_PRIVATE, gBackingFile,
(char *) StartBoundary - (char *) gBackingBaseAddress ) !=
MAP_FAILED )
#else // RESERVE_FROM_BACKING_FILE
if ( mmap( (LPVOID)StartBoundary, MemSize, PROT_NONE,
MAP_FIXED | MAP_ANON | MAP_PRIVATE, -1, 0 ) != MAP_FAILED )
#endif // RESERVE_FROM_BACKING_FILE
#endif // MMAP_DOESNOT_ALLOW_REMAP
{
#if (MMAP_ANON_IGNORES_PROTECTION && !MMAP_DOESNOT_ALLOW_REMAP)
if (mprotect((LPVOID) StartBoundary, MemSize, PROT_NONE) != 0)
{
ASSERT(""mprotect failed to protect the region!\n"");
pthrCurrent->SetLastError(ERROR_INTERNAL_ERROR);
munmap((LPVOID) StartBoundary, MemSize);
bRetVal = FALSE;
goto VirtualFreeExit;
}
#endif // MMAP_ANON_IGNORES_PROTECTION && !MMAP_DOESNOT_ALLOW_REMAP
SIZE_T index = 0;
SIZE_T nNumOfPagesToChange = 0;
/* We can now commit this memory by calling VirtualAlloc().*/
index = (StartBoundary - pUnCommittedMem->startBoundary) / VIRTUAL_PAGE_SIZE;
nNumOfPagesToChange = MemSize / VIRTUAL_PAGE_SIZE;
VIRTUALSetAllocState( MEM_RESERVE, index,
nNumOfPagesToChange, pUnCommittedMem );
#if MMAP_DOESNOT_ALLOW_REMAP
VIRTUALSetDirtyPages( 1, index,
nNumOfPagesToChange, pUnCommittedMem );
#endif // MMAP_DOESNOT_ALLOW_REMAP
goto VirtualFreeExit;
}
else
{
ASSERT( ""mmap() returned an abnormal value.\n"" );
bRetVal = FALSE;
pthrCurrent->SetLastError( ERROR_INTERNAL_ERROR );
goto VirtualFreeExit;
}",1,sigabrt assert libcoreclr so virtualfree the notes in this bug refer to the ubuntu dump other dumps are available if needed this issue seems likely to be related to issue however with a different failure pattern on chk builds where we assert on virtualfree stop reason sigabrt fault symbol libcoreclr so virtualfree failure hash sigabrt libcoreclr so virtualfree fault stack libc so gi raise libc so gi abort libcoreclr so unknown libcoreclr so sigtrap handler int siginfo t void libclrjit so sigtrap handler int siginfo t void libpthread so libcoreclr so dbg debugbreak libcoreclr so debugbreak libcoreclr so virtualfree libcoreclr so eevirtualfree void unsigned long unsigned int libcoreclr so cexecutionengine clrvirtualfree void unsigned long unsigned int libcoreclr so non virtual thunk to cexecutionengine clrvirtualfree void unsigned long unsigned int libcoreclr so clrvirtualfree void unsigned long unsigned int libcoreclr so gctoosinterface virtualdecommit void unsigned long libcoreclr so wks gc heap decommit heap segment pages wks heap segment unsigned long libcoreclr so wks gc heap decommit ephemeral segment pages libcoreclr so wks gc heap libcoreclr so wks gc heap garbage collect int libcoreclr so wks gcheap garbagecollectgeneration unsigned int wks gc reason libcoreclr so wks gc heap trigger full compact gc wks gc reason oom reason libcoreclr so wks gc heap allocate small int unsigned long alloc context int libcoreclr so wks gc heap try allocate more space alloc context unsigned long int libcoreclr so wks gc heap allocate more space alloc context unsigned long int libcoreclr so wks gc heap allocate unsigned long alloc context libcoreclr so wks gcheap alloc alloc context unsigned long unsigned int libcoreclr so alloc unsigned long int int libcoreclr so fastallocateprimitivearray methodtable unsigned int int libcoreclr so jit corinfo class struct long libcoreclr so jit mp fastportable corinfo class struct long system threading tasks dataflow dll system threading 
tasks singleproducersingleconsumerqueue segment ctor system threading tasks dataflow dll system threading tasks singleproducersingleconsumerqueue ctor system threading tasks dataflow dll system threading tasks dataflow internal sourcecore ctor system threading tasks dataflow isourceblock system threading tasks dataflow dataflowblockoptions system action system action system func system collections generic ilist looking at the source line from frame it looks like we get back map failed from mmap and assert with the message mmap returned an abnormal value n lldb fr s frame libcoreclr so virtualfree lpaddress dwsize dwfreetype at virtual cpp lldb fr v lpvoid lpaddress size t dwsize dword dwfreetype bool bretval yes corunix cpalthread pthrcurrent uint ptr startboundary size t memsize pcmi puncommittedmem pal enterholder holder m fentered no m palerror if mmap doesnot allow remap if no double mapping is supported just mprotect the memory with no access if mprotect lpvoid startboundary memsize prot none else mmap doesnot allow remap explicitly calling mmap instead of mprotect here makes it that much more clear to the operating system that we no longer need these pages if reserve from backing file if mmap lpvoid startboundary memsize prot none map fixed map private gbackingfile char startboundary char gbackingbaseaddress map failed else reserve from backing file if mmap lpvoid startboundary memsize prot none map fixed map anon map private map failed endif reserve from backing file endif mmap doesnot allow remap if mmap anon ignores protection mmap doesnot allow remap if mprotect lpvoid startboundary memsize prot none assert mprotect failed to protect the region n pthrcurrent setlasterror error internal error munmap lpvoid startboundary memsize bretval false goto virtualfreeexit endif mmap anon ignores protection mmap doesnot allow remap size t index size t nnumofpagestochange we can now commit this memory by calling virtualalloc index startboundary puncommittedmem 
startboundary virtual page size nnumofpagestochange memsize virtual page size virtualsetallocstate mem reserve index nnumofpagestochange puncommittedmem if mmap doesnot allow remap virtualsetdirtypages index nnumofpagestochange puncommittedmem endif mmap doesnot allow remap goto virtualfreeexit else assert mmap returned an abnormal value n bretval false pthrcurrent setlasterror error internal error goto virtualfreeexit ,1
1051,12529886404.0,IssuesEvent,2020-06-04 12:09:41,sohaibaslam/learning_site,https://api.github.com/repos/sohaibaslam/learning_site,opened,"Broken Crawlers 04, Jun 2020",crawler broken/unreliable,"1. **abcmart kr(100%)**
1. **adler de(100%)**
1. **aldo eu(100%)**
1. **alexandermcqueen cn(100%)**
1. **americaneagle ca(100%)**
1. **ami dk(100%)/jp(100%)/uk(100%)**
1. **anthropologie (100%)/fr(100%)/uk(100%)**
1. **argos uk(100%)**
1. **armandthiery fr(100%)**
1. **armedangels de(100%)**
1. **asos (100%)/ae(100%)/au(100%)/ch(100%)/cn(100%)/hk(100%)/id(100%)/my(100%)/nl(100%)/ph(100%)/pl(100%)/ru(100%)/sa(100%)/sg(100%)/th(100%)/vn(100%)**
1. **avenue us(100%)**
1. **babyshop ae(100%)/sa(100%)**
1. **balr es(100%)/fr(100%)/nl(100%)**
1. **burlington us(100%)**
1. **calvinklein us(100%)**
1. **central th(100%)**
1. **centrepoint ae(100%)**
1. **champion eu(100%)/fr(100%)**
1. **coldwatercreek us(100%)**
1. **conforama fr(100%)**
1. **converse at(100%)/au(100%)/de(100%)**
1. **cotton au(100%)**
1. **countryroad (100%)**
1. **davidjones (100%)**
1. **drmartens de(100%)/es(100%)/eu(100%)/fr(100%)/it(100%)/nl(100%)/uk(100%)/us(100%)**
1. **elcorteingles es(100%)**
1. **ellos fi(100%)/no(100%)/se(100%)**
1. **footaction us(100%)**
1. **footlocker (52%)/be(100%)/de(100%)/dk(100%)/es(100%)/fr(100%)/it(100%)/lu(100%)/nl(100%)/no(100%)/se(100%)/uk(100%)**
1. **forloveandlemons de(100%)**
1. **fredperry (100%)/us(100%)**
1. **gapfactory us(100%)**
1. **goodhood uk(100%)**
1. **harrods (100%)**
1. **hermes at(100%)/ca(100%)/it(50%)/nl(67%)/us(88%)**
1. **hm hk(53%)/kw(100%)/ro(100%)/sa(100%)**
1. **hollister cn(100%)**
1. **hush uk(100%)**
1. **isetan jp(100%)**
1. **kupivip ru(100%)**
1. **laredouteapi es(100%)**
1. **lefties es(100%)/pt(100%)**
1. **lifestylestores in(100%)**
1. **liverpool mx(100%)**
1. **luigibertolli br(100%)**
1. **maccosmetics uk(100%)**
1. **michaelkors ca(100%)**
1. **mothercare sa(100%)**
1. **muji de(100%)**
1. **next hk(100%)/jp(100%)/kr(100%)/nz(100%)**
1. **oasis (100%)**
1. **parfois ma(100%)**
1. **peterhahn de(100%)**
1. **popup br(100%)**
1. **prettysecrets in(100%)**
1. **pullandbear gt(100%)**
1. **ralphlauren gr(99%)/sk(100%)**
1. **reebok ch(100%)/de(100%)**
1. **reserved ro(100%)**
1. **runnerspoint de(100%)**
1. **saksfifthavenue mo(100%)/ru(100%)/tw(100%)**
1. **sandroatjd cn(100%)**
1. **selfridges cn(100%)/de(100%)/es(100%)/ie(100%)/mo(100%)/sa(100%)/sg(100%)**
1. **sephora us(100%)**
1. **sfera es(100%)**
1. **simons ca(100%)**
1. **snkrs eu(100%)/fr(100%)**
1. **soccer us(100%)**
1. **solebox de(100%)/uk(100%)**
1. **stefaniamode dk(100%)**
1. **stories nl(100%)**
1. **stylebop (100%)/au(100%)/ca(100%)/es(100%)/fr(100%)/hk(100%)/jp(100%)/mo(100%)**
1. **suistudio eu(100%)/uk(100%)**
1. **suitsupply at(100%)/de(100%)/es(100%)/fi(100%)/fr(100%)/it(100%)/no(100%)**
1. **tods cn(100%)/gr(100%)/pt(100%)**
1. **tommybahama ae(91%)/ch(100%)/de(100%)/hu(25%)/in(100%)/kr(100%)/ph(100%)/sa(100%)/sg(100%)/za(100%)**
1. **topbrands ru(100%)**
1. **valentino cn(100%)**
1. **walmart ca(100%)**
1. **warehouse (100%)/au(100%)/ca(100%)/ie(100%)/nl(100%)/nz(100%)/se(100%)**
1. **wayfair de(100%)/uk(100%)**
1. **zalando it(100%)**
1. **zalandolounge de(100%)**
1. **zalora ph(100%)**
1. tommyjohn us(98%)
1. vip cn(89%)
1. 24sevres eu(84%)/fr(86%)/uk(84%)/us(81%)
1. diesel cn(85%)
1. leroymerlin fr(83%)
1. noon sa(81%)
1. watchshop ru(80%)
1. burberry (69%)/ae(66%)/at(58%)/au(56%)/be(73%)/bg(61%)/ca(57%)/ch(61%)/cz(50%)/de(48%)/dk(65%)/es(47%)/fi(50%)/fr(62%)/hk(48%)/hu(38%)/ie(54%)/it(65%)/jp(49%)/kr(61%)/my(51%)/nl(44%)/pl(55%)/pt(56%)/ro(59%)/ru(55%)/se(61%)/sg(56%)/si(65%)/sk(54%)/tr(50%)/tw(57%)/us(61%)
1. saksoff5th us(69%)
1. ssense ca(69%)
1. adidas kr(62%)/my(20%)
1. underarmour us(59%)
1. neimanmarcus jp(57%)
1. lululemon cn(55%)
1. scotchandsoda ca(47%)/us(36%)
1. camper ca(41%)/es(33%)/se(21%)/us(46%)
1. jelmoli ch(46%)
1. theory (44%)
1. hibbett us(43%)
1. aloyoga us(41%)
1. levi es(39%)
1. navabi uk(39%)
1. moncler ca(30%)/ch(28%)/cn(22%)/de(28%)/es(30%)/fr(30%)/it(29%)/jp(29%)/kr(36%)/ru(28%)/uk(30%)/us(33%)
1. nayomi sa(36%)
1. cos kr(34%)
1. paris cl(34%)
1. joseph de(33%)/eu(33%)/uk(33%)/us(33%)
1. openingceremony us(33%)
1. riachuelo br(32%)
1. gap hk(31%)
1. rivafashion qa(31%)
1. onitsukatigerjd cn(29%)
1. bonita de(28%)
1. arket eu(26%)/uk(27%)
1. melijoe uk(27%)
1. marksandspencer ru(26%)
1. strellson at(25%)/ch(26%)/de(26%)
1. deichmann ro(24%)
1. gant it(24%)/uk(24%)
1. koovs in(24%)
1. fjallraven ca(23%)
1. lcwaikiki bg(21%)/pl(21%)/ro(21%)/ua(22%)/uk(23%)
1. bonpoint us(22%)
1. nisnass kw(20%)/sa(22%)
1. browns au(21%)
1. nike ie(21%)/sg(20%)
1. target (21%)
1. yoox us(21%)
1. lululemonattmall cn(20%)
1. thenorthface cn(20%)
",True,"Broken Crawlers 04, Jun 2020 - 1. **abcmart kr(100%)**
1. **adler de(100%)**
1. **aldo eu(100%)**
1. **alexandermcqueen cn(100%)**
1. **americaneagle ca(100%)**
1. **ami dk(100%)/jp(100%)/uk(100%)**
1. **anthropologie (100%)/fr(100%)/uk(100%)**
1. **argos uk(100%)**
1. **armandthiery fr(100%)**
1. **armedangels de(100%)**
1. **asos (100%)/ae(100%)/au(100%)/ch(100%)/cn(100%)/hk(100%)/id(100%)/my(100%)/nl(100%)/ph(100%)/pl(100%)/ru(100%)/sa(100%)/sg(100%)/th(100%)/vn(100%)**
1. **avenue us(100%)**
1. **babyshop ae(100%)/sa(100%)**
1. **balr es(100%)/fr(100%)/nl(100%)**
1. **burlington us(100%)**
1. **calvinklein us(100%)**
1. **central th(100%)**
1. **centrepoint ae(100%)**
1. **champion eu(100%)/fr(100%)**
1. **coldwatercreek us(100%)**
1. **conforama fr(100%)**
1. **converse at(100%)/au(100%)/de(100%)**
1. **cotton au(100%)**
1. **countryroad (100%)**
1. **davidjones (100%)**
1. **drmartens de(100%)/es(100%)/eu(100%)/fr(100%)/it(100%)/nl(100%)/uk(100%)/us(100%)**
1. **elcorteingles es(100%)**
1. **ellos fi(100%)/no(100%)/se(100%)**
1. **footaction us(100%)**
1. **footlocker (52%)/be(100%)/de(100%)/dk(100%)/es(100%)/fr(100%)/it(100%)/lu(100%)/nl(100%)/no(100%)/se(100%)/uk(100%)**
1. **forloveandlemons de(100%)**
1. **fredperry (100%)/us(100%)**
1. **gapfactory us(100%)**
1. **goodhood uk(100%)**
1. **harrods (100%)**
1. **hermes at(100%)/ca(100%)/it(50%)/nl(67%)/us(88%)**
1. **hm hk(53%)/kw(100%)/ro(100%)/sa(100%)**
1. **hollister cn(100%)**
1. **hush uk(100%)**
1. **isetan jp(100%)**
1. **kupivip ru(100%)**
1. **laredouteapi es(100%)**
1. **lefties es(100%)/pt(100%)**
1. **lifestylestores in(100%)**
1. **liverpool mx(100%)**
1. **luigibertolli br(100%)**
1. **maccosmetics uk(100%)**
1. **michaelkors ca(100%)**
1. **mothercare sa(100%)**
1. **muji de(100%)**
1. **next hk(100%)/jp(100%)/kr(100%)/nz(100%)**
1. **oasis (100%)**
1. **parfois ma(100%)**
1. **peterhahn de(100%)**
1. **popup br(100%)**
1. **prettysecrets in(100%)**
1. **pullandbear gt(100%)**
1. **ralphlauren gr(99%)/sk(100%)**
1. **reebok ch(100%)/de(100%)**
1. **reserved ro(100%)**
1. **runnerspoint de(100%)**
1. **saksfifthavenue mo(100%)/ru(100%)/tw(100%)**
1. **sandroatjd cn(100%)**
1. **selfridges cn(100%)/de(100%)/es(100%)/ie(100%)/mo(100%)/sa(100%)/sg(100%)**
1. **sephora us(100%)**
1. **sfera es(100%)**
1. **simons ca(100%)**
1. **snkrs eu(100%)/fr(100%)**
1. **soccer us(100%)**
1. **solebox de(100%)/uk(100%)**
1. **stefaniamode dk(100%)**
1. **stories nl(100%)**
1. **stylebop (100%)/au(100%)/ca(100%)/es(100%)/fr(100%)/hk(100%)/jp(100%)/mo(100%)**
1. **suistudio eu(100%)/uk(100%)**
1. **suitsupply at(100%)/de(100%)/es(100%)/fi(100%)/fr(100%)/it(100%)/no(100%)**
1. **tods cn(100%)/gr(100%)/pt(100%)**
1. **tommybahama ae(91%)/ch(100%)/de(100%)/hu(25%)/in(100%)/kr(100%)/ph(100%)/sa(100%)/sg(100%)/za(100%)**
1. **topbrands ru(100%)**
1. **valentino cn(100%)**
1. **walmart ca(100%)**
1. **warehouse (100%)/au(100%)/ca(100%)/ie(100%)/nl(100%)/nz(100%)/se(100%)**
1. **wayfair de(100%)/uk(100%)**
1. **zalando it(100%)**
1. **zalandolounge de(100%)**
1. **zalora ph(100%)**
1. tommyjohn us(98%)
1. vip cn(89%)
1. 24sevres eu(84%)/fr(86%)/uk(84%)/us(81%)
1. diesel cn(85%)
1. leroymerlin fr(83%)
1. noon sa(81%)
1. watchshop ru(80%)
1. burberry (69%)/ae(66%)/at(58%)/au(56%)/be(73%)/bg(61%)/ca(57%)/ch(61%)/cz(50%)/de(48%)/dk(65%)/es(47%)/fi(50%)/fr(62%)/hk(48%)/hu(38%)/ie(54%)/it(65%)/jp(49%)/kr(61%)/my(51%)/nl(44%)/pl(55%)/pt(56%)/ro(59%)/ru(55%)/se(61%)/sg(56%)/si(65%)/sk(54%)/tr(50%)/tw(57%)/us(61%)
1. saksoff5th us(69%)
1. ssense ca(69%)
1. adidas kr(62%)/my(20%)
1. underarmour us(59%)
1. neimanmarcus jp(57%)
1. lululemon cn(55%)
1. scotchandsoda ca(47%)/us(36%)
1. camper ca(41%)/es(33%)/se(21%)/us(46%)
1. jelmoli ch(46%)
1. theory (44%)
1. hibbett us(43%)
1. aloyoga us(41%)
1. levi es(39%)
1. navabi uk(39%)
1. moncler ca(30%)/ch(28%)/cn(22%)/de(28%)/es(30%)/fr(30%)/it(29%)/jp(29%)/kr(36%)/ru(28%)/uk(30%)/us(33%)
1. nayomi sa(36%)
1. cos kr(34%)
1. paris cl(34%)
1. joseph de(33%)/eu(33%)/uk(33%)/us(33%)
1. openingceremony us(33%)
1. riachuelo br(32%)
1. gap hk(31%)
1. rivafashion qa(31%)
1. onitsukatigerjd cn(29%)
1. bonita de(28%)
1. arket eu(26%)/uk(27%)
1. melijoe uk(27%)
1. marksandspencer ru(26%)
1. strellson at(25%)/ch(26%)/de(26%)
1. deichmann ro(24%)
1. gant it(24%)/uk(24%)
1. koovs in(24%)
1. fjallraven ca(23%)
1. lcwaikiki bg(21%)/pl(21%)/ro(21%)/ua(22%)/uk(23%)
1. bonpoint us(22%)
1. nisnass kw(20%)/sa(22%)
1. browns au(21%)
1. nike ie(21%)/sg(20%)
1. target (21%)
1. yoox us(21%)
1. lululemonattmall cn(20%)
1. thenorthface cn(20%)
",1,broken crawlers jun abcmart kr adler de aldo eu alexandermcqueen cn americaneagle ca ami dk jp uk anthropologie fr uk argos uk armandthiery fr armedangels de asos ae au ch cn hk id my nl ph pl ru sa sg th vn avenue us babyshop ae sa balr es fr nl burlington us calvinklein us central th centrepoint ae champion eu fr coldwatercreek us conforama fr converse at au de cotton au countryroad davidjones drmartens de es eu fr it nl uk us elcorteingles es ellos fi no se footaction us footlocker be de dk es fr it lu nl no se uk forloveandlemons de fredperry us gapfactory us goodhood uk harrods hermes at ca it nl us hm hk kw ro sa hollister cn hush uk isetan jp kupivip ru laredouteapi es lefties es pt lifestylestores in liverpool mx luigibertolli br maccosmetics uk michaelkors ca mothercare sa muji de next hk jp kr nz oasis parfois ma peterhahn de popup br prettysecrets in pullandbear gt ralphlauren gr sk reebok ch de reserved ro runnerspoint de saksfifthavenue mo ru tw sandroatjd cn selfridges cn de es ie mo sa sg sephora us sfera es simons ca snkrs eu fr soccer us solebox de uk stefaniamode dk stories nl stylebop au ca es fr hk jp mo suistudio eu uk suitsupply at de es fi fr it no tods cn gr pt tommybahama ae ch de hu in kr ph sa sg za topbrands ru valentino cn walmart ca warehouse au ca ie nl nz se wayfair de uk zalando it zalandolounge de zalora ph tommyjohn us vip cn eu fr uk us diesel cn leroymerlin fr noon sa watchshop ru burberry ae at au be bg ca ch cz de dk es fi fr hk hu ie it jp kr my nl pl pt ro ru se sg si sk tr tw us us ssense ca adidas kr my underarmour us neimanmarcus jp lululemon cn scotchandsoda ca us camper ca es se us jelmoli ch theory hibbett us aloyoga us levi es navabi uk moncler ca ch cn de es fr it jp kr ru uk us nayomi sa cos kr paris cl joseph de eu uk us openingceremony us riachuelo br gap hk rivafashion qa onitsukatigerjd cn bonita de arket eu uk melijoe uk marksandspencer ru strellson at ch de deichmann ro gant it uk koovs in fjallraven ca 
lcwaikiki bg pl ro ua uk bonpoint us nisnass kw sa browns au nike ie sg target yoox us lululemonattmall cn thenorthface cn ,1
373951,11053188766.0,IssuesEvent,2019-12-10 10:51:07,eclipse/codewind,https://api.github.com/repos/eclipse/codewind,closed,Maximum call stack size exceeded in PFE logs on Codewind,area/portal kind/bug priority/stopship,"I'm seeing the following in the PFE logs (latest images) when running on Che just after a docker build and push finishes for Generic docker projects:
```
[26/11/19 22:11:31 FileWatcher.js] [ERROR] RangeError: Maximum call stack size exceeded
at Function.[Symbol.hasInstance] ()
at Function.isBuffer (buffer.js:430:12)
at hasBinary (/portal/node_modules/has-binary2/index.js:44:66)
at hasBinary (/portal/node_modules/has-binary2/index.js:58:59)
at hasBinary (/portal/node_modules/has-binary2/index.js:58:59)
at hasBinary (/portal/node_modules/has-binary2/index.js:58:59)
at hasBinary (/portal/node_modules/has-binary2/index.js:58:59)
at hasBinary (/portal/node_modules/has-binary2/index.js:58:59)
at hasBinary (/portal/node_modules/has-binary2/index.js:58:59)
at hasBinary (/portal/node_modules/has-binary2/index.js:58:59)
```
It also appears to cause PFE to hang, as it prevented the app and build status from updating. Despite the build finishing and the app deploying, it was stuck in `Building - Creating Image`.
313970,26966348089.0,IssuesEvent,2023-02-08 22:47:51,opengeospatial/te-releases,https://api.github.com/repos/opengeospatial/te-releases,closed,OGC API - Processes 1.0 revision 0.4 in Beta,approved-by-test-lead ready-for-installation,"#### Prepare release (for each test suite individually)
- [x] The master branch should contain the latest ""stable"" version. Make sure all verified pull requests are merged.
- [x] Make sure it builds properly running `mvn clean install site -Dsource=8 -Pintegration-tests,docker`.
- [x] Test locally the master branch with the [reference implementation](https://github.com/opengeospatial/cite/wiki/Reference-Implementations), if there is one. If there is none, any other implementation can also be used for a local test.
- If it is OK, continue, if not create or update an issue, in the issue tracker of that test.
- [x] Review closed issues to be included in the release.
- Update title if appropriate.
- Tag, if they are not tagged with the milestone number.
- [x] Update release notes, usually the file is at src/site/asciidoc/changelog.adoc.
- Add a new title for the revision, if there is not one already.
- Copy the issues to be included in the release, link and title.
- Add any other highlights.
- [x] Create a new issue in the [OGC release tracker](https://github.com/opengeospatial/te-releases/issues) and set label `approved-by-test-lead`.
 - Write a title for the issue containing: [Abbreviation] [version] revision [revision] in *Beta*. For example: WFS 1.1 revision 1.23 in Beta. It should always be in Beta. OGC staff will create the issue related to making the releases on the production, official web site.
2666,26927631103.0,IssuesEvent,2023-02-07 14:48:33,aksio-insurtech/Cratis,https://api.github.com/repos/aksio-insurtech/Cratis,closed,Improve connected clients system,reliability,"We have today a `ConnectedClient` grain. Its purpose is to track all connected clients and to provide the ability to observe when clients disconnect. This is used for instance by the `ClientObservers` system that then will tell any `Subscribed` observers that it should `Unsubscribe` as there will be no receiver in the namespaced stream.
Today `ConnectedClient` keeps track of which clients are connected in memory. With this it will not reliably track in a cluster which clients are connected. We want it to become a stateful grain.
With the `ConnectedClient` grain being a singleton in a silo cluster, the grain can move around. When this occurs, the state is lost. The move can be abrupt and not cleanly shut down.
We want to leverage the `ISiloStatusOracle` to track the general silo status:
- Clients connecting are connected for the silo they are connecting to
- If silo goes down, during OnActivateAsync() we will get members of the cluster and remove all connected clients for silos that are no longer with us
- Subscribe to silo changes (`SubscribeToSiloStatusEvents`)
- Make `GetLastConnectedClientConnectionId()` for connected clients to be for specific silo - client needs to know which silo it is connected to.
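The bookkeeping in the bullets above can be modeled in a few lines. The following is a minimal Python sketch of that model only — the real implementation would be a stateful Orleans grain reacting to `ISiloStatusOracle` callbacks, and all names here are hypothetical:

```python
class ConnectedClientsTracker:
    """Toy model of per-silo connected-client bookkeeping.

    Hypothetical names throughout; the actual system would be a
    stateful grain subscribed to silo status events.
    """

    def __init__(self):
        # silo address -> ordered list of connection ids for that silo
        self._clients_by_silo = {}

    def client_connected(self, silo, connection_id):
        # Clients are tracked against the silo they connect to.
        self._clients_by_silo.setdefault(silo, []).append(connection_id)

    def prune_dead_silos(self, active_silos):
        # On activation (or on a silo-status change), drop every client
        # that was connected to a silo no longer in the cluster.
        for silo in list(self._clients_by_silo):
            if silo not in active_silos:
                del self._clients_by_silo[silo]

    def last_connection_id(self, silo):
        # Per-silo variant of GetLastConnectedClientConnectionId():
        # the client must know which silo it is connected to.
        connections = self._clients_by_silo.get(silo, [])
        return connections[-1] if connections else None
```

The key design point illustrated is that all state is keyed by silo, so a dead silo's clients can be removed wholesale.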
Below are the flows we want to support:


",True,"Improve connected clients system - We have today a `ConnectedClient` grain. Its purpose is to track all connected clients and provides the ability to observe when clients disconnect. This is used for instance by the `ClientObservers` system that then will tell any `Subscribed` observers that it should `Unsubscribe` as their will be no receiver in the namespaced stream.
Today `ConnectedClient` keeps track of which clients are connected in memory. With this it will not reliably track in a cluster which clients are connected. We want it to become a stateful grain.
With the `ConnectedClient` grain being a singleton in a silo cluster, the grain can move around. When this occurs, the state is lost. The move can be abrupt and not cleanly shut down.
We want to leverage the `ISiloStatusOracle` to track the general silo status:
- Clients connecting are connected for the silo they are connecting to
- If silo goes down, during OnActivateAsync() we will get members of the cluster and remove all connected clients for silos that are no longer with us
- Subscribe to silo changes (`SubscribeToSiloStatusEvents`)
- Make `GetLastConnectedClientConnectionId()` for connected clients to be for specific silo - client needs to know which silo it is connected to.
Below are the flows we want to support:


",1,improve connected clients system we have today a connectedclient grain its purpose is to track all connected clients and provides the ability to observe when clients disconnect this is used for instance by the clientobservers system that then will tell any subscribed observers that it should unsubscribe as their will be no receiver in the namespaced stream today connectedclient keeps track of which clients are connected in memory with this it will not reliably track in a cluster which clients are connected we want it to become a stateful grain with the connectedclient grain being a singleton in a silo cluster the grain can move around when this occurs the state is lost the move can be abrupt and not cleanly shut down we want to leverage the isilostatusoracle to track the general silo status clients connecting are connected for the silo they are connecting to if silo goes down during onactivateasync we will get members of the cluster and remove all connected clients for silos that are no longer with us subscribe to silo changes subscribetosilostatusevents make getlastconnectedclientconnectionid for connected clients to be for specific silo client needs to know which silo it is connected to below are the flows we want to support ,1
2757,27530361320.0,IssuesEvent,2023-03-06 21:32:10,NVIDIA/spark-rapids,https://api.github.com/repos/NVIDIA/spark-rapids,closed,[FEA] Implement OOM retry framework,reliability,"**Is your feature request related to a problem? Please describe.**
Currently memory on the GPU is managed mostly by convention and by the GpuSemaphore. The GpuSemaphore allows a configured number of tasks onto the GPU at any one point in time, but it does not explicitly track or hand out memory to these tasks. By convention different execution paths will assume that they can use 4x the target batch size without any issues and also assume that the input batch size is <= the target batch size. There is also no way to request more memory if the operation knows that it will use more memory than is currently available.
**Describe the solution you'd like**
Create a GpuMemoryLeaseManager (GMLM), or update the GpuSemaphore, to provide the following APIs.
```
def requestLease(tc: TaskContext, amount: Long): MemoryLease
def getTotalLease(tc: TaskContext): Long
def getBaseLease(tc: TaskContext): Long // Not sure if this is needed getTotalLease probably is good enough.
def returnAllLeases(tc: TaskContext): Unit // release any outstanding leases
```
`MemoryLease` would be `AutoCloseable` and would return the memory to the GMLM when it is closed.
The GMLM is an arbitrator. It is not intended to actually allocate any memory, just to reduce the load on the GPU if multiple operations would need more memory than is currently available. This matters for cases like a join or a window operation, where today we cannot guarantee that the operation will stay under the 4x batch size limit. The goal is to eventually update all operators so that the limit is not set by convention, but is a value that can dynamically change if needed.
This is not intended to replace the efforts we have made for out of core algorithms. Those are still needed even on very large memory GPUs because CUDF still has column size limitations.
When a SparkPlan node wants to run on the GPU it will see what the current budget is by asking the GMLM. It will also estimate how much memory it will need to complete the current operation at hand. If the memory needed is more than the current lease, another lease for more memory will be requested. In order to make that request the SparkPlan node will need to make sure that all of the memory it is currently using is spillable.
When the GMLM receives a request and there is enough memory to fulfill the request it should provide a lease to the new task for the desired amount of memory ideally without blocking.
When there are more requests for memory than there is memory to fulfill the requests the GMLM will need to decide which tasks should be allowed to continue and which must wait. As this is not a simple problem to solve for the time being I would propose that we do a FIFO pattern, where the first task to ask for the memory is the first task to be allowed to run when there is enough memory available. As new tasks with new requests come in that cannot be satisfied all of their previously requested leases are made available to satisfy higher priority tasks. This is why all task memory must be made spillable before requesting a lease. When a lease is closed that memory will also be made available for pending tasks to use in FIFO/priority order. In the future we may have an explicit priority for a task, which would fit in well with this priority queue model.
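The FIFO arbitration described above can be sketched in miniature. This is a single-threaded Python model only — the real GMLM would have to suspend tasks whose requests cannot yet be satisfied and integrate with spilling — and the names are illustrative, not the proposed API:

```python
from collections import deque

class MemoryLeaseArbiter:
    """Toy model of FIFO memory-lease arbitration (illustrative only)."""

    def __init__(self, total):
        self.total = total
        self.available = total
        self.pending = deque()   # FIFO queue of (task_id, amount)
        self.leases = {}         # task_id -> total leased amount

    def request(self, task_id, amount):
        # A request larger than the GPU could ever satisfy is clamped
        # to the whole GPU (and should be logged loudly).
        amount = min(amount, self.total)
        # Grant immediately only if memory is free AND no earlier task
        # is already waiting (preserves FIFO ordering).
        if self.available >= amount and not self.pending:
            self.available -= amount
            self.leases[task_id] = self.leases.get(task_id, 0) + amount
            return True
        self.pending.append((task_id, amount))
        return False             # caller must wait, with memory spillable

    def release_all(self, task_id):
        self.available += self.leases.pop(task_id, 0)
        # Freed memory is handed to waiting tasks strictly in FIFO order.
        while self.pending and self.available >= self.pending[0][1]:
            tid, amt = self.pending.popleft()
            self.available -= amt
            self.leases[tid] = self.leases.get(tid, 0) + amt
```

The FIFO queue is also where an explicit per-task priority could later slot in, by replacing the deque with a priority queue.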
If the total requested memory is more than the GPU could ever satisfy, then GMLM should treat the request as if it is asking for the entire GPU, and warn loudly that it is doing so. This is an attempt to let the task succeed on the chance that it overestimated the amount of memory that would be needed.
A task will automatically request a lease for 4 * target batch size when it acquires the GPU Semaphore. When the semaphore is released it will also release that original lease. This amount is for backwards compatibility with existing code that has this assumption hard coded. In the future this amount may change. The GMLM should be queried to see what the amount is, rather than go off of the target batch size.",True,"[FEA] Implement OOM retry framework - **Is your feature request related to a problem? Please describe.**
Currently memory on the GPU is managed mostly by convention and by the GpuSemaphore. The GpuSemaphore allows a configured number of tasks onto the GPU at any one point in time, but it does not explicitly track or hand out memory to these tasks. By convention different execution paths will assume that they can use 4x the target batch size without any issues and also assume that the input batch size is <= the target batch size. There is also no way to request more memory if the operation knows that it will use more memory than is currently available.
**Describe the solution you'd like**
Create a GpuMemoryLeaseManager (GMLM), or update the GpuSemaphore, to provide the following APIs.
```
def requestLease(tc: TaskContext, amount: Long): MemoryLease
def getTotalLease(tc: TaskContext): Long
def getBaseLease(tc: TaskContext): Long // Not sure if this is needed getTotalLease probably is good enough.
def returnAllLeases(tc: TaskContext): Unit // release any outstanding leases
```
`MemoryLease` would be `AutoCloseable` and would return the memory to the GMLM when it is closed.
The GMLM is an arbitrator. It is not intended to actually allocate any memory, just to reduce the load on the GPU if multiple operations would need more memory than is currently available. This matters for cases like a join or a window operation, where today we cannot guarantee that the operation will stay under the 4x batch size limit. The goal is to eventually update all operators so that the limit is not set by convention, but is a value that can dynamically change if needed.
This is not intended to replace the efforts we have made for out of core algorithms. Those are still needed even on very large memory GPUs because CUDF still has column size limitations.
When a SparkPlan node wants to run on the GPU it will see what the current budget is by asking the GMLM. It will also estimate how much memory it will need to complete the current operation at hand. If the memory needed is more than the current lease, another lease for more memory will be requested. In order to make that request the SparkPlan node will need to make sure that all of the memory it is currently using is spillable.
When the GMLM receives a request and there is enough memory to fulfill the request it should provide a lease to the new task for the desired amount of memory ideally without blocking.
When there are more requests for memory than there is memory to fulfill the requests the GMLM will need to decide which tasks should be allowed to continue and which must wait. As this is not a simple problem to solve for the time being I would propose that we do a FIFO pattern, where the first task to ask for the memory is the first task to be allowed to run when there is enough memory available. As new tasks with new requests come in that cannot be satisfied all of their previously requested leases are made available to satisfy higher priority tasks. This is why all task memory must be made spillable before requesting a lease. When a lease is closed that memory will also be made available for pending tasks to use in FIFO/priority order. In the future we may have an explicit priority for a task, which would fit in well with this priority queue model.
If the total requested memory is more than the GPU could ever satisfy, then GMLM should treat the request as if it is asking for the entire GPU, and warn loudly that it is doing so. This is an attempt to let the task succeed on the chance that it overestimated the amount of memory that would be needed.
A task will automatically request a lease for 4 * target batch size when it acquires the GPU Semaphore. When the semaphore is released it will also release that original lease. This amount is for backwards compatibility with existing code that has this assumption hard coded. In the future this amount may change. The GMLM should be queried to see what the amount is, rather than go off of the target batch size.",1, implement oom retry framework is your feature request related to a problem please describe currently memory on the gpu is managed mostly by convention and by the gpusemaphore the gpusemaphore allows a configured number of tasks onto the gpu at any one point in time but it does not explicitly track or hand out memory to these tasks by convention different execution paths will assume that they can use the target batch size without any issues and also assume that the input batch size is the target batch size there is also no way to request more memory if the operation knows that it will use more memory than is currently available describe the solution you d like create a gpumemoryleasemanager gmlm or update the gpusemaphore to provide the following apis def requestlease tc taskcontext amount long memorylease def gettotallease tc taskcontext long def getbaselease tc taskcontext long not sure if this is needed gettotallease probably is good enough def returnallleases tc taskcontext unit release any outstanding leases memorylease would be autoclosable and would return the memory to the gmlm when it is closed the gmlm is an arbitrator it is not intended to actually allocate any memory just to reduce the load on the gpu if multiple operations would need more memory than is currently available so for cases like a join or a window operation where today we cannot guarantee that it will be under the batch size limit the goal is to eventually update all operators so that the limit is not by convention but it a set value that can dynamically change if needed this is not 
intended to replace the efforts we have made for out of core algorithms those are still needed even on very large memory gpus because cudf still has column size limitations when a sparkplan node wants to run on the gpu it will see what the current budget is by asking the gmlm it will also estimate how much memory it will need to complete the current operation at hand if the memory needed is more than current lease another lease on more memory will be requested in order to make that request the sparkplan node will need to make sure that all of the memory it is currently using is spillable when the gmlm receives a request and there is enough memory to fulfill the request it should provide a lease to the new task for the desired amount of memory ideally without blocking when there are more requests for memory than there is memory to fulfill the requests the gmlm will need to decide which tasks should be allowed to continue and which must wait as this is not a simple problem to solve for the time being i would propose that we do a fifo pattern where the first task to ask for the memory is the first task to be allowed to run when there is enough memory available as new tasks with new requests come in that cannot be satisfied all of their previously requested leases are made available to satisfy higher priority tasks this is why all task memory must be made spillable before requesting a lease when a lease is closed that memory will also be made available for pending tasks to use in fifo priority order in the future we may have an explicit priority for a task which would fit in well with this priority queue model if the total requested memory is more than the gpu could ever satisfy then gmlm should treat the request as if it is asking for the entire gpu and warn loudly that it is doing so this is an attempt to let the task succeed on the chance that it overestimated the amount of memory that would be needed a task will automatically request a lease for target batch size 
when it acquires the gpu semaphore when the semaphore is released it will also release that original lease this amount is for backwards compatibility with existing code that has this assumption hard coded in the future this amount may change the gmlm should be queried to see what the amount is rather than go off of the target batch size ,1
1115,13174759326.0,IssuesEvent,2020-08-11 23:24:29,microsoft/pxt-arcade,https://api.github.com/repos/microsoft/pxt-arcade,closed,5 minutes of tilemap work lost due to bug,bug next-release reliability streams tilemap,"https://youtu.be/QrCsFRg5ArA?t=594
For some reason, all the work done in the tilemap editor was lost. There isn't a consistent repro but this has happened >3 times on stream.
Ignore the very strange audio.",True,"5 minutes of tilemap work lost due to bug - https://youtu.be/QrCsFRg5ArA?t=594
For some reason, all the work done in the tilemap editor was lost. There isn't a consistent repro but this has happened >3 times on stream.
Ignore the very strange audio.",1, minutes of tilemap work lost due to bug for some reason all the work done in the tilemap editor was lost there isn t a consistent repro but this has happened times on stream ignore the very strange audio ,1
210606,23759161904.0,IssuesEvent,2022-09-01 07:21:35,elastic/kibana,https://api.github.com/repos/elastic/kibana,opened,[Security Solution] Empty execution results is showing under execution results when rule is created with non existing index,bug triage_needed impact:medium Team: SecuritySolution Team:Detection Rules v8.5.0,"**Describe the bug:**
Empty execution results are shown under Execution results when a rule is created with a non-existent index and warnings are generated.
**Build Details:**
```
VERSION: 8.5.0
BUILD: 55925
COMMIT: dc43193d73c5869335a239c7012528bb1fffd509
```
**Pre-conditions:**
1. Elasticsearch should be up and running
2. Kibana should be up and running
**Steps to Reproduce:**
1. Navigate to Security-->Manage-->Rules.
2. Click on create rule.
3. Select custom query rule.
4. Enter any index pattern which is not created.
5. Enter all the other fields and create the rule.
6. Check the Execution results.
**Expected Result**
Empty values should not be present under execution results.
**Actual Result**
Empty values are present under execution results.
**Screenshots:**


",True,"[Security Solution] Empty execution results is showing under execution results when rule is created with non existing index - **Describe the bug:**
Empty execution results are shown under Execution results when a rule is created with a non-existent index and warnings are generated.
**Build Details:**
```
VERSION: 8.5.0
BUILD: 55925
COMMIT: dc43193d73c5869335a239c7012528bb1fffd509
```
**Pre-conditions:**
1. Elasticsearch should be up and running
2. Kibana should be up and running
**Steps to Reproduce:**
1. Navigate to Security-->Manage-->Rules.
2. Click on create rule.
3. Select custom query rule.
4. Enter any index pattern which is not created.
5. Enter all the other fields and create the rule.
6. Check the Execution results.
**Expected Result**
Empty values should not be present under execution results.
**Actual Result**
Empty values are present under execution results.
**Screenshots:**


",0, empty execution results is showing under execution results when rule is created with non existing index describe the bug empty execution results is showing under execution results when rule is created with non existing index and warnings are coming build details version build commit pre conditions elasticsearch should be up and running kibana should be up and running steps to reproduce navigate to security manage rules click on create rule select custom query rule enter any index pattern which is not created enter all the other fields and create the rule check the execution results expected result empty values should not be present under execution results actual result empty values are present under execution results screenshots ,0
23649,16492566923.0,IssuesEvent,2021-05-25 06:41:32,MarinhoGabriel/creditmodel.clj,https://api.github.com/repos/MarinhoGabriel/creditmodel.clj,opened,Create module `communication` ,enhancement infrastructure,The module `communication` is going to have all Kafka configurations.,1.0,Create module `communication` - The module `communication` is going to have all Kafka configurations.,0,create module communication the module communication is going to have all kafka configurations ,0
880,11348750585.0,IssuesEvent,2020-01-24 01:37:06,crossplaneio/crossplane,https://api.github.com/repos/crossplaneio/crossplane,closed,Decrease the number of updates from Reconcile loops,performance reliability,"@ichekrygin brings up a good point in https://github.com/crossplaneio/crossplane/issues/208, and I think I agree now too that we can potentially call `Update` less frequently from our controllers.
Copied here for readability:
General feedback: the more I think about calling r.Update multiple times after every property change in the routine, vs. calling it a single time - the more I think the latter is arguably a ""better"" approach.
For me it boils down to the following:
* calling it a single time is more efficient (fewer update calls to the API, less object reconcile batching in controller-runtime)
* calling it a single time is more straightforward in terms of error-handling branching logic.
There is a perception that if we set an object property and don't follow by the immediate update, there is a chance that changes could be lost. While in theory, there is a chance of that, in practice all our reconcile events end with r.Update for one reason or another (r.fail(...) - will persist all the object's property changes).",True,"Decrease the number of updates from Reconcile loops - @ichekrygin brings up a good point in https://github.com/crossplaneio/crossplane/issues/208, and I think I agree now too that we can potentially call `Update` less frequently from our controllers.
Copied here for readability:
General feedback: the more I think about calling r.Update multiple times after every property change in the routine, vs. calling it a single time - the more I think the latter is arguably a ""better"" approach.
For me it boils down to the following:
* calling it a single time is more efficient (fewer update calls to the API, less object reconcile batching in controller-runtime)
* calling it a single time is more straightforward in terms of error-handling branching logic.
There is a perception that if we set an object property and don't follow by the immediate update, there is a chance that changes could be lost. While in theory, there is a chance of that, in practice all our reconcile events end with r.Update for one reason or another (r.fail(...) - will persist all the object's property changes).",1,decrease the number of updates from reconcile loops ichekrygin brings up a good point in and i think i agree now too that we can potentially call update less frequently from our controllers copied here for readability general feedback the more i think about calling r update multiple times after every property change in the routine vs calling it a single time the more i think the latter is arguably a better approach for me it boils down to the following calling it a single time is more efficient less updates calls to api less object reconcile batching in controller runtime calling it a single time is more straight forward in terms of error handing branching logic there is a perception that if we set an object property and don t follow by the immediate update there is a chance that changes could be lost while in theory there is a chance of that in practice all our reconcile events end with r update for one reason or another r fail will persist all the object s property changes ,1
146428,23068322151.0,IssuesEvent,2022-07-25 15:39:05,WordPress/gutenberg,https://api.github.com/repos/WordPress/gutenberg,opened,"Template List: Consider making ""Add New"" more prominent",[Type] Enhancement Needs Design Feedback [Feature] Site Editor,"## What problem does this address?
This feedback came in during the [fifteenth call for testing for the FSE Outreach Program:](https://make.wordpress.org/test/2022/07/11/fse-program-testing-call-15-category-customization/#comment-2647)
>It took me quite a while to find the Add New button and would have easily missed this functionality all together if not prompted to find the add new template. Super glad I know about this now, very cool feature.
## What is your proposed solution?
The same person who gave feedback shared the following when asked what might help from their perspective:
>I think if the button, in addition to being in the top bar, was also at the bottom of the list or at the top of the list of templates, then it would be more discoverable.
Here's a quick sense of what this might look like, reusing the current add new button:
My biggest question is whether we want to make this action more discoverable from a UX perspective. How often will folks be creating new templates? Do we want to optimize for this? Tagging in @WordPress/gutenberg-design as a result.
",1.0,"Template List: Consider making ""Add New"" more prominent - ## What problem does this address?
This feedback came in during the [fifteenth call for testing for the FSE Outreach Program:](https://make.wordpress.org/test/2022/07/11/fse-program-testing-call-15-category-customization/#comment-2647)
>It took me quite a while to find the Add New button and would have easily missed this functionality all together if not prompted to find the add new template. Super glad I know about this now, very cool feature.
## What is your proposed solution?
The same person who gave feedback shared the following when asked what might help from their perspective:
>I think if the button, in addition to being in the top bar, was also at the bottom of the list or at the top of the list of templates, then it would be more discoverable.
Here's a quick sense of what this might look like, reusing the current add new button:
My biggest question is whether we want to make this action more discoverable from a UX perspective. How often will folks be creating new templates? Do we want to optimize for this? Tagging in @WordPress/gutenberg-design as a result.
",0,template list consider making add new more prominent what problem does this address this feedback came in during the it took me quite a while to find the add new button and would have easily missed this functionality all together if not prompted to find the add new template super glad i know about this now very cool feature what is your proposed solution the same person who gave feedback shared the following when asked what might help from their perspective i think if the button in addition to being in the top bar was also at the bottom of the list or at the top of the list of templates then it would be more discoverable here s a quick sense of what this might look like reusing the current add new button img width alt screen shot at am src my biggest question is whether we want to make this action more discoverable from a ux perspective how often will folks be creating new templates do we want to optimize for this tagging in wordpress gutenberg design as a result ,0
2206,24149918713.0,IssuesEvent,2022-09-21 22:51:53,pulumi/pulumi,https://api.github.com/repos/pulumi/pulumi,closed,[sdk/nodejs] @pulumi/pulumi not interoperable with ESModules,kind/bug area/sdks impact/reliability language/javascript size/S,"### What happened?
The NodeJS SDK's `tsconfig` file does not enable interoperability with esmodules. This introduces [a couple of flawed assumptions regarding compatibility](https://www.typescriptlang.org/tsconfig#esModuleInterop).
As a result, the `@pulumi/pulumi` package could _**potentially**_ introduce subtle bugs in Pulumi programs. It's unclear how sound the errors I'm seeing are (it's possible TSC's interop analysis is overly conservative), but `tsc` reports them as guaranteed. I find that spurious since this code has been running perfectly well forever.
### Steps to reproduce
1. `cd sdk/nodejs/`
2. Edit tsconfig.json to set the compiler option `esModuleInterop: true`.
3. `make ensure; make build` and observe the errors.
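For reference, step 2 amounts to adding this flag under `compilerOptions` in `sdk/nodejs/tsconfig.json` (fragment shown for illustration only; the file's other existing options are omitted here):

```json
{
  "compilerOptions": {
    "esModuleInterop": true
  }
}
```

Enabling `esModuleInterop` makes TSC flag namespace-style imports (`import * as x from "..."`) of CommonJS modules that are then called or constructed, which is exactly what the errors below report.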
### Expected Behavior
`@pulumi/pulumi` should be interoperable with ESModules to support the widest variety of Pulumi programs.
### Actual Behavior
TSC refuses to build, since the NodeJS SDK and runtime are not interoperable with ESModules.
```
make ensure; and make build; and make install
Checking for yarn ................ ✓
Checking for node ................ ✓
BUILD:
Checking for yarn ................ ✓
Checking for node ................ ✓
yarn run tsc
yarn run v1.22.19
$ /Users/robbiemckinstry/workspace/pulumi/pulumi/sdk/nodejs/node_modules/.bin/tsc
automation/cmd.ts:59:52 - error TS2349: This expression is not callable.
Type '{ default: { (file: string, arguments?: readonly string[] | undefined, options?: Options | undefined): ExecaChildProcess; (file: string, arguments?: readonly string[] | undefined, options?: Options<...> | undefined): ExecaChildProcess<...>; (file: string, options?: Options<...> | undefined): ExecaChi...' has no call signatures.
59 const { stdout, stderr, exitCode } = await execa(""pulumi"", args, { env, cwd });
~~~~~
automation/cmd.ts:15:1
15 import * as execa from ""execa"";
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Type originates at this import. A namespace-style import cannot be called or constructed, and will cause a failure at runtime. Consider using a default import or import require here instead.
automation/stack.ts:119:34 - error TS2351: This expression is not constructable.
Type 'typeof TailFile' has no construct signatures.
119 const eventLogTail = new TailFile(logPath, { startPos: 0, pollFileIntervalMs: 200 })
~~~~~~~~
automation/stack.ts:22:1
22 import * as TailFile from ""@logdna/tail-file"";
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Type originates at this import. A namespace-style import cannot be called or constructed, and will cause a failure at runtime. Consider using a default import or import require here instead.
automation/stack.ts:120:32 - error TS7006: Parameter 'err' implicitly has an 'any' type.
120 .on(""tail_error"", (err) => {
~~~
cmd/run-policy-pack/index.ts:95:39 - error TS2349: This expression is not callable.
Type '{ default: { (args?: string[] | undefined, opts?: Opts | undefined): ParsedArgs; (args?: string[] | undefined, opts?: Opts | undefined): T & ParsedArgs; (args?: string[] | undefined, opts?: Opts | undefined): T; }; }' has no call signatures.
95 const argv: minimist.ParsedArgs = minimist(args, {});
~~~~~~~~
cmd/run-policy-pack/index.ts:81:1
81 import * as minimist from ""minimist"";
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Type originates at this import. A namespace-style import cannot be called or constructed, and will cause a failure at runtime. Consider using a default import or import require here instead.
cmd/run/index.ts:116:39 - error TS2349: This expression is not callable.
Type '{ default: { (args?: string[] | undefined, opts?: Opts | undefined): ParsedArgs; (args?: string[] | undefined, opts?: Opts | undefined): T & ParsedArgs; (args?: string[] | undefined, opts?: Opts | undefined): T; }; }' has no call signatures.
116 const argv: minimist.ParsedArgs = minimist(args, {
~~~~~~~~
cmd/run/index.ts:86:1
86 import * as minimist from ""minimist"";
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Type originates at this import. A namespace-style import cannot be called or constructed, and will cause a failure at runtime. Consider using a default import or import require here instead.
Found 5 errors.
error Command failed with exit code 2.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
make: *** [build_package] Error 2
```
### Output of `pulumi about`
_No response_
### Additional context
This is an easy fix. I will implement it.
### Contributing
Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).
",True,"[sdk/nodejs] @pulumi/pulumi not interoperable with ESModules - ### What happened?
The NodeJS SDK's `tsconfig` file does not enable interoperability with ESModules, which leaves in place [a couple of flawed assumptions regarding compatibility](https://www.typescriptlang.org/tsconfig#esModuleInterop).
As a result, the `@pulumi/pulumi` package could _**potentially**_ introduce subtle bugs in Pulumi programs. It's unclear how sound the reported errors are (it's possible TSC's interop analysis is overly conservative), but `tsc` reports them as guaranteed failures. That seems spurious, since this code has been running perfectly well forever.
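For reference, the fix the compiler suggests amounts to enabling the interop flag and changing the import style. A minimal sketch (shown as a fragment; the `minimist` call site mirrors the ones in the error output below):

```typescript
// tsconfig.json fragment — enables the interop shims:
// {
//   "compilerOptions": {
//     "esModuleInterop": true
//   }
// }

// Before (namespace-style import; not callable under esModuleInterop):
//   import * as minimist from "minimist";

// After — either a default import:
//   import minimist from "minimist";
// or an import-require, which is valid with or without the flag:
//   import minimist = require("minimist");

// Call sites such as `minimist(args, {})` then type-check again.
```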
### Steps to reproduce
1. `cd sdk/nodejs/`
2. Edit tsconfig.json to set the compiler option `esModuleInterop: true`.
3. `make ensure; make build` and observe the errors.
### Expected Behavior
`@pulumi/pulumi` should be interoperable with ESModules to support the widest variety of Pulumi programs.
### Actual Behavior
TSC refuses to build, since the NodeJS SDK and runtime are not interoperable with ESModules.
```
make ensure; and make build; and make install
Checking for yarn ................ ✓
Checking for node ................ ✓
BUILD:
Checking for yarn ................ ✓
Checking for node ................ ✓
yarn run tsc
yarn run v1.22.19
$ /Users/robbiemckinstry/workspace/pulumi/pulumi/sdk/nodejs/node_modules/.bin/tsc
automation/cmd.ts:59:52 - error TS2349: This expression is not callable.
Type '{ default: { (file: string, arguments?: readonly string[] | undefined, options?: Options | undefined): ExecaChildProcess; (file: string, arguments?: readonly string[] | undefined, options?: Options<...> | undefined): ExecaChildProcess<...>; (file: string, options?: Options<...> | undefined): ExecaChi...' has no call signatures.
59 const { stdout, stderr, exitCode } = await execa(""pulumi"", args, { env, cwd });
~~~~~
automation/cmd.ts:15:1
15 import * as execa from ""execa"";
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Type originates at this import. A namespace-style import cannot be called or constructed, and will cause a failure at runtime. Consider using a default import or import require here instead.
automation/stack.ts:119:34 - error TS2351: This expression is not constructable.
Type 'typeof TailFile' has no construct signatures.
119 const eventLogTail = new TailFile(logPath, { startPos: 0, pollFileIntervalMs: 200 })
~~~~~~~~
automation/stack.ts:22:1
22 import * as TailFile from ""@logdna/tail-file"";
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Type originates at this import. A namespace-style import cannot be called or constructed, and will cause a failure at runtime. Consider using a default import or import require here instead.
automation/stack.ts:120:32 - error TS7006: Parameter 'err' implicitly has an 'any' type.
120 .on(""tail_error"", (err) => {
~~~
cmd/run-policy-pack/index.ts:95:39 - error TS2349: This expression is not callable.
Type '{ default: { (args?: string[] | undefined, opts?: Opts | undefined): ParsedArgs; (args?: string[] | undefined, opts?: Opts | undefined): T & ParsedArgs; (args?: string[] | undefined, opts?: Opts | undefined): T; }; }' has no call signatures.
95 const argv: minimist.ParsedArgs = minimist(args, {});
~~~~~~~~
cmd/run-policy-pack/index.ts:81:1
81 import * as minimist from ""minimist"";
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Type originates at this import. A namespace-style import cannot be called or constructed, and will cause a failure at runtime. Consider using a default import or import require here instead.
cmd/run/index.ts:116:39 - error TS2349: This expression is not callable.
Type '{ default: { (args?: string[] | undefined, opts?: Opts | undefined): ParsedArgs; (args?: string[] | undefined, opts?: Opts | undefined): T & ParsedArgs; (args?: string[] | undefined, opts?: Opts | undefined): T; }; }' has no call signatures.
116 const argv: minimist.ParsedArgs = minimist(args, {
~~~~~~~~
cmd/run/index.ts:86:1
86 import * as minimist from ""minimist"";
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Type originates at this import. A namespace-style import cannot be called or constructed, and will cause a failure at runtime. Consider using a default import or import require here instead.
Found 5 errors.
error Command failed with exit code 2.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
make: *** [build_package] Error 2
```
### Output of `pulumi about`
_No response_
### Additional context
This is an easy fix. I will implement it.
### Contributing
Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).
",1, pulumi pulumi not interoperable with esmodules what happened the nodejs sdk s tsconfig file does not enable interoperability with esmodules this introduces as a result the pulumi pulumi package could potentially introduce subtle bugs in pulumi programs it s unclear how sound the errors i m seeing are its possible tsc s interop analysis is overly conservative but tsc reports them as guaranteed i find that spurious since this code has been running perfectly well forever steps to reproduce cd sdk nodejs edit tsconfig json to set the compiler option esmoduleinterop true make ensure make build and observe the errors expected behavior pulumi pulumi should be interoperable with esmodules to support the widest variety of pulumi programs actual behavior tsc refuses to build since the nodejs sdk and runtime are not interoperable with esbuild make ensure and make build and make install checking for yarn ✓ checking for node ✓ build checking for yarn ✓ checking for node ✓ yarn run tsc yarn run users robbiemckinstry workspace pulumi pulumi sdk nodejs node modules bin tsc automation cmd ts error this expression is not callable type default file string arguments readonly string undefined options options undefined execachildprocess file string arguments readonly string undefined options options undefined execachildprocess file string options options undefined execachi has no call signatures const stdout stderr exitcode await execa pulumi args env cwd automation cmd ts import as execa from execa type originates at this import a namespace style import cannot be called or constructed and will cause a failure at runtime consider using a default import or import require here instead automation stack ts error this expression is not constructable type typeof tailfile has no construct signatures const eventlogtail new tailfile logpath startpos pollfileintervalms automation stack ts import as tailfile from logdna tail file type originates at this import a namespace style import cannot 
be called or constructed and will cause a failure at runtime consider using a default import or import require here instead automation stack ts error parameter err implicitly has an any type on tail error err cmd run policy pack index ts error this expression is not callable type default args string undefined opts opts undefined parsedargs args string undefined opts opts undefined t parsedargs args string undefined opts opts undefined t has no call signatures const argv minimist parsedargs minimist args cmd run policy pack index ts import as minimist from minimist type originates at this import a namespace style import cannot be called or constructed and will cause a failure at runtime consider using a default import or import require here instead cmd run index ts error this expression is not callable type default args string undefined opts opts undefined parsedargs args string undefined opts opts undefined t parsedargs args string undefined opts opts undefined t has no call signatures const argv minimist parsedargs minimist args cmd run index ts import as minimist from minimist type originates at this import a namespace style import cannot be called or constructed and will cause a failure at runtime consider using a default import or import require here instead found errors error command failed with exit code info visit for documentation about this command make error output of pulumi about no response additional context this is an easy fix i will implement it contributing vote on this issue by adding a 👍 reaction to contribute a fix for this issue leave a comment and link to your pull request if you ve opened one already ,1
179400,21567154042.0,IssuesEvent,2022-05-02 01:04:48,ilan-WS/m3,https://api.github.com/repos/ilan-WS/m3,opened,"CVE-2022-0536 (Medium) detected in follow-redirects-1.13.0.tgz, follow-redirects-1.5.10.tgz",security vulnerability,"## CVE-2022-0536 - Medium Severity Vulnerability
Vulnerable Libraries - follow-redirects-1.13.0.tgz, follow-redirects-1.5.10.tgz
Direct dependency fix Resolution (react-scripts): 1.0.11
Fix Resolution (follow-redirects): 1.14.8
Direct dependency fix Resolution (axios): 0.20.0-0
***
- [ ] Check this box to open an automated fix PR
",0,cve medium detected in follow redirects tgz follow redirects tgz cve medium severity vulnerability vulnerable libraries follow redirects tgz follow redirects tgz follow redirects tgz http and https modules that follow redirects library home page a href path to dependency file src ctl ui package json path to vulnerable library src ctl ui node modules follow redirects dependency hierarchy react scripts tgz root library webpack dev server tgz http proxy middleware tgz http proxy tgz x follow redirects tgz vulnerable library follow redirects tgz http and https modules that follow redirects library home page a href path to dependency file src ctl ui package json path to vulnerable library src ctl ui node modules follow redirects dependency hierarchy axios tgz root library x follow redirects tgz vulnerable library found in base branch master vulnerability details exposure of sensitive information to an unauthorized actor in npm follow redirects prior to publish date url a href cvss score details base score metrics exploitability metrics attack vector network attack complexity high privileges required none user interaction none scope unchanged impact metrics confidentiality impact high integrity impact none availability impact none for more information on scores click a href suggested fix type upgrade version origin a href release date fix resolution follow redirects direct dependency fix resolution react scripts fix resolution follow redirects direct dependency fix resolution axios check this box to open an automated fix pr isopenpronvulnerability false ispackagebased true isdefaultbranch true packages istransitivedependency false dependencytree react scripts isminimumfixversionavailable true minimumfixversion isbinary false packagetype javascript node js packagename axios packageversion packagefilepaths istransitivedependency false dependencytree axios isminimumfixversionavailable true minimumfixversion isbinary false basebranches vulnerabilityidentifier cve 
vulnerabilitydetails exposure of sensitive information to an unauthorized actor in npm follow redirects prior to vulnerabilityurl ,0
607,8869558793.0,IssuesEvent,2019-01-11 06:01:43,dotnet/roslyn,https://api.github.com/repos/dotnet/roslyn,closed,Renaming an aliased type crashes the IDE,Area-IDE Bug Tenet-Reliability,"**Version Used**: VS 15.8.8 (Roslyn 2.9.0.63208)
**Steps to Reproduce**:
1. Copy the code below into a C# file.
2. Try to rename the symbol ""X"".
```c#
using X = System.Int32;
```
**Expected Behavior**: The symbol can be renamed.
**Actual Behavior**: The rename window appears and the symbol is highlighted, then the IDE hangs and crashes with a FailFast call.
The top of the stack trace looks very similar to that in #30903.
Stack trace, taken from Event Viewer:
```
System.NotImplementedException: The method or operation is not implemented.
at Microsoft.CodeAnalysis.CSharp.Simplification.CSharpSimplificationService.Expander.VisitSimpleName(SimpleNameSyntax rewrittenSimpleName, SimpleNameSyntax originalSimpleName)
at Microsoft.CodeAnalysis.CSharp.Simplification.CSharpSimplificationService.Expander.VisitIdentifierName(IdentifierNameSyntax node)
at Microsoft.CodeAnalysis.CSharp.Syntax.IdentifierNameSyntax.Accept[TResult](CSharpSyntaxVisitor`1 visitor)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.Simplification.CSharpSimplificationService.Expander.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitCastExpression(CastExpressionSyntax node)
at Microsoft.CodeAnalysis.CSharp.Syntax.CastExpressionSyntax.Accept[TResult](CSharpSyntaxVisitor`1 visitor)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.Simplification.CSharpSimplificationService.Expander.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitEqualsValueClause(EqualsValueClauseSyntax node)
at Microsoft.CodeAnalysis.CSharp.Syntax.EqualsValueClauseSyntax.Accept[TResult](CSharpSyntaxVisitor`1 visitor)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.Simplification.CSharpSimplificationService.Expander.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitVariableDeclarator(VariableDeclaratorSyntax node)
at Microsoft.CodeAnalysis.CSharp.Syntax.VariableDeclaratorSyntax.Accept[TResult](CSharpSyntaxVisitor`1 visitor)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.Simplification.CSharpSimplificationService.Expander.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitListElement[TNode](TNode node)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitList[TNode](SeparatedSyntaxList`1 list)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitVariableDeclaration(VariableDeclarationSyntax node)
at Microsoft.CodeAnalysis.CSharp.Syntax.VariableDeclarationSyntax.Accept[TResult](CSharpSyntaxVisitor`1 visitor)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.Simplification.CSharpSimplificationService.Expander.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitLocalDeclarationStatement(LocalDeclarationStatementSyntax node)
at Microsoft.CodeAnalysis.CSharp.Syntax.LocalDeclarationStatementSyntax.Accept[TResult](CSharpSyntaxVisitor`1 visitor)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.Simplification.CSharpSimplificationService.Expander.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.Simplification.CSharpSimplificationService.Expand(SyntaxNode node, SemanticModel semanticModel, SyntaxAnnotation annotationForReplacedAliasIdentifier, Func`2 expandInsideNode, Boolean expandParameter, CancellationToken cancellationToken)
at Microsoft.CodeAnalysis.CSharp.Rename.CSharpRenameConflictLanguageService.RenameRewriter.Complexify(SyntaxNode originalNode, SyntaxNode newNode)
at Microsoft.CodeAnalysis.CSharp.Rename.CSharpRenameConflictLanguageService.RenameRewriter.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitListElement[TNode](TNode node)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitList[TNode](SyntaxList`1 list)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitBlock(BlockSyntax node)
at Microsoft.CodeAnalysis.CSharp.Syntax.BlockSyntax.Accept[TResult](CSharpSyntaxVisitor`1 visitor)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.Rename.CSharpRenameConflictLanguageService.RenameRewriter.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitMethodDeclaration(MethodDeclarationSyntax node)
at Microsoft.CodeAnalysis.CSharp.Syntax.MethodDeclarationSyntax.Accept[TResult](CSharpSyntaxVisitor`1 visitor)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.Rename.CSharpRenameConflictLanguageService.RenameRewriter.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitListElement[TNode](TNode node)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitList[TNode](SyntaxList`1 list)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitClassDeclaration(ClassDeclarationSyntax node)
at Microsoft.CodeAnalysis.CSharp.Syntax.ClassDeclarationSyntax.Accept[TResult](CSharpSyntaxVisitor`1 visitor)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.Rename.CSharpRenameConflictLanguageService.RenameRewriter.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitListElement[TNode](TNode node)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitList[TNode](SyntaxList`1 list)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitCompilationUnit(CompilationUnitSyntax node)
at Microsoft.CodeAnalysis.CSharp.Syntax.CompilationUnitSyntax.Accept[TResult](CSharpSyntaxVisitor`1 visitor)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.Rename.CSharpRenameConflictLanguageService.RenameRewriter.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.Rename.CSharpRenameConflictLanguageService.AnnotateAndRename(RenameRewriterParameters parameters)
at Microsoft.CodeAnalysis.Rename.ConflictEngine.ConflictResolver.Session.d__26.MoveNext()
```",True,"Renaming an aliased type crashes the IDE - **Version Used**: VS 15.8.8 (Roslyn 2.9.0.63208)
**Steps to Reproduce**:
1. Copy the code below into a C# file.
2. Try to rename the symbol ""X"".
```c#
using X = System.Int32;
```
**Expected Behavior**: The symbol can be renamed.
**Actual Behavior**: The rename window appears and the symbol is highlighted, then the IDE hangs and crashes with a FailFast call.
The top of the stack trace looks very similar to that in #30903.
Stack trace, taken from Event Viewer:
```
System.NotImplementedException: The method or operation is not implemented.
at Microsoft.CodeAnalysis.CSharp.Simplification.CSharpSimplificationService.Expander.VisitSimpleName(SimpleNameSyntax rewrittenSimpleName, SimpleNameSyntax originalSimpleName)
at Microsoft.CodeAnalysis.CSharp.Simplification.CSharpSimplificationService.Expander.VisitIdentifierName(IdentifierNameSyntax node)
at Microsoft.CodeAnalysis.CSharp.Syntax.IdentifierNameSyntax.Accept[TResult](CSharpSyntaxVisitor`1 visitor)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.Simplification.CSharpSimplificationService.Expander.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitCastExpression(CastExpressionSyntax node)
at Microsoft.CodeAnalysis.CSharp.Syntax.CastExpressionSyntax.Accept[TResult](CSharpSyntaxVisitor`1 visitor)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.Simplification.CSharpSimplificationService.Expander.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitEqualsValueClause(EqualsValueClauseSyntax node)
at Microsoft.CodeAnalysis.CSharp.Syntax.EqualsValueClauseSyntax.Accept[TResult](CSharpSyntaxVisitor`1 visitor)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.Simplification.CSharpSimplificationService.Expander.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitVariableDeclarator(VariableDeclaratorSyntax node)
at Microsoft.CodeAnalysis.CSharp.Syntax.VariableDeclaratorSyntax.Accept[TResult](CSharpSyntaxVisitor`1 visitor)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.Simplification.CSharpSimplificationService.Expander.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitListElement[TNode](TNode node)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitList[TNode](SeparatedSyntaxList`1 list)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitVariableDeclaration(VariableDeclarationSyntax node)
at Microsoft.CodeAnalysis.CSharp.Syntax.VariableDeclarationSyntax.Accept[TResult](CSharpSyntaxVisitor`1 visitor)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.Simplification.CSharpSimplificationService.Expander.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitLocalDeclarationStatement(LocalDeclarationStatementSyntax node)
at Microsoft.CodeAnalysis.CSharp.Syntax.LocalDeclarationStatementSyntax.Accept[TResult](CSharpSyntaxVisitor`1 visitor)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.Simplification.CSharpSimplificationService.Expander.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.Simplification.CSharpSimplificationService.Expand(SyntaxNode node, SemanticModel semanticModel, SyntaxAnnotation annotationForReplacedAliasIdentifier, Func`2 expandInsideNode, Boolean expandParameter, CancellationToken cancellationToken)
at Microsoft.CodeAnalysis.CSharp.Rename.CSharpRenameConflictLanguageService.RenameRewriter.Complexify(SyntaxNode originalNode, SyntaxNode newNode)
at Microsoft.CodeAnalysis.CSharp.Rename.CSharpRenameConflictLanguageService.RenameRewriter.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitListElement[TNode](TNode node)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitList[TNode](SyntaxList`1 list)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitBlock(BlockSyntax node)
at Microsoft.CodeAnalysis.CSharp.Syntax.BlockSyntax.Accept[TResult](CSharpSyntaxVisitor`1 visitor)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.Rename.CSharpRenameConflictLanguageService.RenameRewriter.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitMethodDeclaration(MethodDeclarationSyntax node)
at Microsoft.CodeAnalysis.CSharp.Syntax.MethodDeclarationSyntax.Accept[TResult](CSharpSyntaxVisitor`1 visitor)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.Rename.CSharpRenameConflictLanguageService.RenameRewriter.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitListElement[TNode](TNode node)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitList[TNode](SyntaxList`1 list)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitClassDeclaration(ClassDeclarationSyntax node)
at Microsoft.CodeAnalysis.CSharp.Syntax.ClassDeclarationSyntax.Accept[TResult](CSharpSyntaxVisitor`1 visitor)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.Rename.CSharpRenameConflictLanguageService.RenameRewriter.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitListElement[TNode](TNode node)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitList[TNode](SyntaxList`1 list)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.VisitCompilationUnit(CompilationUnitSyntax node)
at Microsoft.CodeAnalysis.CSharp.Syntax.CompilationUnitSyntax.Accept[TResult](CSharpSyntaxVisitor`1 visitor)
at Microsoft.CodeAnalysis.CSharp.CSharpSyntaxRewriter.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.Rename.CSharpRenameConflictLanguageService.RenameRewriter.Visit(SyntaxNode node)
at Microsoft.CodeAnalysis.CSharp.Rename.CSharpRenameConflictLanguageService.AnnotateAndRename(RenameRewriterParameters parameters)
at Microsoft.CodeAnalysis.Rename.ConflictEngine.ConflictResolver.Session.d__26.MoveNext()
```",1,renaming an aliased type crashes the ide version used vs roslyn steps to reproduce copy the code below into a c file try to rename the symbol x c using x system expected behavior the symbol can be renamed actual behavior the rename window shows and the symbol is highlighed then the ide hangs and crashes with a failfast call the top of the stack trace looks very similar to that in stack trace taken from event viewer system notimplementedexception the method or operation is not implemented at microsoft codeanalysis csharp simplification csharpsimplificationservice expander visitsimplename simplenamesyntax rewrittensimplename simplenamesyntax originalsimplename at microsoft codeanalysis csharp simplification csharpsimplificationservice expander visitidentifiername identifiernamesyntax node at microsoft codeanalysis csharp syntax identifiernamesyntax accept csharpsyntaxvisitor visitor at microsoft codeanalysis csharp csharpsyntaxrewriter visit syntaxnode node at microsoft codeanalysis csharp simplification csharpsimplificationservice expander visit syntaxnode node at microsoft codeanalysis csharp csharpsyntaxrewriter visitcastexpression castexpressionsyntax node at microsoft codeanalysis csharp syntax castexpressionsyntax accept csharpsyntaxvisitor visitor at microsoft codeanalysis csharp csharpsyntaxrewriter visit syntaxnode node at microsoft codeanalysis csharp simplification csharpsimplificationservice expander visit syntaxnode node at microsoft codeanalysis csharp csharpsyntaxrewriter visitequalsvalueclause equalsvalueclausesyntax node at microsoft codeanalysis csharp syntax equalsvalueclausesyntax accept csharpsyntaxvisitor visitor at microsoft codeanalysis csharp csharpsyntaxrewriter visit syntaxnode node at microsoft codeanalysis csharp simplification csharpsimplificationservice expander visit syntaxnode node at microsoft codeanalysis csharp csharpsyntaxrewriter visitvariabledeclarator variabledeclaratorsyntax node at microsoft codeanalysis csharp syntax 
variabledeclaratorsyntax accept csharpsyntaxvisitor visitor at microsoft codeanalysis csharp csharpsyntaxrewriter visit syntaxnode node at microsoft codeanalysis csharp simplification csharpsimplificationservice expander visit syntaxnode node at microsoft codeanalysis csharp csharpsyntaxrewriter visitlistelement tnode node at microsoft codeanalysis csharp csharpsyntaxrewriter visitlist separatedsyntaxlist list at microsoft codeanalysis csharp csharpsyntaxrewriter visitvariabledeclaration variabledeclarationsyntax node at microsoft codeanalysis csharp syntax variabledeclarationsyntax accept csharpsyntaxvisitor visitor at microsoft codeanalysis csharp csharpsyntaxrewriter visit syntaxnode node at microsoft codeanalysis csharp simplification csharpsimplificationservice expander visit syntaxnode node at microsoft codeanalysis csharp csharpsyntaxrewriter visitlocaldeclarationstatement localdeclarationstatementsyntax node at microsoft codeanalysis csharp syntax localdeclarationstatementsyntax accept csharpsyntaxvisitor visitor at microsoft codeanalysis csharp csharpsyntaxrewriter visit syntaxnode node at microsoft codeanalysis csharp simplification csharpsimplificationservice expander visit syntaxnode node at microsoft codeanalysis csharp simplification csharpsimplificationservice expand syntaxnode node semanticmodel semanticmodel syntaxannotation annotationforreplacedaliasidentifier func expandinsidenode boolean expandparameter cancellationtoken cancellationtoken at microsoft codeanalysis csharp rename csharprenameconflictlanguageservice renamerewriter complexify syntaxnode originalnode syntaxnode newnode at microsoft codeanalysis csharp rename csharprenameconflictlanguageservice renamerewriter visit syntaxnode node at microsoft codeanalysis csharp csharpsyntaxrewriter visitlistelement tnode node at microsoft codeanalysis csharp csharpsyntaxrewriter visitlist syntaxlist list at microsoft codeanalysis csharp csharpsyntaxrewriter visitblock blocksyntax node at microsoft 
codeanalysis csharp syntax blocksyntax accept csharpsyntaxvisitor visitor at microsoft codeanalysis csharp csharpsyntaxrewriter visit syntaxnode node at microsoft codeanalysis csharp rename csharprenameconflictlanguageservice renamerewriter visit syntaxnode node at microsoft codeanalysis csharp csharpsyntaxrewriter visitmethoddeclaration methoddeclarationsyntax node at microsoft codeanalysis csharp syntax methoddeclarationsyntax accept csharpsyntaxvisitor visitor at microsoft codeanalysis csharp csharpsyntaxrewriter visit syntaxnode node at microsoft codeanalysis csharp rename csharprenameconflictlanguageservice renamerewriter visit syntaxnode node at microsoft codeanalysis csharp csharpsyntaxrewriter visitlistelement tnode node at microsoft codeanalysis csharp csharpsyntaxrewriter visitlist syntaxlist list at microsoft codeanalysis csharp csharpsyntaxrewriter visitclassdeclaration classdeclarationsyntax node at microsoft codeanalysis csharp syntax classdeclarationsyntax accept csharpsyntaxvisitor visitor at microsoft codeanalysis csharp csharpsyntaxrewriter visit syntaxnode node at microsoft codeanalysis csharp rename csharprenameconflictlanguageservice renamerewriter visit syntaxnode node at microsoft codeanalysis csharp csharpsyntaxrewriter visitlistelement tnode node at microsoft codeanalysis csharp csharpsyntaxrewriter visitlist syntaxlist list at microsoft codeanalysis csharp csharpsyntaxrewriter visitcompilationunit compilationunitsyntax node at microsoft codeanalysis csharp syntax compilationunitsyntax accept csharpsyntaxvisitor visitor at microsoft codeanalysis csharp csharpsyntaxrewriter visit syntaxnode node at microsoft codeanalysis csharp rename csharprenameconflictlanguageservice renamerewriter visit syntaxnode node at microsoft codeanalysis csharp rename csharprenameconflictlanguageservice annotateandrename renamerewriterparameters parameters at microsoft codeanalysis rename conflictengine conflictresolver session d movenext ,1
568,8633146068.0,IssuesEvent,2018-11-22 13:00:53,ZeroPhone/ZPUI,https://api.github.com/repos/ZeroPhone/ZPUI,opened,PathPicker - detect if chosen directory does not exist,developer-friendliness good first issue help wanted reliability,"If the directory picked does not exist, `PathPicker` will likely fail. Instead, we could traverse the tree upwards until we find an existing directory (an example of such traversal is available in `apps/hardware_apps/avrdude/main.py:394`).",True,"PathPicker - detect if chosen directory does not exist - If the directory picked does not exist, `PathPicker` will likely fail. Instead, we could traverse the tree upwards until we find an existing directory (an example of such traversal is available in `apps/hardware_apps/avrdude/main.py:394`).",1,pathpicker detect if chosen directory does not exist if the directory picked does not exist pathpicker will likely fail instead we could traverse the tree upwards until we find an existing directory an example of such traversal is available in apps hardware apps avrdude main py ,1
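The upward traversal the PathPicker issue describes can be sketched as follows. This is a minimal illustration, not the ZPUI code (the real helper referenced lives in `apps/hardware_apps/avrdude/main.py`, and the function name here is hypothetical):

```typescript
import * as fs from "fs";
import * as path from "path";

// Walk up from `dir` until an existing directory is found.
// Stops at the filesystem root if nothing along the way exists.
function nearestExistingDir(dir: string): string {
  let current = path.resolve(dir);
  while (!fs.existsSync(current)) {
    const parent = path.dirname(current);
    if (parent === current) {
      break; // reached the filesystem root
    }
    current = parent;
  }
  return current;
}
```

A picker could run this on its saved path before opening, so a stale or deleted directory degrades to the nearest surviving ancestor instead of failing outright.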
529,8356508171.0,IssuesEvent,2018-10-02 18:41:42,dotnet/coreclr,https://api.github.com/repos/dotnet/coreclr,reopened,System.AccessViolationException while formatting stacktrace,area-Diagnostics bug reliability,"Reported by @BrennanConroy
We’re getting this exception in a test occasionally.
```
Unhandled Exception: System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
at System.Reflection.Internal.MemoryBlock.PeekCompressedInteger(Int32 offset, Int32& numberOfBytesRead)
at System.Reflection.Metadata.Ecma335.BlobHeap.GetMemoryBlock(BlobHandle handle)
at System.Reflection.Metadata.MethodDebugInformation.GetSequencePoints()
at System.Diagnostics.StackTraceSymbols.GetSourceLineInfoWithoutCasAssert(String assemblyPath, IntPtr loadedPeAddress, Int32 loadedPeSize, IntPtr inMemoryPdbAddress, Int32 inMemoryPdbSize, Int32 methodToken, Int32 ilOffset, String& sourceFile, Int32& sourceLine, Int32& sourceColumn)
at System.Diagnostics.StackFrameHelper.InitializeSourceInfo(Int32 iSkip, Boolean fNeedFileInfo, Exception exception)
at System.Diagnostics.StackTrace.CaptureStackTrace(Int32 iSkip, Boolean fNeedFileInfo, Thread targetThread, Exception e)
at System.Diagnostics.StackTrace..ctor(Exception e, Boolean fNeedFileInfo)
at System.Environment.GetStackTrace(Exception e, Boolean needFileInfo)
at System.Exception.GetStackTrace(Boolean needFileInfo)
at System.Exception.ToString(Boolean needFileLineInfo, Boolean needMessage)
at System.Exception.ToString()
```",True,"System.AccessViolationException while formatting stacktrace - Reported by @BrennanConroy
We’re getting this exception in a test occasionally.
```
Unhandled Exception: System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
at System.Reflection.Internal.MemoryBlock.PeekCompressedInteger(Int32 offset, Int32& numberOfBytesRead)
at System.Reflection.Metadata.Ecma335.BlobHeap.GetMemoryBlock(BlobHandle handle)
at System.Reflection.Metadata.MethodDebugInformation.GetSequencePoints()
at System.Diagnostics.StackTraceSymbols.GetSourceLineInfoWithoutCasAssert(String assemblyPath, IntPtr loadedPeAddress, Int32 loadedPeSize, IntPtr inMemoryPdbAddress, Int32 inMemoryPdbSize, Int32 methodToken, Int32 ilOffset, String& sourceFile, Int32& sourceLine, Int32& sourceColumn)
at System.Diagnostics.StackFrameHelper.InitializeSourceInfo(Int32 iSkip, Boolean fNeedFileInfo, Exception exception)
at System.Diagnostics.StackTrace.CaptureStackTrace(Int32 iSkip, Boolean fNeedFileInfo, Thread targetThread, Exception e)
at System.Diagnostics.StackTrace..ctor(Exception e, Boolean fNeedFileInfo)
at System.Environment.GetStackTrace(Exception e, Boolean needFileInfo)
at System.Exception.GetStackTrace(Boolean needFileInfo)
at System.Exception.ToString(Boolean needFileLineInfo, Boolean needMessage)
at System.Exception.ToString()
```",1,system accessviolationexception while formatting stacktrace reported by brennanconroy we’re getting this exception in a test occasionally unhandled exception system accessviolationexception attempted to read or write protected memory this is often an indication that other memory is corrupt at system reflection internal memoryblock peekcompressedinteger offset numberofbytesread at system reflection metadata blobheap getmemoryblock blobhandle handle at system reflection metadata methoddebuginformation getsequencepoints at system diagnostics stacktracesymbols getsourcelineinfowithoutcasassert string assemblypath intptr loadedpeaddress loadedpesize intptr inmemorypdbaddress inmemorypdbsize methodtoken iloffset string sourcefile sourceline sourcecolumn at system diagnostics stackframehelper initializesourceinfo iskip boolean fneedfileinfo exception exception at system diagnostics stacktrace capturestacktrace iskip boolean fneedfileinfo thread targetthread exception e at system diagnostics stacktrace ctor exception e boolean fneedfileinfo at system environment getstacktrace exception e boolean needfileinfo at system exception getstacktrace boolean needfileinfo at system exception tostring boolean needfilelineinfo boolean needmessage at system exception tostring ,1
200,5282866355.0,IssuesEvent,2017-02-07 19:55:34,dotnet/roslyn,https://api.github.com/repos/dotnet/roslyn,closed,VS 2017 RC3 crashes somewhere in Microsoft.CodeAnalysis.Workspaces.dll,Area-Analyzers Bug Pending Shiproom Approval Tenet-Reliability,"``` Microsoft.CodeAnalysis.Workspaces.dll!Microsoft.CodeAnalysis.ErrorReporting.FatalError.Report(System.Exception exception, System.Action handler) Unknown
Microsoft.CodeAnalysis.Workspaces.dll!Roslyn.Utilities.TaskExtensions.ReportFatalErrorWorker(System.Threading.Tasks.Task task, object continuationFunction) Unknown
mscorlib.dll!System.Threading.Tasks.ContinuationTaskFromTask.InnerInvoke() Unknown
mscorlib.dll!System.Threading.Tasks.Task.Execute() Unknown
mscorlib.dll!System.Threading.Tasks.Task.ExecutionContextCallback(object obj) Unknown
mscorlib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool preserveSyncCtx) Unknown
mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool preserveSyncCtx) Unknown
mscorlib.dll!System.Threading.Tasks.Task.ExecuteWithThreadLocal(ref System.Threading.Tasks.Task currentTaskSlot) Unknown
mscorlib.dll!System.Threading.Tasks.Task.ExecuteEntry(bool bPreventDoubleExecution) Unknown
mscorlib.dll!System.Threading.Tasks.ThreadPoolTaskScheduler.TryExecuteTaskInline(System.Threading.Tasks.Task task, bool taskWasPreviouslyQueued) Unknown
mscorlib.dll!System.Threading.Tasks.TaskScheduler.TryRunInline(System.Threading.Tasks.Task task, bool taskWasPreviouslyQueued) Unknown
mscorlib.dll!System.Threading.Tasks.TaskContinuation.InlineIfPossibleOrElseQueue(System.Threading.Tasks.Task task, bool needsProtection) Unknown
mscorlib.dll!System.Threading.Tasks.StandardTaskContinuation.Run(System.Threading.Tasks.Task completedTask, bool bCanInlineContinuationTask) Unknown
mscorlib.dll!System.Threading.Tasks.Task.FinishContinuations() Unknown
mscorlib.dll!System.Threading.Tasks.Task.FinishStageThree() Unknown
mscorlib.dll!System.Threading.Tasks.Task.FinishStageTwo() Unknown
mscorlib.dll!System.Threading.Tasks.Task.Finish(bool bUserDelegateExecuted) Unknown
mscorlib.dll!System.Threading.Tasks.Task.TrySetException(object exceptionObject) Unknown
mscorlib.dll!System.Threading.Tasks.UnwrapPromise.TrySetFromTask(System.Threading.Tasks.Task task, bool lookForOce) Unknown
mscorlib.dll!System.Threading.Tasks.UnwrapPromise.InvokeCore(System.Threading.Tasks.Task completingTask) Unknown
mscorlib.dll!System.Threading.Tasks.UnwrapPromise.Invoke(System.Threading.Tasks.Task completingTask) Unknown
mscorlib.dll!System.Threading.Tasks.Task.FinishContinuations() Unknown
mscorlib.dll!System.Threading.Tasks.Task.FinishStageThree() Unknown
mscorlib.dll!System.Threading.Tasks.Task.FinishStageTwo() Unknown
mscorlib.dll!System.Threading.Tasks.Task.Finish(bool bUserDelegateExecuted) Unknown
mscorlib.dll!System.Threading.Tasks.Task.TrySetException(object exceptionObject) Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncTaskMethodBuilder.SetException(System.Exception exception) Unknown
Microsoft.CodeAnalysis.Features.dll!Microsoft.CodeAnalysis.SolutionCrawler.IdleProcessor.ProcessAsync() Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.MoveNextRunner.InvokeMoveNext(object stateMachine) Unknown
mscorlib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool preserveSyncCtx) Unknown
mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool preserveSyncCtx) Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.MoveNextRunner.Run() Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.OutputAsyncCausalityEvents.AnonymousMethod__0() Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.ContinuationWrapper.Invoke() Unknown
mscorlib.dll!System.Runtime.CompilerServices.TaskAwaiter.OutputWaitEtwEvents.AnonymousMethod__0() Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.ContinuationWrapper.Invoke() Unknown
mscorlib.dll!System.Threading.Tasks.AwaitTaskContinuation.RunOrScheduleAction(System.Action action, bool allowInlining, ref System.Threading.Tasks.Task currentTask) Unknown
mscorlib.dll!System.Threading.Tasks.Task.FinishContinuations() Unknown
mscorlib.dll!System.Threading.Tasks.Task.FinishStageThree() Unknown
mscorlib.dll!System.Threading.Tasks.Task.FinishStageTwo() Unknown
mscorlib.dll!System.Threading.Tasks.Task.Finish(bool bUserDelegateExecuted) Unknown
mscorlib.dll!System.Threading.Tasks.Task.TrySetException(object exceptionObject) Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncTaskMethodBuilder.SetException(System.Exception exception) Unknown
Microsoft.VisualStudio.LanguageServices.dll!Microsoft.VisualStudio.LanguageServices.Remote.RemoteHostClientServiceFactory.SolutionChecksumUpdater.ExecuteAsync() Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.MoveNextRunner.InvokeMoveNext(object stateMachine) Unknown
mscorlib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool preserveSyncCtx) Unknown
mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool preserveSyncCtx) Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.MoveNextRunner.Run() Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.OutputAsyncCausalityEvents.AnonymousMethod__0() Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.ContinuationWrapper.Invoke() Unknown
mscorlib.dll!System.Runtime.CompilerServices.TaskAwaiter.OutputWaitEtwEvents.AnonymousMethod__0() Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.ContinuationWrapper.Invoke() Unknown
mscorlib.dll!System.Threading.Tasks.AwaitTaskContinuation.RunOrScheduleAction(System.Action action, bool allowInlining, ref System.Threading.Tasks.Task currentTask) Unknown
mscorlib.dll!System.Threading.Tasks.Task.FinishContinuations() Unknown
mscorlib.dll!System.Threading.Tasks.Task.FinishStageThree() Unknown
mscorlib.dll!System.Threading.Tasks.Task.FinishStageTwo() Unknown
mscorlib.dll!System.Threading.Tasks.Task.Finish(bool bUserDelegateExecuted) Unknown
mscorlib.dll!System.Threading.Tasks.Task.TrySetException(object exceptionObject) Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncTaskMethodBuilder.SetException(System.Exception exception) Unknown
Microsoft.VisualStudio.LanguageServices.dll!Microsoft.VisualStudio.LanguageServices.Remote.RemoteHostClientServiceFactory.SolutionChecksumUpdater.SynchronizePrimaryWorkspaceAsync(System.Threading.CancellationToken cancellationToken) Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.MoveNextRunner.InvokeMoveNext(object stateMachine) Unknown
mscorlib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool preserveSyncCtx) Unknown
mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool preserveSyncCtx) Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.MoveNextRunner.Run() Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.OutputAsyncCausalityEvents.AnonymousMethod__0() Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.ContinuationWrapper.Invoke() Unknown
mscorlib.dll!System.Runtime.CompilerServices.TaskAwaiter.OutputWaitEtwEvents.AnonymousMethod__0() Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.ContinuationWrapper.Invoke() Unknown
mscorlib.dll!System.Threading.Tasks.AwaitTaskContinuation.RunOrScheduleAction(System.Action action, bool allowInlining, ref System.Threading.Tasks.Task currentTask) Unknown
mscorlib.dll!System.Threading.Tasks.Task.FinishContinuations() Unknown
mscorlib.dll!System.Threading.Tasks.Task.FinishStageThree() Unknown
mscorlib.dll!System.Threading.Tasks.Task.FinishStageTwo() Unknown
mscorlib.dll!System.Threading.Tasks.Task.Finish(bool bUserDelegateExecuted) Unknown
mscorlib.dll!System.Threading.Tasks.Task.TrySetException(object exceptionObject) Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncTaskMethodBuilder.SetException(System.Exception exception) Unknown
Microsoft.CodeAnalysis.Workspaces.dll!Microsoft.CodeAnalysis.Remote.RemoteHostClient.CreateServiceSessionAsync(string serviceName, Microsoft.CodeAnalysis.Solution solution, object callbackTarget, System.Threading.CancellationToken cancellationToken) Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.MoveNextRunner.InvokeMoveNext(object stateMachine) Unknown
mscorlib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool preserveSyncCtx) Unknown
mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool preserveSyncCtx) Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.MoveNextRunner.Run() Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.OutputAsyncCausalityEvents.AnonymousMethod__0() Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.ContinuationWrapper.Invoke() Unknown
mscorlib.dll!System.Runtime.CompilerServices.TaskAwaiter.OutputWaitEtwEvents.AnonymousMethod__0() Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.ContinuationWrapper.Invoke() Unknown
mscorlib.dll!System.Threading.Tasks.AwaitTaskContinuation.RunOrScheduleAction(System.Action action, bool allowInlining, ref System.Threading.Tasks.Task currentTask) Unknown
mscorlib.dll!System.Threading.Tasks.Task.FinishContinuations() Unknown
mscorlib.dll!System.Threading.Tasks.Task.FinishStageThree() Unknown
mscorlib.dll!System.Threading.Tasks.Task.FinishStageTwo() Unknown
mscorlib.dll!System.Threading.Tasks.Task.Finish(bool bUserDelegateExecuted) Unknown
mscorlib.dll!System.Threading.Tasks.Task.TrySetException(object exceptionObject) Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncTaskMethodBuilder.SetException(System.Exception exception) Unknown
Microsoft.VisualStudio.LanguageServices.Next.dll!Microsoft.VisualStudio.LanguageServices.Remote.ServiceHubRemoteHostClient.CreateServiceSessionAsync(string serviceName, Microsoft.CodeAnalysis.Execution.PinnedRemotableDataScope snapshot, object callbackTarget, System.Threading.CancellationToken cancellationToken) Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.MoveNextRunner.InvokeMoveNext(object stateMachine) Unknown
mscorlib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool preserveSyncCtx) Unknown
mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool preserveSyncCtx) Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.MoveNextRunner.Run() Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.OutputAsyncCausalityEvents.AnonymousMethod__0() Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.ContinuationWrapper.Invoke() Unknown
mscorlib.dll!System.Runtime.CompilerServices.TaskAwaiter.OutputWaitEtwEvents.AnonymousMethod__0() Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.ContinuationWrapper.Invoke() Unknown
mscorlib.dll!System.Threading.Tasks.AwaitTaskContinuation.RunOrScheduleAction(System.Action action, bool allowInlining, ref System.Threading.Tasks.Task currentTask) Unknown
mscorlib.dll!System.Threading.Tasks.Task.FinishContinuations() Unknown
mscorlib.dll!System.Threading.Tasks.Task.FinishStageThree() Unknown
mscorlib.dll!System.Threading.Tasks.Task.FinishStageTwo() Unknown
mscorlib.dll!System.Threading.Tasks.Task.Finish(bool bUserDelegateExecuted) Unknown
mscorlib.dll!System.Threading.Tasks.Task.TrySetException(object exceptionObject) Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncTaskMethodBuilder.SetException(System.Exception exception) Unknown
Microsoft.VisualStudio.LanguageServices.Next.dll!Microsoft.VisualStudio.LanguageServices.Remote.JsonRpcSession.CreateAsync(Microsoft.CodeAnalysis.Execution.PinnedRemotableDataScope snapshot, System.IO.Stream snapshotStream, object callbackTarget, System.IO.Stream serviceStream, System.Threading.CancellationToken cancellationToken) Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.MoveNextRunner.InvokeMoveNext(object stateMachine) Unknown
mscorlib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool preserveSyncCtx) Unknown
mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool preserveSyncCtx) Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.MoveNextRunner.Run() Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.OutputAsyncCausalityEvents.AnonymousMethod__0() Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.ContinuationWrapper.Invoke() Unknown
mscorlib.dll!System.Runtime.CompilerServices.TaskAwaiter.OutputWaitEtwEvents.AnonymousMethod__0() Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.ContinuationWrapper.Invoke() Unknown
mscorlib.dll!System.Threading.Tasks.AwaitTaskContinuation.RunOrScheduleAction(System.Action action, bool allowInlining, ref System.Threading.Tasks.Task currentTask) Unknown
mscorlib.dll!System.Threading.Tasks.Task.FinishContinuations() Unknown
mscorlib.dll!System.Threading.Tasks.Task.FinishStageThree() Unknown
mscorlib.dll!System.Threading.Tasks.Task.FinishStageTwo() Unknown
mscorlib.dll!System.Threading.Tasks.Task.Finish(bool bUserDelegateExecuted) Unknown
mscorlib.dll!System.Threading.Tasks.Task.TrySetException(object exceptionObject) Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncTaskMethodBuilder.SetException(System.Exception exception) Unknown
Microsoft.VisualStudio.LanguageServices.Next.dll!Microsoft.VisualStudio.LanguageServices.Remote.JsonRpcSession.InitializeAsync() Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.MoveNextRunner.InvokeMoveNext(object stateMachine) Unknown
mscorlib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool preserveSyncCtx) Unknown
mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool preserveSyncCtx) Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.MoveNextRunner.Run() Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.OutputAsyncCausalityEvents.AnonymousMethod__0() Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.ContinuationWrapper.Invoke() Unknown
mscorlib.dll!System.Runtime.CompilerServices.TaskAwaiter.OutputWaitEtwEvents.AnonymousMethod__0() Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.ContinuationWrapper.Invoke() Unknown
mscorlib.dll!System.Threading.Tasks.AwaitTaskContinuation.RunOrScheduleAction(System.Action action, bool allowInlining, ref System.Threading.Tasks.Task currentTask) Unknown
mscorlib.dll!System.Threading.Tasks.Task.FinishContinuations() Unknown
mscorlib.dll!System.Threading.Tasks.Task.FinishStageThree() Unknown
mscorlib.dll!System.Threading.Tasks.Task.FinishStageTwo() Unknown
mscorlib.dll!System.Threading.Tasks.Task.Finish(bool bUserDelegateExecuted) Unknown
mscorlib.dll!System.Threading.Tasks.Task.TrySetException(object exceptionObject) Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncTaskMethodBuilder.SetException(System.Exception exception) Unknown
Microsoft.VisualStudio.LanguageServices.Next.dll!Microsoft.VisualStudio.LanguageServices.Remote.JsonRpcClient.InvokeAsync(string targetName, object[] arguments) Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.MoveNextRunner.InvokeMoveNext(object stateMachine) Unknown
mscorlib.dll!System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool preserveSyncCtx) Unknown
mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state, bool preserveSyncCtx) Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.MoveNextRunner.Run() Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.OutputAsyncCausalityEvents.AnonymousMethod__0() Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.ContinuationWrapper.Invoke() Unknown
mscorlib.dll!System.Runtime.CompilerServices.TaskAwaiter.OutputWaitEtwEvents.AnonymousMethod__0() Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncMethodBuilderCore.ContinuationWrapper.Invoke() Unknown
mscorlib.dll!System.Threading.Tasks.AwaitTaskContinuation.RunOrScheduleAction(System.Action action, bool allowInlining, ref System.Threading.Tasks.Task currentTask) Unknown
mscorlib.dll!System.Threading.Tasks.Task.FinishContinuations() Unknown
mscorlib.dll!System.Threading.Tasks.Task.FinishStageThree() Unknown
mscorlib.dll!System.Threading.Tasks.Task.FinishStageTwo() Unknown
mscorlib.dll!System.Threading.Tasks.Task.Finish(bool bUserDelegateExecuted) Unknown
mscorlib.dll!System.Threading.Tasks.Task.TrySetException(object exceptionObject) Unknown
mscorlib.dll!System.Runtime.CompilerServices.AsyncTaskMethodBuilder