| question_id | answer_id | title | question | answer |
|---|---|---|---|---|
73,752,861
| 73,752,982
|
Is there a way to append to global unordered_map from C++ file on compilation?
|
The problem
I want to make a little module system for my project, which will basically be a base class that users extend with their own code, include in the main project, and then use at runtime.
E.g. we have a Renderer module, the base class defines all the mandatory functions to implement:
class Renderer
{
public:
virtual void BeginFrame(
/* all the possible parameters: cameras, scene metadata, etc. */) = 0;
virtual void Draw(uint32_t vertexCount,
uint32_t firstVertex,
uint32_t instanceCount = 1,
uint32_t firstInstance = 0) = 0;
// All the other overloads for different draw metadata
// ...
virtual void EndFrame() = 0;
};
Now I try to implement these functions in a Vulkan Renderer module - the implementation doesn't matter here. What I want is some global storage for modules (an unordered_map seems the way to go), keyed by a module identifier (probably a string naming the module) and holding a function which creates the module and returns a unique_ptr to the base Renderer class for use in the client application.
What I've tried
Right now I define an extern unordered_map in my Renderer.hpp and the idea is to insert module metadata (identifier and a function to create one) into this map.
//
// Renderer.hpp
//
using RendererModuleCreator = std::function<std::unique_ptr<Renderer>()>;
// Append your custom renderer module here with key being any string you would
// like to name your module.
extern std::unordered_map<std::string, RendererModuleCreator> s_RendererModules;
//
// VulkanRenderer.hpp
//
// Somewhere in the header file
void*
AppendVkRenderer()
{
s_RendererModules.insert(
{ "VkRenderer", []() { return std::make_unique<VulkanRenderer>(); } });
return nullptr;
}
//
// VulkanRenderer.cpp
//
// As global variable, should execute the function and append the module to global store.
void* appendCall = AppendVkRenderer();
The above solution only works if you somehow reference VulkanRenderer in compilation units for client application. Otherwise, the code doesn't get executed even if the header file is included. If I try to declare appendCall in the header, I face linker errors because of symbol redefinition.
Is there a way to make this system work?
|
If you are using C++17 you could use a static inline (otherwise useless) member variable in some class to make sure the function is called, e.g. like this:
class Foo {
static inline void * bar_ = AppendVkRenderer();
};
But you might have to modify parts of your code to make sure s_RendererModules is initialized before AppendVkRenderer is called.
|
73,752,880
| 73,753,449
|
What is the problem with fill() here? A normal for loop assigning NULL worked, but fill() gives an error
|
The code below is a Trie implementation in C++. Please check what the problem is with using fill(); any suggestions for improving my implementation are also welcome.
The commented-out fill() call in the TrieNode constructor gives a compile-time error, as mentioned in the snippet. Why does fill() behave this way when a standard for loop does the same job as expected?
class TrieNode {
public :
vector<TrieNode *>vec;
bool end ;
TrieNode(){
vec.resize(26); end = false;
// fill(vec.begin() , vec.end() , NULL); // did not worked
for(int i = 0; i<vec.size(); i++){
vec[i] = NULL;
}
}
bool exist(char x){
return vec[x - 'a']!=NULL;
}
};
class Trie{
public :
TrieNode * head ;
Trie(){
head = new TrieNode();
}
void insert(string st){
TrieNode * curr = head;
for(int i = 0; i<st.length() ; i++){
if(curr->exist(st[i])){
curr = curr->vec[st[i]-'a'];
}
else{
curr->vec[st[i]-'a'] = new TrieNode();
curr = curr->vec[st[i]-'a'];
}
}
curr->end = 1;
}
bool exist(string st){
TrieNode * curr = head;
for(int i = 0; i<st.length(); i++){
if(curr->exist(st[i])){
curr = curr->vec[st[i] - 'a'];
}
else{
return false;
}
}
return curr->end;
}
};
class Solution{
public :
vector<vector<int> >palindromePairs(vi& vec){
Trie tree;
tree.insert("boys");
tree.insert("boysx");
cout << tree.exist("man");
cout << tree.exist("boys") << ' ' << tree.exist("boysd");
return {{2,3} , {1 , 2}};
}
};
error snippet
|
First, we convert your code to an MRE (minimal reproducible example).
#include <iostream>
#include <vector>
#include <string>
#include <algorithm>
class TrieNode {
public:
std::vector<TrieNode*>vec;
bool end;
TrieNode() {
vec.resize(26); end = false;
std::fill(vec.begin() , vec.end(), NULL); // did not worked
}
bool exist(char x) {
return vec[x - 'a'] != NULL;
}
};
class Trie {
public:
TrieNode* head;
Trie() {
head = new TrieNode();
}
void insert(std::string st) {
TrieNode* curr = head;
for (size_t i = 0; i < st.length(); i++) {
if (curr->exist(st[i])) {
curr = curr->vec[st[i] - 'a'];
}
else {
curr->vec[st[i] - 'a'] = new TrieNode();
curr = curr->vec[st[i] - 'a'];
}
}
curr->end = 1;
}
bool exist(std::string st) {
TrieNode* curr = head;
for (size_t i = 0; i < st.length(); i++) {
if (curr->exist(st[i])) {
curr = curr->vec[st[i] - 'a'];
}
else {
return false;
}
}
return curr->end;
}
};
int main() {
Trie trie;
trie.insert("boys");
trie.insert("boysx");
std::cout << trie.exist("man");
std::cout << trie.exist("boys") << ' ' << trie.exist("boysd");
};
Then, we compile it and get the following messages:
The problem is mentioned there.
You want to fill a std::vector of TrieNode* with NULL.
In the error you can see the cause: std::fill deduces the type of its fill value from NULL, which in this implementation is an integer constant, not a TrieNode*; assigning that integer to a TrieNode* element then fails. So std::fill expects a TrieNode* in your case - give it one. A static_cast works:
std::fill(vec.begin(), vec.end(), static_cast<TrieNode*>(NULL));
But C++ offers the recommended solution: nullptr. By definition, nullptr converts to any object pointer type, including TrieNode*.
So please do not use NULL for pointers in C++. Use nullptr instead:
std::fill(vec.begin(), vec.end(), nullptr);
|
73,753,498
| 73,753,578
|
Calling a C file in C++ is giving errors
|
The reproducible error code is:
#ifdef __cplusplus
extern "C" {
#endif
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <ctype.h>
//#include <cstring>
//functions
#define true 1
#define false 0
#ifdef _MSC_VER
// define POSIX function strndup if not available
char* strndup(const char* s, size_t n) {
size_t len;
for (len = 0; len < n && s[len]; len++)
continue;
char* ptr = malloc(len + 1);
if (ptr) {
memcpy(ptr, s, len);
ptr[len] = '\0';
}
return ptr;
}
#endif
char** split(const char* str, const char* delimiters, int** a, int* size_of_a) {
int i, count, len;
char** final_result;
const char* p;
// phase 1: count the number of tokens
p = str + strspn(str, delimiters);
for (count = 0; *p; count++) {
p += strcspn(p, delimiters);
p += strspn(p, delimiters);
}
// phase 2: allocate the arrays
final_result = calloc(sizeof(*final_result), count + 1);
if (a) {
*a = calloc(sizeof(**a), count);
}
if (size_of_a) {
*size_of_a = count;
}
// phase 3: copy the tokens
p = str;
for (i = 0; i < count; i++) {
p += strspn(p, delimiters); // skip the delimiters
len = strcspn(p, delimiters); // count the token length
if (a) {
(*a)[i] = len;
}
final_result[i] = strndup(p, len); // duplicate the token
p += len;
}
final_result[count] = 0;
return final_result;
}
#ifdef __cplusplus
}
#endif
It started to give error:
Severity Code Description Project File Line Suppression State
Error C2440 'initializing': cannot convert from 'void *' to 'char *' example_win32_directx9 C:\Libraries\ImGui\imgui\examples\example_win32_directx9\Equation_simplifier.c 77
How can this be fixed? I have set my compiler to C++14 and I am using visual studio 2019. I am using this in a non-main cpp file which is called in main cpp file.
The main errors come from malloc and calloc, from what I have noticed. I am also getting an error for getch().
|
It seems you want to write code in plain C and then have the split function callable from C++.
Then you first of all need to make sure to build the source as plain C, which includes making sure the source file has a .c suffix and does not contain any C++ code (like the extern "C" part, or C++ header files like <cstring>).
Once the file is built as plain C you create a header file, which use conditional compilation to use extern "C", for example one named my_split.h and looking something like this:
#ifndef MY_SPLIT_H
#define MY_SPLIT_H
#ifdef __cplusplus
extern "C" {
#endif
char** split(const char* str, const char* delimiters, int** a, int* size_of_a);
#ifdef __cplusplus
}
#endif
#endif // MY_SPLIT_H
Include that header file in your C++ code, and link with the object file generated from the C source file, and you should be able to use the split function.
As mentioned, include the header file in the C source file as well, to make sure the function declarations match.
|
73,753,596
| 73,753,709
|
Why is my iterator not std::input_iterator?
|
Why doesn't the iterator below satisfy std::input_iterator concept? What did I miss?
template <class T>
struct IteratorSentinel {};
template <class T>
class Iterator
{
public:
using iterator_category = std::input_iterator_tag;
using value_type = T;
using difference_type = std::ptrdiff_t;
using pointer = value_type*;
using reference = value_type&;
Iterator() = default;
Iterator(const Iterator&) = delete;
Iterator& operator = (const Iterator&) = delete;
Iterator(Iterator&& other) = default;
Iterator& operator = (Iterator&& other) = default;
T* operator-> ();
T& operator* ();
bool operator== (const IteratorSentinel<T>&) const noexcept;
Iterator& operator++ ();
void operator++ (int);
};
The following static assertion fails:
static_assert(std::input_iterator<Iterator<int>>);
And, for example, the code below does not compile:
template <class T>
auto make_range()
{
return std::ranges::subrange(Iterator<T>(), IteratorSentinel<T>{});
}
std::ranges::equal(make_range<int>(), std::vector<int>());
|
Your operator* is not const-qualified.
The std::input_iterator concept (through the std::indirectly_readable concept) requires that * can be applied to both const and non-const lvalues as well as rvalues of the type and that in all cases the same type is returned.
|
73,753,610
| 73,754,835
|
Cannot open source file endian.h error – what is the problem?
|
Ever since updating to the Ventura public beta I keep getting these kinds of errors:
"cannot open source file "endian.h" (dependency of "iostream")"
I also keep getting prompts that clang++ requires the command line tools, which I have already installed (when I run xcode-select --install it says they are already there). When I click Install on the prompt, it takes about 5 minutes (every time) just to tell me the software was installed, only for me to reopen VS Code and see the same install popup and the same error again.
I even went and installed the Xcode 14.1 beta command line tools (the latest ones), and that didn't seem to fix it either. This was working fine in Monterey before I updated.
|
Try going to https://developer.apple.com/download/all/ and
look for "Command Line Tools for Xcode 14.1" in the list of downloads, then download the dmg and install it.
|
73,753,966
| 73,754,139
|
Unable to convert std::string into std::basic_string<char8_t>, why?
|
I am facing the following problem, I am trying to convert an std::string object into an std::basic_string<char8_t> one, using the codecvt library. The code is the following:
#include <string>
#include <codecvt>
#include <locale>
int main()
{
std::string str = "Test string";
std::wstring_convert <std::codecvt_utf8_utf16 <char8_t>, char8_t> converter_8_t;
converter_8_t.from_bytes( str );
}
The problem is that when I try to compile it with g++ -std=c++20 (g++ 11.2.0) I got the following error:
/usr/bin/ld: /tmp/cck8g9Wa.o: in function `std::__cxx11::wstring_convert<std::codecvt_utf8_utf16<char8_t, 1114111ul, (std::codecvt_mode)0>, char8_t, std::allocator<char8_t>, std::allocator<char> >::wstring_convert()':
other.cpp:(.text._ZNSt7__cxx1115wstring_convertISt18codecvt_utf8_utf16IDuLm1114111ELSt12codecvt_mode0EEDuSaIDuESaIcEEC2Ev[_ZNSt7__cxx1115wstring_convertISt18codecvt_utf8_utf16IDuLm1114111ELSt12codecvt_mode0EEDuSaIDuESaIcEEC5Ev]+0x2c): undefined reference to `std::codecvt_utf8_utf16<char8_t, 1114111ul, (std::codecvt_mode)0>::codecvt_utf8_utf16(unsigned long)'
collect2: error: ld returned 1 exit status
Do you know what could be the problem? Am I trying to convert the std::string object in the wrong way? Thanks.
|
The linker error occurs because codecvt_utf8_utf16 is only specified for wchar_t, char16_t and char32_t, and the standard library only provides those instantiations; there is no char8_t version, so the constructor your code needs was never compiled into libstdc++. (wstring_convert and codecvt_utf8_utf16 are also deprecated since C++17.)
You don't need a codecvt here anyway: char8_t (since C++20) has the same representation as unsigned char, and your std::string already holds the UTF-8 bytes, so a byte-for-byte copy is enough.
|
73,754,094
| 73,756,015
|
Why does `LD_DEBUG=libs` fail to display a library that loaded in an application?
|
Background:
I am trying to discover where libqbscore.so is loaded from, and when it happens. When I set LD_DEBUG=libs and run the program, /bin/qtcreator, I do not find libqbscore.so amidst the debug.
If however, I set LD_PRELOAD=/path/to/libqbscore.so, then I start finding its occurrences in the output.
Question:
Why would LD_DEBUG fail to display a library it is clearly loading?
Is it perhaps simply silent on libraries without debugging symbols?
How can I fix this so I can determine the origin of libqbscore.so when I run QtCreator?
Thanks.
|
It's because the qtcreator process does not load libqbscore.so itself; the qbs child process loads it.
Since Qt Creator and Qbs are both open source projects, you can confirm this interaction by reading their source code.
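One way to see this directly (a sketch; `qtcreator` here stands for however you launch it): glibc's loader can write its debug output to per-PID files via LD_DEBUG_OUTPUT, so child processes such as qbs are not lost in one stream:

```shell
# Each process writes to <prefix>.<pid>, children included,
# so you can see which process actually loaded the library.
LD_DEBUG=libs LD_DEBUG_OUTPUT=/tmp/ld_debug qtcreator
grep -l libqbscore /tmp/ld_debug.*
```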
|
73,754,698
| 73,755,308
|
How to create a common range?
|
Why doesn't std::ranges::common_view compile in the code below?
#include <ranges>
#include <vector>
#include <algorithm>
template <class T>
struct IteratorSentinel {};
template <class T>
class Iterator
{
public:
using iterator_category = std::input_iterator_tag;
using value_type = T;
using difference_type = std::ptrdiff_t;
using pointer = value_type*;
using reference = value_type&;
Iterator() = default;
Iterator(const Iterator&) = delete;
Iterator& operator = (const Iterator&) = delete;
Iterator(Iterator&& other) = default;
Iterator& operator = (Iterator&& other) = default;
T* operator-> () { return cur(); }
T& operator* () { return *cur(); }
T& operator* () const { return *cur(); };
bool operator== (const IteratorSentinel<T>&) const noexcept;
Iterator& operator++ ();
void operator++ (int);
private:
T* cur() const
{
return pCur;
}
T* pCur = nullptr;
};
static_assert(std::input_iterator<Iterator<int>>);
template <class T>
auto make_range()
{
return std::ranges::subrange(Iterator<T>(), IteratorSentinel<T>{});
}
int main()
{
auto r = make_range<int>();
auto cr = std::ranges::common_view{ r };
std::vector<int> v;
std::copy(cr.begin(), cr.end(), std::back_inserter(v));
return 0;
}
I can't figure out what template parameter it requires.
MSVC2022 error (/std:c++latest):
error C2641: cannot deduce template arguments for 'std::ranges::common_view'
error C2893: Failed to specialize function template 'std::ranges::common_view<_Vw> std::ranges::common_view(_Vw) noexcept(<expr>)'
GCC12 errors:
prog.cc: In function 'int main()':
prog.cc:67:43: error: class template argument deduction failed:
67 | auto cr = std::ranges::common_view{ r };
| ^
prog.cc:67:43: error: no matching function for call to 'common_view(std::ranges::subrange<Iterator<int>, IteratorSentinel<int>, std::ranges::subrange_kind::unsized>&)'
In file included from prog.cc:1:
/opt/wandbox/gcc-12.1.0/include/c++/12.1.0/ranges:3724:7: note: candidate: 'template<class _Vp> common_view(_Vp)-> std::ranges::common_view<_Vp>'
3724 | common_view(_Vp __r)
| ^~~~~~~~~~~
|
Your range is built out of an iterator/sentinel pair. The definition of a common range is a range where the sentinel type is an iterator. So the range itself is not a common range.
common_view can generate a common range from a non-common range. Which means that it will have to create two iterators. And since it starts the process with only one iterator, that means that, at some point, it must copy that iterator (thus creating two usable iterators).
Which it can't do because your iterator is non-copyable. Which is why common_view has an explicit requirement that the iterator is copyable.
|
73,754,872
| 74,079,195
|
SDL_DrawRect() does not draw a proper rect
|
When I try to draw a rectangle, the bottom line is always one pixel higher on the right side:
The problem persists when I change the size and position.
Below is a minimal working example that should reproduce the problem, if it's not my computer going crazy:
#include "SDL.h"
#include <iostream>
int main(int args, char **argv) {
if (SDL_Init(SDL_INIT_EVERYTHING) != 0) {
printf("error initializing SDL: %s\n", SDL_GetError());
}
SDL_Window *window = SDL_CreateWindow("Testing", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED, 1280, 720, 0);
Uint32 renderFlags = SDL_RENDERER_ACCELERATED;
SDL_Renderer *renderer = SDL_CreateRenderer(window, -1, renderFlags);
if (renderer == nullptr) {
std::cout << "Error initializing _renderer: " << SDL_GetError() << std::endl;
}
int close = 0;
SDL_Event event;
while (!close) {
while (SDL_PollEvent(&event)) {
switch (event.type) {
case SDL_QUIT:
close = 1;
break;
}
}
SDL_RenderClear(renderer);
SDL_SetRenderDrawColor(renderer, 0, 255, 0, 255);
SDL_Rect rect = {100, 100, 100, 100};
SDL_RenderDrawRect(renderer, &rect);
SDL_SetRenderDrawColor(renderer, 0, 0, 0, 255);
SDL_RenderPresent(renderer);
SDL_Delay(1000 / 240);
}
SDL_Quit();
return 0;
}
I am using Fedora 36 Linux and the Gnome 42 desktop.
I also tried starting it with x11 instead of wayland with SDL_VIDEODRIVER=x11, but that doesn't change anything.
What could be the problem here?
|
It seems like I solved the problem by changing from the Flatpak version of CLion to the native one installed from the JetBrains toolbox.
The problem only occurs when I run the compiled executable from inside CLion, so I think it's a CLion problem. I reported the problem to JetBrains here.
|
73,755,207
| 73,768,313
|
Override std::tuple serializing functionality in boost::json
|
boost::json::tag_invoke works fine with structs, but is totally ignored for std::tuple.
Look at the next example and note how the two JSON arrays differ:
Coliru link: https://coliru.stacked-crooked.com/a/e8689f9c523cee2a
#include <iostream>
#include <string>
#include <vector>
#include <boost/json/src.hpp>
using namespace boost::json;
struct Marker
{
double dist{};
double lenght;
int slope{};
};
void tag_invoke( value_from_tag, value& jv, const Marker& marker )
{
jv = { marker.dist, marker.slope };
}
void tag_invoke( value_from_tag, value& jv, const std::tuple<double, double, int>& marker )
{
jv = { std::get<0>(marker), std::get<2>(marker) };
}
int main()
{
boost::json::object json;
std::vector<Marker> markers1{ {0.0, 100.0, 8}, {250.0, 75.0, -6}, {625.0, 200.0, 11}, {830.0, 55.0, -3} };
std::vector<std::tuple<double, double, int>> markers2{ {0.0, 100.0, 8}, {250.0, 75.0, -6}, {625.0, 200.0, 11}, {830.0, 55.0, -3} };
json["grad1"] = boost::json::value_from(markers1);
json["grad2"] = boost::json::value_from(markers2);
std::cout << boost::json::serialize(json) << std::endl;
}
Is there any way to overwrite the std::tuple to boost::json::value to only extract the first and third members?
|
These functions are found via ADL. With std::tuple<double, double, int>, only overloads in the std:: namespace are searched, which your overload is not in, so it isn't found.
Boost suggests to either put it where ADL can find it, or in the boost:: namespace if not possible.
So you can put it in the boost namespace:
namespace boost {
void tag_invoke( value_from_tag, value& jv, const std::tuple<double, double, int>& marker )
{
jv = { std::get<0>(marker), std::get<2>(marker) };
}
}
Or you can associate it with the :: namespace:
struct MyTuple : std::tuple<double, double, int> {
using tuple::tuple;
};
void tag_invoke( value_from_tag, value& jv, const MyTuple& marker )
{
jv = { std::get<0>(marker), std::get<2>(marker) };
}
// ...
std::vector<MyTuple> markers2{ ... };
|
73,755,655
| 73,756,437
|
Code exits on inputs above 500,000
|
I was benchmarking sorting algorithms to measure their runtime, giving millions of numbers as input to sort, but my code exits on inputs above 500,000 without showing any output. Is there any way I can solve this?
int size;
cout<<"Enter size of the array: "<<endl;
cin>>size;
int a[size];
for(int i=0;i<size;i++)
{
a[i]=rand()%size;
}
int temp = 0;
double cl=clock();
for (int i = 0; i < size; i++)
{
for (int j = i + 1; j < size; j++)
{
if (a[j] < a[i])
{
temp = a[i];
a[i] = a[j];
a[j] = temp;
}
}
}
double final=clock()-cl;
cout<<final/(double)CLOCKS_PER_SEC;
}
|
Your code crashes on 500,000 inputs because of a stack overflow: you're allocating an array on the stack that is far too big:
int a[size];
Stack size is usually a few megabytes at most.
Also, such a variable-length array is a compiler extension, not standard C++: an array size should normally be a compile-time constant.
To avoid the stack overflow, either use std::vector, which can grow as large as free memory allows:
std::vector<int> a(size);
(also #include <vector>). Or you may use a dynamically allocated array through the new operator:
int * a = new int[size];
In that case don't forget to do delete[] a; at the end of the program (see docs here).
Also keep in mind that an input of 500,000 takes a very long time with your bubble sort. An input 10 times smaller, 50,000, takes around 10 seconds on my machine.
Full working code using std::vector plus code formatting:
Try it online!
#include <iostream>
#include <vector>
using namespace std;
int main() {
int size;
cout << "Enter size of the array: " << endl;
cin >> size;
std::vector<int> a(size);
for (int i = 0; i < size; i++) {
a[i] = rand() % size;
}
int temp = 0;
double cl = clock();
for (int i = 0; i < size; i++) {
for (int j = i + 1; j < size; j++) {
if (a[j] < a[i]) {
temp = a[i];
a[i] = a[j];
a[j] = temp;
}
}
}
double final = clock() - cl;
cout << final / (double)CLOCKS_PER_SEC;
}
|
73,756,141
| 73,760,102
|
SDL doesn't render BMP image on mac
|
I have the following code in main.cpp:
#include <SDL2/SDL.h>
#include <iostream>
int main(){
SDL_Init(SDL_INIT_VIDEO);
bool quit = false;
SDL_Event event;
SDL_Window * window = SDL_CreateWindow("Chess",
SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, 720, 640, 0);
SDL_Delay(100);
SDL_Renderer * renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);
SDL_Surface * board = SDL_LoadBMP("board.bmp");
if (board == NULL) {
std::cout << "The image 'board.bmp' could not be loaded due to the following SDL error: " << SDL_GetError() << std::endl;
return 1;
}
else {
std::cout << "The image 'board.bmp' was loaded successfully" << std::endl;
}
SDL_Texture * board_texture = SDL_CreateTextureFromSurface(renderer, board);
if (board_texture == nullptr){
SDL_DestroyRenderer(renderer);
SDL_DestroyWindow(window);
std::cout << "SDL_CreateTextureFromSurface Error: " << SDL_GetError() << std::endl;
SDL_Quit();
return 1;
}
SDL_RenderCopy(renderer, board_texture, NULL, NULL);
while (!quit)
{
SDL_WaitEvent(&event);
SDL_RenderPresent(renderer);
switch (event.type)
{
case SDL_QUIT:
quit = true;
break;
}
SDL_Delay(15);
}
SDL_DestroyTexture(board_texture);
SDL_FreeSurface(board);
SDL_DestroyRenderer(renderer);
SDL_DestroyWindow(window);
SDL_Quit();
return 0;
}
Neither error check prints anything, yet the image is not rendered, and I can't figure out why.
Also, I build using:
g++ chess.cpp -o chess -I include -L lib -l SDL2-2.0.0
This seems to work for my Windows PC, but not on my Intel Macbook Pro. Are there any solutions/workarounds available?
|
As the documentation says:
The backbuffer should be considered invalidated after each present; do
not assume that previous contents will exist between frames. You are
strongly encouraged to call SDL_RenderClear() to initialize the
backbuffer before starting each new frame's drawing, even if you plan
to overwrite every pixel.
So it should be:
SDL_WaitEvent(&event);
SDL_RenderClear(renderer);
SDL_RenderCopy(renderer, board_texture, NULL, NULL);
SDL_RenderPresent(renderer);
|
73,756,285
| 73,756,504
|
Is it OK to use lambda function parameter as a constant expression?
|
Why does the first call in this example fail to compile while the second one compiles?
consteval auto foo(auto x) {
static_assert(x);
}
int main(){
foo(42); // error: non-constant condition for static assertion
foo([]{}); // OK
}
If I understand correctly, the first one is wrong because the lvalue-to-rvalue conversion is not a constant expression. Why then is the second one OK?
|
static_assert(x); works when passing []{} because a capture-less lambda has a conversion operator to a function pointer, and a function pointer can be converted to bool (which will be true for everything except a null pointer, which this conversion can't return).
An expression is a core constant expression as long as it doesn't fall into any of the exceptions listed in [expr.const]/5. Regarding the potentially relevant ones: x is not a reference, which would immediately disqualify the expression x from being a constant expression, and no lvalue-to-rvalue conversion on x or any of its subobjects is required, either in the call to the conversion function or in the conversion to bool; the returned function pointer does not depend on the value of the lambda in any way. The call to the conversion function is also allowed in a constant expression, since it is specified to be constexpr ([expr.prim.lambda.closure]/11), so the exception for calling a non-constexpr function doesn't apply either.
None of the exceptions apply and x (including the conversions to bool) is a constant expression. The same is not true if 42 is passed, because the conversion to bool includes an lvalue-to-rvalue conversion on x itself.
|
73,756,505
| 73,757,504
|
Qt6 Docker Ubuntu22.04 CMake Failed to find Qt component "Widgets"
|
I'm trying to build a small Qt6 application on a Docker container. It is running on Ubuntu:22.04. I have installed the qt6-base-dev package. Here is my small test app:
#include <QApplication>
#include <QWidget>
#include <iostream>
int main(int argc, char **argv)
{
QApplication app(argc, argv);
QWidget widget;
widget.setFixedSize(400, 400);
QString helloString = "Hello from " + qgetenv("USER") + "!";
widget.setWindowTitle(helloString);
widget.show();
return QApplication::exec();
}
And here is my CMakeList.txt:
cmake_minimum_required(VERSION 3.0)
project(testproj)
find_package(Qt6 REQUIRED COMPONENTS Widgets)
add_executable(testproj main.cpp)
target_link_libraries(testproj PRIVATE Qt6::Widgets)
But when CMake configures, this error appears:
CMake Error at CMakeLists.txt:5 (find_package):
Found package configuration file:
/usr/lib/x86_64-linux-gnu/cmake/Qt6/Qt6Config.cmake
but it set Qt6_FOUND to FALSE so package "Qt6" is considered to be NOT
FOUND. Reason given by package:
Failed to find Qt component "Widgets".
Expected Config file at
"/usr/lib/x86_64-linux-gnu/cmake/Qt6Widgets/Qt6WidgetsConfig.cmake" exists
The file /usr/lib/x86_64-linux-gnu/cmake/Qt6Widgets/Qt6WidgetsConfig.cmake does exist.
I couldn't find anything online about this problem.
|
I installed libgl1-mesa-dev and libglvnd-dev and it worked perfectly!
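For reference, a minimal Dockerfile sketch of such a setup (the packages beyond the two GL ones are assumptions added to make the build image self-contained):

```dockerfile
FROM ubuntu:22.04
# qt6-base-dev alone is not enough: Qt6's CMake config checks for OpenGL
# development files, and without them Qt6_FOUND ends up FALSE.
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential cmake qt6-base-dev \
        libgl1-mesa-dev libglvnd-dev \
    && rm -rf /var/lib/apt/lists/*
```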
|
73,756,597
| 73,756,922
|
C++ Elliptic Integral of first kind
|
I'm working on a program that makes calculations in C++; one of which involves using the complete elliptic integral of the first kind.
Python and Mathematica produce these results
Python:
from scipy import special
print(special.ellipk(0.1))
print(special.ellipk(0.2))
print(special.ellipk(0.3))
with output
1.6124413487202192
1.659623598610528
1.713889448178791
Mathematica
EllipticK[0.1]
EllipticK[0.2]
EllipticK[0.3]
same output
1.61244
1.65962
1.71389
But in C++
#include <stdio.h>
#include <cmath>
int main()
{
printf("0.1 %f \n", std::comp_ellint_1(0.1));
printf("0.2 %f \n", std::comp_ellint_1(0.2));
printf("0.3 %f \n", std::comp_ellint_1(0.3));
return 0;
}
calculates different numbers
0.1 1.574746
0.2 1.586868
0.3 1.608049
It seems odd that there would be a mistake in C++'s standard cmath library. Is there a solution to this?
EDIT:
When a small value (like 0.00001) is used, the numbers start to agree, so I think the issue is the calculation is an insufficient approximation. I am wondering if there is a version of this function for C++ that is as accurate as Mathematica or SciPy.
|
Elaborating on Steve's answer: there is no mistake or inaccuracy (select isn't broken). They are simply using different normalizations.
Specifically, if you carefully read the documentation, C++'s std::comp_ellint_1(k) returns
K(k) = integral from 0 to pi/2 of dtheta / sqrt(1 - k^2 sin^2(theta))
whereas Python's scipy.special.ellipk(m) returns
K(m) = integral from 0 to pi/2 of dtheta / sqrt(1 - m sin^2(theta))
Note the k^2 versus m. So to get the result you want, you should call std::comp_ellint_1(std::sqrt(0.1)).
Try it on godbolt
This is consistent with your observation that the results agree more closely when the inputs are close to 0. If a number is very close to zero, then so is its square root, and both formulas yield a result very close to pi/2. If your input is really really small, then the difference will seem to vanish entirely due to underflow.
This kind of discrepancy is unfortunately common in mathematics; there are no universally standardized definitions for mathematical functions, and it's not unusual to find variations. One that's always annoyed me: when defining the Fourier transform and inverse Fourier transform of an arbitrary function, there are several different options for where you could put the necessary factors of 2 pi. So if you look it up in four different textbooks, you may very well find it done in four different and incompatible ways. Thus you can't stop when you see the words "Fourier transform"; you have to look for the specific definition they are actually using, and adapt it if necessary to match what you want.
|
73,756,772
| 73,757,959
|
What is the issue in this C++ inheritance problem
|
class Temporary_Employee: public Employee{
//consolidated monthly pay
public:
int con_pay;
int sal;
Temporary_Employee(string name,int num,string des,int cp) :Employee(name,num,des){
con_pay=cp;
}
void salary(){
sal=con_pay;
}
void display_t(){
cout<<"Name: "<<employee_name<<endl;
cout<<"Number: "<<employee_no<<endl;
cout<<"Designation: "<<desig<<endl;
cout<<"Monthly Salary: "<<sal<<endl;
}
};
I'm getting the error: 'Employee::Employee(std::string, int, std::string)' is private within this context
|
The error says the Employee constructor is private within this context: a derived class cannot call a private base-class constructor. Make the constructor public, and give the data members the derived class reads (employee_name, employee_no, desig) at least protected access, like this:
class Employee {
public:
    Employee(std::string name, int num, std::string des);
protected:
    std::string employee_name; int employee_no; std::string desig;
};
|
73,757,090
| 73,757,575
|
Create matrix (2d-array) of size specified by parameter input in C++
|
I am learning C++ with experience mostly in Python, R and SQL.
The way arrays work in C++ (and vectors, which differ somehow from 1d-arrays? and matrices, which are 2d-arrays?) seems quite different, as I cannot specify the size of a dimension of the array with an argument of the function.
A toy-example of my goal is some thing like this:
Have a function my_2d_array which takes two arguments M and N and returns a matrix or 2d-array of dimension (MxN) with elements indicating the position of that element. E.g. calling my_2d_array(4,3) would return:
[[00, 01, 02],
[10, 11, 12],
[20, 21, 22],
[30, 31, 32]]
The main function should execute my_2d_array and be able to potentially perform calculations with the result or modify it.
This is my attempt (with errors):
int my_2d_array(int N, int M) {
int A[N][M];
for (int i = 0; i < N; i++) {
for (int j = 0; j < M; j++) {
std::string element = std::to_string(i) + std::to_string(j);
A[i][j] = element;
}
}
return A;
}
void main() {
int N, M;
N = 4;
M = 3;
int A[N][M] = my_2d_array(N, M);
// Print the array A
for (int i = 0; i < N; i++) {
for (int j = 0; j < M; j++) {
std::cout << A[i][j] << " ";
}
std::cout << "\n";
}
}
One (1) dimensional attempt of @JustLearning's suggestion:
int my_array(int N) {
std::array<int, N> A;
for (int i = 0; i < N; i++) {
A[i] = i;
}
return A;
}
int main() {
int N = 4;
int A[N] = my_array(N);
// Print the array A
for (int i = 0; i < N; i++) {
std::cout << A[i] << " ";
}
}
|
Following your comment, I can see why you are confused in your attempts to use a matrix in code.
There are many types of containers in C++. Many of them you can find in the standard library (std::vector, std::list, std::set, ...), others you can create yourself or use other libraries. Plain arrays (like int a[5]) are a somewhat unique case because they come from C and are part of the language itself.
A plain array lives on the stack (not very important but you might want to read up on stack vs heap allocations), and refers to a contiguous region of memory.
If you declare some array a like int a[5], you get a region of 5 integers one after the other, and you can point to the first one by just writing a. You can access each of them using a[i] or, equivalently, *(a+i).
If you declare a like int a[5][3], you now get a region of 15 integers, but you can access them slightly differently, like a[i][j], which is equivalent to *(a+i*3+j).
The important thing to you here is that the sizes (5 and 3) must be compile-time constants, and you cannot change them at runtime.
The same is true for std::array: you could declare a like std::array<std::array<int, 3>, 5> a and get a similar region of 15 integers, that you can access the same way, but with some convenience (for example you can return that type, whereas you cannot return a plain array type, only a pointer, losing the size information in the process).
My advice is not to think of these arrays as having dimensionality, but as simple containers that give you some memory to work with however you choose. You can very well declare a like std::array<int, 15> a and access elements in a 2D way by indexing like this: a[i*3+j]. Memory-wise, it's the same.
Now, if you want the ability to set the sizes at runtime, you can use std::vector in a similar way. Either you declare a like std::vector<std::vector<int>> a(5, std::vector<int>(3)) and deal with the nested vectors (that initialization creates 5 std::vector<int> of size 3 each), or you declare a as a single vector like std::vector<int> a(15) and index it like a[i*3+j]. You can even make your own class that wraps a vector and helps with the indexing.
Either way, it's rare in C++ to need a plain array, and you should generally use some kind of container, with std::vector being a good choice for a lot of things.
Here is an example of how your code would look like using vectors:
#include <vector>
#include <string>
#include <iostream>
std::vector<std::string> my_2d_array(int N, int M) {
std::vector<std::string> A(N*M);
for (int i = 0; i < N; i++) {
for (int j = 0; j < M; j++) {
std::string element = std::to_string(i) + std::to_string(j);
A[i*M+j] = element;
}
}
return A;
}
int main() {
int N, M;
N = 4;
M = 3;
std::vector<std::string> A = my_2d_array(N, M);
// Print the array A
for (int i = 0; i < N; i++) {
for (int j = 0; j < M; j++) {
std::cout << A[i*M+j] << " ";
}
std::cout << "\n";
}
}
And here is a very crude example of a Matrix class used to wrap the vectors:
#include <vector>
#include <string>
#include <iostream>
template<typename T>
class Matrix {
public:
Matrix(int rowCount, int columnCount) : v(rowCount*columnCount), columnCount(columnCount) {}
T& operator()(int row, int column) {
return v[row*columnCount + column];
}
private:
std::vector<T> v;
int columnCount;
};
Matrix<std::string> my_2d_array(int N, int M) {
Matrix<std::string> A(N, M);
for (int i = 0; i < N; i++) {
for (int j = 0; j < M; j++) {
std::string element = std::to_string(i) + std::to_string(j);
A(i, j) = element;
}
}
return A;
}
int main() {
int N, M;
N = 4;
M = 3;
Matrix<std::string> A = my_2d_array(N, M);
// Print the array A
for (int i = 0; i < N; i++) {
for (int j = 0; j < M; j++) {
std::cout << A(i, j) << " ";
}
std::cout << "\n";
}
}
|
73,757,346
| 73,757,438
|
Global vector data disappearing when executing another function
|
I am facing a problem with a global vector. I actually have a vector of vectors of pairs, and I declared it as a global variable.
vector<vector<pair<int, float>>> numbers;
In the main function I do some push_backs passing a vector of pairs as an argument, which works just fine.
numbers.push_back(VectorOfPairs);
The problem comes out when I call a function that uses numbers, which is my vector of vectors.
For some reason, all the content that I stored in the vector gets empty for no reason.
I tried to debug, and I saw that in function main the size of the vector is actually right, but when I call some function that uses the numbers vector, the size changes from any number to 0.
vector<vector<pair<int, float>>> numbers;
//vector declaration
//suppose I add some elements in the vector in function **main**
//printing the size of the vector works just fine
void matrixVectorMult(){
//if I call this function right after and try to print the size again, it prints **0**.
printf("-%d-", numbers.size());
}
I would appreciate any help or hints about how to solve this problem.
https://pastebin.com/vkNNk6Ls
that's my code.
|
You have a variable called numbers in main, and you pass this as an argument to those functions. That numbers has nothing to do with the global variable of the same name.
|
73,757,548
| 73,757,811
|
Strange behavior with function that converts a vector2 to an angle in degrees
|
I am attempting to get a gun to rotate around the player based on a vector2 input
if the y component of the vector is positive it works perfectly as you can see here
Working as intended (Ignore my placeholder graphics)
if the y component is negative however, it returns the same value as if the y value was positive
Not working as intended
I'm sure this has to do with the equation I'm using, in particular the fact that the y component is removed from the equation when multiplied by the y component in my base vector, but other methods I've used only make things worse, usually causing the gun to not rotate at all while the y value is negative.
static u16 vector2To512Ang(vector2 v) {
// Avoid division by zero
if (v.getMagnitude() == 0)
return 0;
// Base vector
vector2 b = {1, 0};
float angle = acos((v.x * b.x + v.y * b.y)/abs(v.getMagnitude())) * 57.2957795131f;
// Convert to scale of 0-512
return (angle * 512) / 360;
}
To clear up any questions
The scale of the output is weird because I'm working with old hardware and it needs a range of 0-512. Removing this scaling results in the same issue so that isn't the problem
The multiplication by 57.2957795131 is the same as 180 / PI precomputed and is done to convert from radians to degrees
|
I'm unsure what you're after, but I think you want the angle of the vector (v.x, v.y) scaled to 0-512. In that case:
#include <cmath>
#include <numbers>
// ...
float angle = std::atan2(v.y, v.x);
if(angle < 0.) angle += 2 * std::numbers::pi_v<float>;
return angle * 512 / (2 * std::numbers::pi_v<float>);
// return angle * 256 / std::numbers::pi_v<float>;
|
73,757,693
| 73,757,808
|
Precedence between multiple different operators Order of evaluation
|
I have some problems understanding how the order of evaluation of an expression works.
int x = 1;
int y = 2;
int z = ++x || y++; // how this expression actually executed?
cout << x << " " << y << " " << z;
output: 2 2 1
I know that if evaluation starts from the left, ++x is evaluated first and a short circuit happens.
But why don't we evaluate y++ first, which has higher precedence than any other operator here, then do ++x, which has lower precedence, and finally do || between them?
|
Order of evaluation and operator precedence are not the same thing.
Operator precedence here tells us that the expression ++x || y++ is equivalent to (++x) || (y++) rather than some other placement of parentheses, but that still doesn't tell us in which order the subexpressions are evaluated and operator precedence is not relevant for that (aside from the grouping of expressions).
By-default there are no guarantees for this ordering in C++. It is not guaranteed to be left-to-right and evaluations of different branches of the expression tree can interleave. The value computations and side effects of subexpressions are said to be unsequenced.
However some expressions enforce some sequencing rules. In particular the built-in || and && are special in that they will conditionally evaluate only their left-hand operand at all.
|| will always sequence the value computation and all side effects of the left-hand side before those of the right-hand operand (if it is evaluated at all). And if the left-hand operand's value is true it will not evaluate the right-hand operand. This is known as short-circuit evaluation. What operators are used in the expressions in the left- and right-hand operands is irrelevant.
So ++x is evaluated first, resulting in a value 2 which converted to bool is true. Therefore y++ is never evaluated. Whether or not the pre-increment or post-increment has higher operator precedence is completely irrelevant at this stage.
|
73,758,012
| 73,758,063
|
How to return an array in a function in cpp
|
This is my code
int my_arr(int a) {
int arr[a];
for (int i = 0; i < a; i++) {
cin >> arr[i];
}
return arr;
}
I keep getting errors
|
You can't return a built-in array from a function in C++. Use std::vector instead (add #include <vector> and #include <iostream>):
std::vector<int> my_arr(int a) {
    std::vector<int> arr;
    for (int i = 0; i < a; i++) {
        int input;
        std::cin >> input;
        arr.push_back(input); // push elements into the vector
    }
    return arr;
}
|
73,758,079
| 73,759,565
|
Is there in DirectX something similar to glfwSetWindowUserPointer?
|
So, I was using OpenGL with GLFW until now. Now I want to include DirectX in my project, and I'm wondering: does DirectX (HWND) have something similar to glfwSetWindowUserPointer?
I have this struct:
struct WindowData
{
std::string Title;
int X, Y;
int Width, Height;
bool VSync;
bool Fullscreen;
};
And I want to send an instance of this struct as a WindowUserPointer
|
There are some tutorials on the Internet that appear to cover how to integrate DirectX with GLFW windowing and the like. For details on creating a Direct3D 11 device, see this blog post.
That said, if you want to author a 'native' DirectX application, a Win32 rendering loop and Direct3D device setup is quite simple. For examples of doing this for DirectX 11 and DirectX12 for both classic Win32 and the Universal Windows Platform (UWP), see the directx-vs-templates GitHub project.
HWND hwnd = CreateWindowExW(0, L"MyAppWindowClass", g_szAppName, WS_OVERLAPPEDWINDOW,
CW_USEDEFAULT, CW_USEDEFAULT, rc.right - rc.left, rc.bottom - rc.top, nullptr, nullptr, hInstance,
nullptr);
if (!hwnd)
return 1;
ShowWindow(hwnd, nCmdShow);
SetWindowLongPtr(hwnd, GWLP_USERDATA, reinterpret_cast<LONG_PTR>(g_game.get()));
LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
{
auto game = reinterpret_cast<Game*>(GetWindowLongPtr(hWnd, GWLP_USERDATA));
...
For utility code including GamePad/Keyboard/Mouse input handling, audio, and DirectX support, see DirectX Tool Kit for DX11 / DX12.
|
73,758,245
| 73,758,366
|
Does storing a returned reference into a variable allow you to access that variable?
|
I am confused as to why storing a reference to a member variable with an alias allows us to access it, while storing it with a variable does not allow us to access the same variable.
Let me clear up what I mean, say we have class Journey:
class Journey {
protected:
Coordinate start; //coordinate consists of x and y values
public:
Journey(Coordinate startIn) : start(startIn) {}
//methods ......
Coordinate & getStart(){ //returns a reference to the "start" member
return start;
}
};
now say I do this
int main() {
Journey j(Coordinate(1,1));
Coordinate & a = j.getStart(); //function returns a reference and is stored under the alias "a"
a.setX(0); //changes X value of the "start" member variable
cout << j.getStart().getX() << endl; //returns 0 - as expected!
}
the example above works as I returned a reference to the member variable "start" and stored it under an alias and I accessed it to change the original member variable
But say I stored the reference to start under a variable instead
int main() {
Journey j(Coordinate(1,1));
Coordinate a = j.getStart(); //function returns a reference and is stored under the VARIABLE "a"
a.setX(0);
cout << j.getStart().getX() << endl; //returns 1 - Original start was not changed?
}
I cannot do the same as a does not access the start member variable
I am not sure why this happens? What happened behind the scenes? We are storing the reference to start under a variable instead of an alias.
|
Because you are calling a copy constructor when instantiating a Coordinate object with Coordinate a = j.getStart(). That is, Coordinate a is a copy of start, not a reference to it. Read here about copy constructors.
As for the question in the title, no, it doesn't. If you want some identifier to be a reference, you need to define it as a reference as you did in the first case.
|
73,758,252
| 73,758,309
|
Target link multiple libraries in a project on clion
|
I am trying to use target_link_libraries in a project in CLion, but when I run the project the following error is printed:
/usr/bin/ld: cannot find -lctop_common
/usr/bin/ld: cannot find -lctop_log
/usr/bin/ld: cannot find -lctop_util
/usr/bin/ld: cannot find -leigen
/usr/bin/ld: cannot find -lcrl
/usr/bin/ld: cannot find -lcrl-algorithm
/usr/bin/ld: cannot find -lcrl-loader
/usr/bin/ld: cannot find -lcrl-tsplib
collect2: error: ld returned 1 exit status
ninja: build stopped: subcommand failed.
This is what my cmakelist file includes:
cmake_minimum_required(VERSION 2.8.3)
project(planner_standalone_grasp)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -O0")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fno-diagnostics-color")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++17")
# find all cpp files in currect directory (where CMakeLists.txt is)
file(GLOB SOURCE_FILES FILES_MATCHING PATTERN "./src/*.cpp")
include_directories(src)
add_executable(${PROJECT_NAME} main.cpp ${SOURCE_FILES})
set(EXE_LIBS
ctop_common
ctop_log
ctop_util
eigen
crl
crl-algorithm
crl-loader
crl-tsplib
yaml-cpp
)
target_link_libraries(${PROJECT_NAME} ${EXE_LIBS})
... Any help would be highly appreciated.
|
target_link_libraries tells CMake which libraries to link against (the -l flags). You also have to tell it where to find those libraries! That can be done with target_link_directories (the -L flag).
If you have these libraries in the lib folder, make sure they are compiled for the same platform as the executable you are trying to build.
I believe on Linux you can also drop the libraries into a standard system library directory and skip target_link_directories in CMake, but I'm not sure.
so:
target_link_libraries specifies the names of the libraries,
target_link_directories specifies the directories of these libraries.
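For example, if the libraries live in a lib folder next to the project (an assumption; adjust the path to wherever your .so/.a files actually are), the CMakeLists.txt could gain something like:

```cmake
# target_link_directories requires CMake 3.13+; with the 2.8.3 minimum
# shown in the question you would use the older link_directories() instead.
target_link_directories(${PROJECT_NAME} PRIVATE ${CMAKE_SOURCE_DIR}/lib)
```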
|
73,758,291
| 73,764,030
|
Is there a way to specify the c++ standard of clangd without recompiling it?
|
I'm trying to use a C++17 feature (const lambdas) without clangd flagging it as an error. I've searched online, and every answer tells me to recompile clangd with a flag. Is there truly no other way?
Edit: Clangd is not the compiler. It's a language server, which is a program made to be used with IDEs that basically checks your code for errors and warnings before compiling it. See https://clangd.llvm.org .
|
Answer inspired by n. 1.8e9-where's-my-share m.
From https://clangd.llvm.org/config#compileflags:
You can create a config file for clangd. In the config file you can specify the compile flags that clangd should assume. For my question, do this:
CompileFlags:
Add: [-std=c++20]
|
73,758,360
| 73,758,499
|
Why is it not correct when constructor is called recursively?
|
This is leetcode 341. When I write like this, it is correct:
class NestedIterator {
public:
vector<int> flatted;
int current=0;
NestedIterator(vector<NestedInteger> &nestedList) {
flatten(nestedList);
}
void flatten(vector<NestedInteger> &nestedList)
{
for(NestedInteger i:nestedList)
{
if(i.isInteger())
flatted.push_back(i.getInteger());
else
flatten(i.getList());
}
}
int next() {
current++;
return flatted[current-1];
}
bool hasNext() {
if(current<flatted.size())
return true;
else
return false;
}
};
But, if I write it like this, it is not correct:
class NestedIterator {
public:
vector<int> flatted;
int current=0;
NestedIterator(vector<NestedInteger> &nestedList) {
for(NestedInteger i:nestedList)
{
if(i.isInteger())
flatted.push_back(i.getInteger());
else
NestedIterator(i.getList());
}
}
};
The only difference is that in method 2, I call the constructor recursively. Why is it not correct?
|
As others have mentioned, the iterator you create below the loop is not working because it has its own list that it populates and since you never keep any reference to it, after it is finished, it goes out of scope and is destroyed.
Try this instead:
class NestedIterator {
public:
vector<int> flatted;
int current=0;
NestedIterator(vector<NestedInteger> &nestedList) {
for(NestedInteger i:nestedList)
{
if(i.isInteger())
flatted.push_back(i.getInteger());
else {
auto iter = NestedIterator(i.getList());
this->flatted.insert(this->flatted.end(), iter.flatted.begin(), iter.flatted.end()); // here we copy the vector from this iterator
}
}
}
};
The goal above is to copy the vector from the temporary iterator, into the current one
|
73,758,579
| 73,760,157
|
QPalette does not change the background of the button
|
I have the problem that the background of the button cannot be changed. Only the border is changed.
I assumed that some attribute prevented this. But I think I'm wrong on that point.
I've gone through the Qt documentation but can't find anything. I can only find examples on the internet that give me the same result. Is there a way to change the background?
Here is the code:
#include "mainwindow.h"
#include "ui_mainwindow.h"
MainWindow::MainWindow(QWidget *parent)
: QMainWindow(parent)
, ui(new Ui::MainWindow)
{
ui->setupUi(this);
//ui->pushButton->setAttribute(Qt::WA_SetPalette);
//ui->pushButton->setAttribute(Qt::WA_SetStyle);
ui->pushButton->setAutoFillBackground(true);
ui->pushButton->setAttribute(Qt::WA_ShowModal);
QPalette pal = ui->pushButton->palette();
//pal.setColor(QPalette::Base, Qt::cyan);
//pal.setBrush(QPalette::Base, Qt::cyan);
pal.setColor(QPalette::Button, Qt::cyan);
pal.setBrush(QPalette::Button, Qt::cyan);
ui->pushButton->setPalette(pal);
ui->pushButton->update();
//ui->pushButton->setAttribute(Qt::WA_NoSystemBackground, true);
// setAutoFillBackground(true);
// QPalette pal2 = palette();
// pal2.setBrush(QPalette::Button, Qt::cyan);
// pal2.setColor(QPalette::ButtonText, Qt::cyan);
// QApplication::setPalette(pal2);
}
|
Here's the code I use to set the background color of a QPushButton (or any QAbstractButton):
// @param btn button to change colors of
// @param bc new background color for button
// @param optTextColor if non-NULL, the new color to use for the
// button's text label. If NULL, this function
// will choose either black or white (whichever
// it thinks will be more readable)
void SetButtonColor(QAbstractButton * btn, const QColor & bc, const QColor * optTextColor = NULL)
{
QPalette p = btn->QWidget::palette();
p.setColor(QPalette::Button, bc);
const QColor fc = GetContrastingTextColor(bc);
p.setColor(QPalette::Active, QPalette::ButtonText, optTextColor?*optTextColor:fc);
p.setColor(QPalette::Inactive, QPalette::ButtonText, optTextColor?*optTextColor:fc);
p.setColor(QPalette::Disabled, QPalette::ButtonText, optTextColor?*optTextColor:MixColors(fc, Qt::lightGray, 0.5f));
btn->setPalette(p);
}
// Returns either Qt::white or Qt::black, whichever will make for more readable text
// in front of the passed-in background color
QColor GetContrastingTextColor(const QColor & c)
{
const int darkThresh = 64;
const int loneDelta = 129;
const int loneRed = ((c.green()<darkThresh)&&(c.blue() <darkThresh)) ? loneDelta : 0;
const int loneGreen = 0;
const int loneBlue = ((c.red() <darkThresh)&&(c.green()<darkThresh)) ? loneDelta : 0;
return (std::max(c.red()-loneRed, std::max(c.green()-loneGreen, c.blue()-loneBlue)) >= 128) ? Qt::black : Qt::white;
}
// Returns a weighted average value between (v1) and (v2)
static int Mix(int v1, const int v2, float p)
{
return (int) ((((float)v2)*p)+(((float)v1)*(1.0f-p)));
}
// Returns a weighted average value between (c1) and (c2)
QColor MixColors(const QColor & c1, const QColor & c2, float p)
{
return QColor(Mix(c1.red(), c2.red(), p), Mix(c1.green(), c2.green(), p), Mix(c1.blue(), c2.blue(), p));
}
|
73,758,747
| 73,758,930
|
Looking for the description of the algorithm to convert UTF8 to UTF16
|
I have 3 bytes representing a Unicode char encoded in UTF-8. For example I have E2 82 AC (UTF-8), which represents the Unicode char € (U+20AC). Is there any algorithm to make this conversion? I know there is the Windows API MultiByteToWideChar, but I would like to know if there is a simple mathematical relation between E2 82 AC and U+20AC. So is the mapping UTF-8 -> UTF-16 a simple mathematical function, or is it a hardcoded map?
|
Converting a valid UTF-8 byte sequence directly to UTF-16 is doable with a little mathematical know-how.
Validating a UTF-8 byte sequence is trivial: simply check that the first byte matches one of the patterns below, and that (byte and $C0) = $80 is true for each subsequent byte in the sequence.
The first byte in a UTF-8 sequence tells you how many bytes are in the sequence:
(byte1 and $80) = $00: 1 byte
(byte1 and $E0) = $C0: 2 bytes
(byte1 and $F0) = $E0: 3 bytes
(byte1 and $F8) = $F0: 4 bytes
anything else: error
There are very simple formulas for converting UTF-8 1-byte, 2-byte, and 3-byte sequences to UTF-16, as they all represent Unicode codepoints below U+10000, and thus can be represented as-is in UTF-16 using just one 16-bit codeunit, no surrogates needed, just some bit twiddling, eg:
1 byte:
UTF16 = UInt16(byte1 and $7F)
2 bytes:
UTF16 = (UInt16(byte1 and $1F) shl 6)
or UInt16(byte2 and $3F)
3 bytes:
UTF16 = (UInt16(byte1 and $0F) shl 12)
or (UInt16(byte2 and $3F) shl 6)
or UInt16(byte3 and $3F)
Converting a UTF-8 4-byte sequence to UTF-16, on the other hand, is slightly more involved, since it represents a Unicode code point that is U+10000 or higher, and thus will need to use UTF-16 surrogates, which requires some additional math to calculate, eg:
4 bytes:
CP = (UInt32(byte1 and $07) shl 18)
or (UInt32(byte2 and $3F) shl 12)
or (UInt32(byte3 and $3F) shl 6)
or UInt32(byte4 and $3F)
CP = CP - $10000
highSurrogate = $D800 + UInt16((CP shr 10) and $3FF)
lowSurrogate = $DC00 + UInt16(CP and $3FF)
UTF16 = highSurrogate, lowSurrogate
Now, with that said, let's look at your example: E2 82 AC
The first byte is ($E2 and $F0) = $E0, the second byte is ($82 and $C0) = $80, and the third byte is ($AC and $C0) = $80, so this is indeed a valid UTF-8 3-byte sequence.
Plugging in those byte values into the 3-byte formula, you get:
UTF16 = (UInt16($E2 and $0F) shl 12)
or (UInt16($82 and $3F) shl 6)
or UInt16($AC and $3F)
= (UInt16($02) shl 12)
or (UInt16($02) shl 6)
or UInt16($2C)
= $2000
or $80
or $2C
= $20AC
And indeed, Unicode codepoint U+20AC is encoded in UTF-16 as $20AC.
|
73,760,751
| 73,760,972
|
c++20 seems to be not supporting constexpr vector - is my installation incorrect
|
I just finished upgrading my compiler to C++20 on ubuntu 20.04. g++ version gives me the following output :
c++ (Ubuntu 10.3.0-1ubuntu1~20.04) 10.3.0
I am trying the following code as suggested on stackoverflow
constexpr int f() {
std::vector<int> v = {1, 2, 3};
return v.size();
}
int main() {
static_assert(f() == 3);
}
But I am getting the following error :
error: variable ‘v’ of non-literal type ‘std::vector<int>’ in ‘constexpr’ function
Am I going wrong somewhere. Or is my installation incorrect
|
You need to upgrade gcc to at least version 12 to get the C++23 constexpr support for non-literal types such as std::vector<int>.
From compiler support @ cppreference:
Non-literal variables (and labels and gotos) in constexpr functions (P2242R3): gcc 12, clang 15, EDG eccp 6.3
Feature test: __cpp_constexpr >= 202110L
|
73,761,005
| 73,765,706
|
Connect Qt to MariaDB Database
|
I am going to connect my program, which is written in Qt, to my database, which is defined in the MariaDB DBMS (XAMPP software package). As you can see in the figure below, I have to install a MySQL connector instead of a MariaDB connector (this is what my book, Hands-On GUI Programming with C++ and Qt5, said).
I installed the latest MySQL connector from its official website, and when I went to run my project, this is the error that I faced after running this code.
The book said there is no need to worry, just copy the file "libmysql.dll" from the MySQL connector path to the execution path in Qt. The version I installed (MySQL Connector C++ 8.0) did not contain this file, and I found a version (MySQL Connector C 6.1) that does, so I continued with the instructions of the book, but the problem is still not solved. I googled, checked StackOverflow questions, read articles, and searched on YouTube. Some of the solutions didn't work for me and others I didn't understand. So far I have copied files such as "libmysql.dll", "qsqlmysql.dll", etc. to many folders of the Qt installation path (and as you might guess nothing changed), and I also checked the Qt website for this problem and did not understand what I should do. I'll be very thankful if you can help me with this problem.
|
So I had the same problem earlier (on Windows).
Then I found this link with prebuilt driver files:
https://github.com/thecodemonkey86/qt_mysql_driver/releases
and it worked.
I have the same server type as you, so it should work for you too. You have to put the appropriate files in the Qt folder. Only the current Qt version is important.
Put the files in these folders:
C:\Qt\6.3.1\mingw_64\plugins\sqldrivers
C:\Qt\6.3.1\mingw_64\lib
C:\Qt\6.3.1\mingw_64\bin
Make sure that the database you set in Qt exists; otherwise you'll keep getting errors.
|
73,761,362
| 73,773,589
|
Is it UB to access a non-existent object?
|
There seems to be no more silly question than this. But does the standard allow it?
Consider:
void* p = operator new(sizeof(std::string));
*static_cast<std::string*>(p) = "string";
[basic.life]/6:
Before the lifetime of an object has started but after the storage which the object will occupy has been allocated24 or, after the lifetime of an object has ended and before the storage which the object occupied is reused or released... The program has undefined behavior if:
the pointer is used to access a non-static data member or call a non-static member function of the object, or
the pointer is used as the operand of a static_cast ([expr.static.cast]), except when the conversion is to pointer to cv void, or to pointer to cv void and subsequently to pointer to cv char, cv unsigned char, or cv std::byte ([cstddef.syn]), or
(Note that according to [intro.object]/10, a std::string object is not implicitly created by operator new because it is not of implicit-lifetime type.)
However, [basic.life]/6 does not apply to this code because there are no objects at all.
What am I missing?
|
[intro.object]/10
Some operations are described as implicitly creating objects within a specified region of storage. For each operation that is specified as implicitly creating objects, that operation implicitly creates and starts the lifetime of zero or more objects of implicit-lifetime types in its specified region of storage if doing so would result in the program having defined behavior. If no such set of objects would give the program defined behavior, the behavior of the program is undefined.
[intro.object]/11
Further, after implicitly creating objects within a specified region of storage, some operations are described as producing a pointer to a suitable created object. These operations select one of the implicitly-created objects whose address is the address of the start of the region of storage, and produce a pointer value that points to that object, if that value would result in the program having defined behavior. If no such pointer value would give the program defined behavior, the behavior of the program is undefined.
[intro.object]/13
Any implicit or explicit invocation of a function named operator new or operator new[] implicitly creates objects in the returned region of storage and returns a pointer to a suitable created object.
If an std::string[1] (or std::string[1][1] etc.) object were created, and a pointer to the std::string subobject were produced by operator new(sizeof(std::string)), then *static_cast<std::string*>(p) = "string" would have undefined behavior per [basic.life]/(7.2)
the glvalue [denoting an out-of-lifetime object] is used to call a non-static member function of the object
If operator new(sizeof(std::string)) produced a pointer to object of some other type (like int or double), then undefined behavior would be triggered by [expr.ref]/8:
If E2 is a non-static member and the result of E1 is an object whose type is not similar to the type of E1, the behavior is undefined.
So, there is no set of objects which would give the program defined behavior. Thus, the highlighted sentence of [intro.object]/10 apply here.
|
73,762,161
| 73,762,196
|
Is automating mutex like this in C++ safe?
|
I'm learning about mutex and threading right now. I was wondering if there's anything dangerous or inherently wrong with automating mutex with a class like this:
class AutoMutex
{
private:
std::mutex& m_Mutex;
public:
AutoMutex(std::mutex& m) : m_Mutex(m)
{
m_Mutex.lock();
}
~AutoMutex()
{
m_Mutex.unlock();
}
};
And then, of course, you would use it like this:
void SomeThreadedFunc()
{
AutoMutex m(Mutex); // With 'Mutex' being some global mutex.
// Do stuff
}
The idea is that, on construction of an AutoMutex object, it locks the mutex. Then, when it goes out of scope, the destructor automatically unlocks it.
You could even just put it in scopes if you don't need it for an entire function. Like this:
void SomeFunc()
{
// Do stuff
{
AutoMutex m(Mutex);
// Do race condition stuff.
}
// Do other stuff
}
Is this okay? I don't personally see anything wrong with it, but as I'm not the most experienced, I feel there's something I may be missing.
|
It's safe to use a RAII wrapper, and in fact safer than using mutex member functions directly, but it's also unnecessary to write since standard library already provides this. It's called std::lock_guard.
However, your implementation isn't entirely safe, because it's copyable, and a copy will attempt to re-unlock the mutex which will lead to undefined behaviour. std::lock_guard resolves this issue by being non-copyable.
There's also std::unique_lock, which is very similar but allows things such as releasing the lock within its lifetime. std::scoped_lock should be used if you need to lock multiple mutexes; using multiple lock guards may lead to deadlock. std::scoped_lock is also fine to use with a single mutex, so you can replace all uses of lock guard with it.
|
73,762,265
| 73,763,891
|
What is the expected behavior for ListObjects in GCP with a non existent path? (C++)
|
I expected the following C++/GCP code to return an empty list for a path which doesn't exist, but it doesn't seem to do so in practice:
for (auto&& object_metadata :
cli->ListObjects(bucket, path))
{
// how many time will this be hit?
}
Does anyone know what is the expected behavior? I couldn't find this in the documentation (perhaps I didn't look in the right place)
|
TL;DR; the expected behavior is to return an empty set if the request is successful but there are no objects with that path, and to return a single element (with the error) if the request fails.
Loosely speaking, ListObjects() returns an input range of google::cloud::StatusOr<google::cloud::storage::ObjectMetadata>. If (for example) the bucket does not exist, then you get a single element, and that element contains the error. A bit off-topic: the request may also fail after returning some objects; under the hood there is a pagination request, and fetching a new page may fail.
for (auto o : cli->ListObjects(bucket, gcs::Prefix(path))) {
if (!o) throw std::move(o).status();
// this is reached only if the request is successful
// *and* there are objects in `path`
}
|
73,762,352
| 73,770,476
|
How to define default for pybind11 function as parameter
|
As per the pybind documentation, we can pass a function from Python to C++ via the following code:
#include <pybind11/pybind11.h>
#include <pybind11/functional.h>
int func_arg(const std::function<int(int)> &f) {
return f(10);
}
PYBIND11_MODULE(example, m) {
m.def("func_arg", &func_arg);
}
How can we define a default function int -> 1 that is used for f if no parameter is passed?
$ python
>>> import example
>>> example.func_arg() # should return 1
|
One possibility is
namespace py = pybind11;
int func_arg(const std::function<int(int)> &f) {
return f(10);
}
std::function<int(int)> default_func = [](int) -> int { return 1; };
PYBIND11_MODULE(example, m) {
m.def("func_arg", &func_arg, py::arg("f")=default_func);
}
An even simpler one is just overload the func_arg wrapper itself
PYBIND11_MODULE(foo, m) {
m.def("func_arg", &func_arg);
m.def("func_arg", [](void) -> int { return 1; });
}
|
73,763,951
| 73,764,250
|
How do I joint iterate over 2 vectors by ref?
|
I'm trying to iterate over 2 vectors in one go using std::views::join
Here is what I'm trying:
std::vector<int> a {1, 2, 3}, b {4, 5, 6};
for (auto &v : {a, b} | std::views::join)
{
std::cout << v << std::endl;
}
This fails to compile. Now, if I change the code to:
for (auto &v : std::ranges::join_view(std::vector<std::vector<int>> {a, b}))
{
std::cout << v << std::endl;
v++;
}
it compiles and executes, however, the content of a and b is not modified.
How do I jointly iterate over a and b in a way where I can modify elements of a and b inside the loop?
|
How do I jointly iterate over a and b in a way where I can modify
elements of a and b inside the loop?
You can use views::all to obtain lightweight views that reference the two vectors, and combine those views into a new vector that can then be joined:
std::vector<int> a {1, 2, 3}, b {4, 5, 6};
for (auto &v : std::vector{std::views::all(a), std::views::all(b)} // <-
| std::views::join)
{
std::cout << v << std::endl;
v++;
}
Demo
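If you'd rather avoid ranges entirely, a plain-loop alternative (a sketch; the helper name is mine) also mutates the originals, because the elements are reached through real references:

```cpp
#include <cassert>
#include <vector>

// Iterate a brace-list of pointers to the two vectors; each element is
// visited through a genuine reference, so writes stick in a and b.
void bump_all(std::vector<int>& a, std::vector<int>& b) {
    for (auto* vec : {&a, &b})
        for (int& v : *vec)
            ++v;
}
```

The original `{a, b}` attempt fails precisely because the braced list copies the vectors, so even where it compiles, mutations would only touch the copies.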
|
73,764,284
| 73,764,490
|
32 bit builtin population count for clang counts long long integer c++
|
I was using the __builtin_popcount with clang compiler and I needed to count a 64 bit number (unsigned long long or uint64_t). From looking it up, __builtin_popcount counts 16 bits, __builtin_popcountl counts 32 bits, and __builtin_popcountll counts 64 bits. When I tested it, __builtin_popcountl was able to do calculations on 64 bit integers. Does anybody know the reason for this?
#include <iostream>
int main() {
unsigned long long bb = 0b1000000100000001000000010000000100000001000000010000000100000001;
std::cout << __builtin_popcountl(bb) << std::endl; //returns 9 (correct answer)
}
|
int __builtin_popcountl (unsigned long) is for unsigned longs.
int __builtin_popcountll (unsigned long long) is for unsigned long longs.
unsigned long is 64 bit on your platform, so the conversion from unsigned long long to unsigned long is lossless, and you can use __builtin_popcountl for 64 bit numbers too.
int is guaranteed to be 16 bits or wider, long is guaranteed to be 32 bits or wider, and long long is guaranteed to be 64 bits or wider. That means you can always use __builtin_popcountl with 32 bit numbers, and you may or may not be able to use it with 64 bit numbers (in this case you could).
Related question: What is the bit size of long on 64-bit Windows?
|
73,764,752
| 74,234,608
|
How to correctly format input and resize output data whille using TensorRT engine?
|
I'm trying to implement a deep learning model in the TensorRT runtime. The model conversion step went fine and I'm pretty sure about it.
There are two parts I'm currently struggling with: memcpy-ing data from host to device (e.g. from OpenCV to TensorRT) and getting the right output shape so I can read the right data. So my questions are:
How does the shape of the input dims relate to the memory buffer? What is the difference when the model input dims are NCHW vs. NHWC? When I read an OpenCV image it's NHWC, and the model input is also NHWC; do I have to re-arrange the buffer data, and if yes, what is the actual consecutive memory format I have to produce? Or simply: what format or sequence of data is the engine expecting?
About the output (assume the input is correctly buffered): how do I get the right result shape for each task (detection, classification, etc.)?
E.g. an array or something similar to what you get when working with Python.
I read the Nvidia docs and they're not beginner-friendly at all.
//Let's say i have a model thats have a dynamic shape input dim in the NHWC format.
auto input_dims = nvinfer1::Dims4{1, 386, 342, 3}; //Using fixed H, W for testing
context->setBindingDimensions(input_idx, input_dims);
auto input_size = getMemorySize(input_dims, sizeof(float));
// How do i format openCV Mat to this kind of dims and if i encounter new input dim format, how do i adapt to that ???
And the expected output dims is something like (1,32,53,8) for example, the output buffer result in a pointer and i don't know what's the sequence of the data to reconstruct to expected array shape.
// Run TensorRT inference
void* bindings[] = {input_mem, output_mem};
bool status = context->enqueueV2(bindings, stream, nullptr);
if (!status)
{
std::cout << "[ERROR] TensorRT inference failed" << std::endl;
return false;
}
auto output_buffer = std::unique_ptr<int>{new int[output_size]};
if (cudaMemcpyAsync(output_buffer.get(), output_mem, output_size, cudaMemcpyDeviceToHost, stream) != cudaSuccess)
{
std::cout << "ERROR: CUDA memory copy of output failed, size = " << output_size << " bytes" << std::endl;
return false;
}
cudaStreamSynchronize(stream);
//How do i use this output_buffer to form right shape of output, (1,32,53,8) in this case ?
|
Could you please edit your question and tell us which model you're using, if it's a commonly known NN, perhaps one we can download to test locally?
Then, the answer since it doesn't depend on the model (even though it would help to answer)
How actually a shape of input dims relate with memory buffer
If the input is NxCxHxW, you need to allocate N*C*H*W*sizeof(float) memory for that on your CPU and GPU. To be more precise, you need to allocate space on GPU for all the bindings and on CPU for only input and output bindings.
when i read a openCV image, it's NHWC and also the model input is NHWC, do i have to re-arange the buffer data
No, you do not have to re-arrange the buffer data. If you would have to convert between NHWC and NCHW you can check this or google 'opencv NHWC to NCHW'.
Full working code example here, especially this function.
Or simply what does the format or sequence of data that the engine are expecting ?
This depends on how the neural network was trained. You should in general know exactly which kind of preprocessing and image data formats have been used to train the NN. You should even use the same libraries to load images and process them if possible. It's an open problem in ML: if you try to replicate the results of some papers and use their models but they haven't open-sourced the preprocessing, you might get worse results. In the "worst" case you can implement both NHWC and NCHW and test which of them works.
About the output (assume the input are correctly buffered), how do i get the right result shape for each task (Detection, Classification, etc..).. Eg. an array or something look similar like when working with python .
This question clearly requires me to understand which NNs you are referring to. But I myself do the following:
Load the TensorRT .engine file in my code like this and deserialize like this
Print the bindings like this
Then I know the size of the input binding or bindings if there are many inputs, and the size of the output binding or bindings if there are many outputs.
This way you know the right result shape for each task. I hope this answered your question. If not, please add detailed comments and edit your post to be more precise. Thank you.
I read Nvidia docs and it's not beginner-friendly at all.
Yes, I agree. You're better off searching TensorRT C++ (or Python) repositories on GitHub and studying their code. Have you seen the TensorRT samples? It doesn't really take many lines of code to implement TensorRT inference.
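For the channel-order question above, here is a hedged sketch (plain C++, no OpenCV/TensorRT types; the function name is mine) of the HWC-to-CHW repack an NCHW engine would need: all values of channel 0 first, then channel 1, and so on, whereas OpenCV stores pixels interleaved (channel-last).

```cpp
#include <cassert>
#include <vector>

// Repack an interleaved HWC buffer (OpenCV layout) into a planar CHW buffer.
std::vector<float> hwc_to_chw(const std::vector<float>& src,
                              int H, int W, int C) {
    std::vector<float> dst(src.size());
    for (int h = 0; h < H; ++h)
        for (int w = 0; w < W; ++w)
            for (int c = 0; c < C; ++c)
                // source index: pixel-major, channel interleaved
                // dest index:   channel-major, one plane per channel
                dst[c * H * W + h * W + w] = src[(h * W + w) * C + c];
    return dst;
}
```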
|
73,764,769
| 73,764,787
|
Vector of Arrays or Array of vectors cpp
|
int n = 10;
vector<int> adj[n];
Does this line create an array of vectors or a Vector of Arrays.
And how is it different from
vector<vector <int>> vect;
|
vector<int> adj[n];
Creates an array of n vectors of ints. However, if n is not a compile time constant, this is not standard C++, which does not support variable length arrays. Some compilers may implement it as an extension.
vector<vector <int>> vect;
This creates a vector of vectors of ints.
The dimensions of the latter are no longer fixed, which makes them functionally quite different. vect can contain any number of vector<int> values.
There are also significant ramifications to using std::vector or std::array vs. raw arrays when it comes to passing results to/from functions.
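A small sketch of giving the nested-vector form fixed starting dimensions (the helper name is illustrative); unlike a raw array, both dimensions can still change later:

```cpp
#include <cassert>
#include <vector>

// Build a rows x cols grid of zeros; rows.push_back(...) or resizing any
// inner vector remains possible afterwards.
std::vector<std::vector<int>> make_grid(std::size_t rows, std::size_t cols) {
    return std::vector<std::vector<int>>(rows, std::vector<int>(cols, 0));
}
```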
|
73,764,997
| 73,765,174
|
Is there easier way to check for a variadic template type?
|
I've got a function
template<typename T, typename FuncT, typename ... Args>
static void Submit(Handle<T> handle, FuncT&& funcT, Args&& ... args);
Handle is a class that contains an index to the data inside some array. This data can be retrieved through a handle_cast function.
T& data = *handle_cast<T*>(handle);
I'm not going to cover this implementation, because it's not related to my question.
Handle<T> handle is the main handle to a resource. I'd like args to be a mix, sometimes it'd be a handle, sometimes a different data.
FuncT first argument is T& and the rest of the arguments might be either *handle_cast<Args*> if this argument is Handle<Some Type> or just Arg
I came up with this solution, but I'm not sure if it's correct or whether it could be done more easily.
template<typename T, typename FuncT, typename ... Args>
static void Submit(Handle<T> handle, FuncT&& funcT, Args&& ... args)
{
std::variant<Args...> v;
bool isHandle = std::visit([](auto&& arg) {
using T = std::decay_t<decltype(arg)>;
if constexpr (std::is_base_of_v<HandleBase, T>) {
return true;
} return false;
}, v);
func(*handle_cast<T*>(handle), std::forward<Args>(isHandle ? *handle_cast<Args*>(args) : args)...);
}
Is this solution ok or it can be done easier/cleaner?
|
You may be looking for something like this (not tested):
// Cloned from the standard std::forward
template<typename T>
T&& maybe_handle_forward(typename std::remove_reference<T>::type& t ) noexcept {
return std::forward<T>(t);
}
template<typename T>
T& maybe_handle_forward(Handle<T> h) noexcept {
return *handle_cast<T*>(h);
}
Now you can write
func(*handle_cast<T*>(handle), maybe_handle_forward<Args>(args)...);
|
73,765,330
| 73,765,346
|
Is it a race condition?
|
Two threads are executing a function named job, inside which they are incrementing a global variable. Will there be a race condition here?
int i = 0;
void *job(void *args)
{
i += 1;
}
|
Yes, this is a data race: both threads may modify i concurrently without any synchronization, which is undefined behavior in C++. To avoid it, declare i as std::atomic<int> (or protect the increment with a mutex).
|
73,765,487
| 73,767,916
|
sfml gravity clipping shapes through floor
|
I tried to make a cube that moves side to side and bounces off the floor.
It bounced a couple times and then fell through the floor.
I tried making the floor higher.
I tried adding extra vertical velocity.
I have tried everything i can think of.
I would like to get the cube to not fall through the floor.
how do I do that?
#include <SFML/Graphics.hpp>
#include <iostream>
int main(){
sf::RenderWindow window(sf::VideoMode(1000, 700), "project");
window.setFramerateLimit(60);
sf::RectangleShape rect;
int w = 100;
int h = 100;
rect.setSize(sf::Vector2f(w, h));
sf::Vector2f rectangle_position(500 - (w/2), 300 - (h/2));
rect.setPosition(rectangle_position);
float x_velocity = 3;
float y_velocity = 3;
while (window.isOpen()) {
sf::Event event;
while (window.pollEvent(event)) {
if (event.type == sf::Event::Closed) window.close();
if (sf::Keyboard::isKeyPressed(sf::Keyboard::Escape)) window.close();
}
if (rectangle_position.x > 1000 - w) {
x_velocity = x_velocity * -1;
}
if (rectangle_position.x < 1 ) {
x_velocity = x_velocity * -1;
}
if (rectangle_position.y > 700 - h) {
y_velocity = y_velocity * -1;
}
if (rectangle_position.y < 50) {
y_velocity = y_velocity * -1;
}
y_velocity = y_velocity + 3;
rectangle_position.x = rectangle_position.x + x_velocity;
rectangle_position.y = rectangle_position.y + y_velocity;
rect.setPosition(rectangle_position);
window.clear();
window.draw(rect);
window.display();
}
}
|
In your implementation, once the bottom of the rectangle goes below the ground, you just reverse the velocity but never correct the position of the rectangle. Because gravity keeps adding to the velocity every frame, the rectangle can end up sinking through the floor.
You should make sure that the bottom of the rectangle never goes below the ground. This can be done by adding the following condition:
if (rectangle_position.y > 700 - h) {
// make sure that rectangle never goes below the ground
rectangle_position.y -= 3;
y_velocity = y_velocity * -1;
}
And the result is:
Hope it helps.
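A slightly more robust variant (a sketch, no SFML required; the helper name is mine) clamps the position back onto the floor instead of nudging it by a fixed amount, so accumulated gravity can never push the shape through:

```cpp
#include <cassert>

// If the shape has penetrated the floor, snap it back exactly onto the
// floor line and then reverse the vertical velocity to bounce.
void resolve_floor(float& y, float& vy, float floor_y) {
    if (y > floor_y) {
        y = floor_y;   // never leave the shape below the ground
        vy = -vy;
    }
}
```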
|
73,765,835
| 73,765,880
|
How to call a function with the same name within another class?
|
I have two classes, a Card class and Deck class. Both have display() functions, and I cannot change the name of the function. I am unsure of how I can call Card::display() inside of Deck::display().
Card.cpp
void Card::display( ) const
{
cout << rank << suit;
}
Deck.cpp //in Deck.h I did #include "Card.h", I did not have Deck inherit
because Deck did not need to access the private member variables of Card (aka suit and rank)
void Deck::display( ) const
{
for (int i = 0; i < 52; i++)
{
if (i % 13 == 0 && i != 0)
{
cout << "\n";
deck[i].display(); // deck is an array of 52 Cards; each element
                   // consists of a rank and a suit. Here I am trying to
                   // display the entire deck of cards as a 4-by-13 grid,
                   // hence why I want to call the display function from
                   // class Card.
}
else
{
deck[i].display();
}
}
}
So that when the display function from Deck is called from the main.cpp
it will look like (for example, if the cards are in order):
AS 2S 3S 4S 5S 6S 7S ... (all the way until King of Spades) KS //new line, new row
AH ...(all the way until King of Hearts) KH //new line, next row
AD ...(all the way until King of Diamonds) KD //new line, next row
AC ...(all the way until King of Clubs) KC
Because the Card::display function (from code snippet above) displays
the rank then suit, and Deck::display would display the entire deck of cards.
I have been trying to do my own research online to no avail, so I would appreciate the help, thank you!
|
If deck is an array of Cards (as you mentioned), then when you write:
deck[i].display()
this call resolves to the display function of your Card class, so your code should be fine.
|
73,765,945
| 73,765,986
|
C++ - Can a vector's size become more than the system RAM?
|
If I keep on inserting elements in a vector until I get an out of memory exception.
What is the limit to the final vector's size?
Size of RAM
Size of secondary memory because virtual memory is being used.
|
This is entirely system dependent.
However, on typical desktop and server systems, allocations via new in a C++ application are allocations of virtual memory. If the system has swap space, then it is entirely possible to allocate more virtual memory than the size of physical RAM, so your #2 is closer to the truth.
Of course your vector cannot grow to fill all of physical memory plus swap, because some is needed for the OS itself, the rest of your program, other processes running on the system, etc. The system might be configured to impose other limits, such as a fixed limit on the amount of virtual memory available to any one process or user, and it might also reserve some amount of memory for system-critical uses.
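Note that std::vector also exposes a theoretical ceiling via max_size(); a small sketch showing the distinction (the helper name is mine):

```cpp
#include <cassert>
#include <vector>

// max_size() reports the implementation's theoretical maximum element count
// (limited by things like the size type and allocator); the practical limit,
// set by RAM, swap, and OS policy, is far lower.
std::size_t theoretical_limit() {
    return std::vector<int>().max_size();
}
```

Hitting the practical limit throws std::bad_alloc long before max_size() is reached.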
|
73,766,300
| 73,766,664
|
UTF-8 constexpr std::string_view from static constexpr std::array not valid on MSVC
|
The title is a bit wordy, the code demonstrates the problem better:
// Equivalent to क़
constexpr auto arr = std::array<char, 3>{static_cast<char>(0340),
static_cast<char>(0245),
static_cast<char>(0230)};
int main()
{
constexpr auto a = std::string_view{"क़"};
constexpr auto b = std::string_view{arr.data(), arr.size()};
static_assert(a.size() == 3);
static_assert(b.size() == 3);
static_assert(a[0] == b[0]);
static_assert(a[1] == b[1]);
static_assert(a[2] == b[2]);
static_assert(a == b);
return EXIT_SUCCESS;
}
The last static_assert fails on MSVC, but is fine on gcc and clang. At first I thought it might have been a Windows thing not supporting UTF-8 well, but it works fine at runtime:
int main()
{
constexpr auto a = std::string_view{"क़"};
constexpr auto b = std::string_view{arr.data(), arr.size()};
return a == b ? EXIT_SUCCESS : EXIT_FAILURE;
}
Adding /utf-8 to the compiler args makes no difference. It does appear to be a Unicode/UTF-8 issue, because a plain ASCII string works:
// foo
constexpr auto arr = std::array<char, 3>{'f', 'o', 'o'};
int main()
{
constexpr auto a = std::string_view{"foo"};
constexpr auto b = std::string_view{arr.data(), arr.size()};
static_assert(a == b);
return EXIT_SUCCESS;
}
This feels like a compiler bug, but I'm no language lawyer so it could be that I'm doing something I'm not supposed to - can anybody see what?
|
This is a compiler bug which Microsoft devs seem to already be aware of, see this bug report against the standard library.
It seems that comparing narrow string literals with bytes outside the [0,127] range against non string literals currently fails at compile-time, because the built-in __builtin_memcmp has a bug.
The issue is already a year old, but I couldn't find an update on it.
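Until the bug is fixed, a possible workaround (a sketch of mine, not from the bug report): since the per-character static_asserts in the question do pass, an element-wise constexpr comparison side-steps the buggy __builtin_memcmp path used by operator==.

```cpp
#include <cassert>
#include <cstddef>
#include <string_view>

// Compare two string_views byte by byte at compile time, avoiding the
// compiler's built-in memcmp entirely.
constexpr bool sv_equal(std::string_view x, std::string_view y) {
    if (x.size() != y.size()) return false;
    for (std::size_t i = 0; i < x.size(); ++i)
        if (x[i] != y[i]) return false;
    return true;
}

static_assert(sv_equal("foo", "foo"));
```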
|
73,766,477
| 73,766,772
|
Is there a way to print base address (hex) not as characters but as readable 0x00000 format in a messagebox
|
void PrintBaseAddr() {
while (true) {
if (GetAsyncKeyState(VK_F6) & 0x80000) {
HMODULE BaseAddr = GetModuleHandleA(NULL);
BaseAddr += 0x351333;
MessageBoxA(NULL, (LPCSTR)BaseAddr, "Base Address", MB_OK);
}
}
}
so this is my code, and the problem is that BaseAddr is passed as if it were a string, so the message box shows garbage characters (screenshot of the output omitted).
how can i cast it to something like 0xABC345FF?
|
MessageBox expects to receive a string. Since you want to pass an address and render it as a hexadecimal number, you need to create a string holding the hexadecimal number first.
void PrintBaseAddr() {
while (true) {
if (GetAsyncKeyState(VK_F6) & 0x80000) {
HMODULE BaseAddr = GetModuleHandleA(NULL);
BaseAddr += 0x351333;
std::ostringstream buffer;
buffer << (void *)BaseAddr;
MessageBoxA(NULL, buffer.str().c_str(), "Base Address", MB_OK);
}
}
}
If you're using MessageBox quite a bit, it's sometimes useful to create a manipulator to handle this:
class MsgBox {   // renamed: with <windows.h> included, MessageBox is a macro
    std::string title;
    int type;
public:
    MsgBox(std::string const &title, int type = MB_OK)
        : title(title), type(type)
    { }
    friend std::ostream &operator<<(std::ostream &os, MsgBox const &m) {
        auto &oss = dynamic_cast<std::ostringstream &>(os);
        ::MessageBoxA(NULL, oss.str().c_str(), m.title.c_str(), m.type);
        return os;
    }
};
You'd use this something like:
std::ostringstream os;
os << (void *)BaseAddress << MsgBox("Base Address");
The good point here is that you get all the usual stream capabilities, so if you need to print out some random class you can't deal with directly (but which provides an overloaded << operator) you can use it in a message box.
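A portable sketch of the same formatting step with snprintf (the helper name is mine): build the 0x-prefixed hex string first, then hand it to MessageBoxA.

```cpp
#include <cassert>
#include <cstdint>
#include <cstdio>
#include <string>

// Format a pointer value as "0x" followed by uppercase hex digits.
std::string addr_to_hex(const void* p) {
    char buf[2 + 2 * sizeof(void*) + 1];  // "0x" + hex digits + NUL
    std::snprintf(buf, sizeof buf, "0x%llX",
                  static_cast<unsigned long long>(
                      reinterpret_cast<std::uintptr_t>(p)));
    return buf;
}
```

You would then call something like MessageBoxA(NULL, addr_to_hex(BaseAddr).c_str(), "Base Address", MB_OK).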
|
73,766,767
| 73,767,172
|
How C++ expands multiple parameters packs simultaneously,
|
Having following functions f0, f1, f2 in C++14 code, which accepts arbitrary number of fixed-length arrays:
#include <functional>
template<typename... TS, size_t N> void f0( TS(&& ... args)[N] ) {}
template<typename T, size_t... NS> void f1( T(&& ... args)[NS] ) {}
template<typename... TS, size_t... NS> void f2( TS(&& ... args)[NS] ) {}
int main(){
f0({1,2}, {3.0,4.0}, {true, false});
f1({1,2,3}, {4,5}, {6});
f2({1,2,3}, {4.0,5.0}, {true});
return 0;
}
Function f0 accepts arrays with different types and fixed array length. Function f1 accepts arrays with fixed type and different array lengths. It's clear how this works: C++ compiler deduces variable-length parameter pack in immediate context of template function instantiation, which is expanded in (&& ... args) expression.
Function f2 accepts arrays with different types and different array lengths, which produces two variable-length parameter packs, however there is only one ellipsis operator in pack expansion (&& ... args), but code compiles and works well.
So question is: what is general rule for expanding multiple parameter packs within single ellipsis operator? Obviously, at a minimum, they must be the same length, but what are the other requirements? Is there a precise definition that the n-th element of the first parameter packing should expand along with the n-th element of the second parameter packing?
Also, following code with explicit template argument provision does not compile: f2<int,float,bool,3,2,1>({1,2,3},{4.0f,5.0f},{true});. It would be interesting to know the reasons for this behaviour.
|
This is specified in C++ Standard section [temp.variadic]. Basically, it's what you described: when a pack expansion expands more than one pack, all those packs must have the same number of elements. And the expansion in most cases forms a list where the nth element in the resulting list uses the nth element of each expanded pack.
More exactly, paragraph 5 defines
A pack expansion consists of a pattern and an ellipsis, the instantiation of which produces zero or more instantiations of the pattern in a list (described below). The form of the pattern depends on the context in which the expansion occurs. Pack expansions can occur in the following contexts:
In a function parameter pack; the pattern is the parameter-declaration without the ellipsis.
...
In your example, each function template declares a function parameter pack named args. The patterns are TS(&& args)[N], T(&& args)[NS], and TS(&& args)[NS].
Paragraph 7 (after clarifying which packs are expanded by which pack expansions that when one pack expansion appears inside another) has the requirement
All of the packs expanded by a pack expansion shall have the same number of arguments specified.
And paragraph 8:
The instantiation of a pack expansion that is neither a sizeof... expression nor a fold-expression produces a list of elements E1, E2, ..., EN, where N is the number of elements in the pack expansion parameters. Each Ei is generated by instantiating the pattern and replacing each pack expansion parameter with its ith element.
So yes, for the instantiation of f2 where TS is deduced as int, double, bool and NS is deduced as 3, 2, 1, the pack expansion becomes a function parameter list with types int(&&)[3], double(&&)[2], bool(&&)[1].
|
73,767,539
| 73,767,638
|
when can xvalues become lvalues?
|
or, to phrase it another way; when can we assign to a temporary?
cppreference.com says:
"an rvalue expression is either prvalue or xvalue" and "an rvalue can't be used as the left-hand operand of the built-in assignment or compound assignment operators"
with that context in mind, I would like to understand what is happening in this code, specifically; why does the last line work, when all the others fail?
struct type { int x; };
type get_type() { return type{0}; }
int get_int() { return int{0}; }
int main() {
int * p1 = &get_int();// fail "cannot take the address of an rvalue"
type * p2 = &get_type();// fail "taking the address of a temporary object"
get_int() = int{1};// fail "expression is not assignable"
get_type() = type{1};// works
}
|
Assuming C++17 or later:
int * p1 = &get_int();// fail "cannot take the address of an rvalue"
A function call expression to a function returning by-value is a prvalue expression. The built-in & does not accept prvalues as operand.
type * p2 = &get_type();// fail "taking the address of a temporary object"
Same as above.
get_int() = int{1};// fail "expression is not assignable"
As above get_int() is a prvalue. Overload resolution will choose again the built-in = here, which does not accept prvalues on the left-hand side.
get_type() = type{1};// works
get_type() is still a prvalue and type{1} is (the same as int{1}) a functional style explicit cast to a non-reference type, which are also prvalue expressions. However overload resolution will not choose the built-in =, but instead the implicit (move) assignment operator of type which is declared as
constexpr type& operator=(type&&) noexcept;
is chosen, so that we have effectively a member function call:
get_type().operator=(type{1})
This causes both prvalues to be converted to xvalues by temporary materialization. This is triggered by calling a member function on the prvalue or by initializing a reference with the prvalue (the one in the parameter). The reference is then bound to the temporary object.
For a (non-normative) list of situations in which temporary materialization (i.e. prvalue-to-xvalue conversion) happens see the note 3 in [class.temporary]. When the conversion is applied is described normatively throughout the standard at relevant places. Cppreference also has a list of these situations.
(The compiler error messages are not using precise standard terminology by the way. That is often the case. For example in the first two cases no temporary is ever created. Taking the address of a temporary is therefore not really the cause of the failure. Instead the problem is just what I described above.)
Practically speaking, you can not assign to non-class-type rvalues, but you usually can to class-type rvalues. That is how the implicit assignment operator works and how the conventional assignment operators work. However a class can be defined so that it accepts assignment only to lvalues.
Also, xvalues do not become lvalues. The relevant part is only that prvalues become xvalues by temporary materialization conversion and that both xvalues and lvalues (collectively glvalues) share a lot of behavior, in particular that they refer to objects or functions (which prvalues don't).
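A sketch of the earlier remark that a class can restrict assignment to lvalues (the names here are mine, patterned on the question's type): a ref-qualifier ('&' after the parameter list) makes the assignment operator callable only on lvalues.

```cpp
#include <cassert>

struct lv_type {
    int x;
    // The trailing '&' restricts this operator to lvalue objects.
    lv_type& operator=(const lv_type& o) & {
        x = o.x;
        return *this;
    }
};

lv_type get_lv_type() { return lv_type{0}; }

int assign_to_lvalue() {
    lv_type t{0};
    t = lv_type{1};               // OK: t is an lvalue
    // get_lv_type() = lv_type{1};  // ill-formed: rvalue on the left
    return t.x;
}
```

With this change, the "works" line from the question would no longer compile for this class.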
|
73,767,655
| 73,768,355
|
Non-type template argument, from constant function argument?
|
I'm experimenting with C++ templates and a kind of heterogeneous type-safe map, where keys are tied to specific value types. An example use would be something like a CSS stylesheet. I've got it to where I can write things like:
styles.set<StyleKey::fontFamily>("Helvetica");
styles.set<StyleKey::fontSize>(23.0);
That type checks as desired; it won't compile calls where the key does not match its intended value type.
But I'm wondering if there's also a way to write it like this:
styles.set(StyleKey::fontFamily, "Helvetica");
styles.set(StyleKey::fontSize, 23.0);
... and have it deduce the same thing, because the first argument is a constant.
Here's my flailing attempt, pasted below and on godbolt. The set2 template does not work.
#include <iostream>
#include <string>
using namespace std;
struct color {
float r,g,b;
};
ostream &operator <<(ostream &out, const color &c) {
return out << "[" << c.r << ',' << c.g << ',' << c.b << "]";
}
// Gives something that would have types: string, float, color, bool
enum class StyleKey {
fontFamily = 1, fontSize, fontColor, visible
};
template <StyleKey key>
struct KeyValueType {
};
struct StyleMap;
template <>
struct KeyValueType<StyleKey::fontFamily> {
typedef string value_type;
static void set(StyleMap *sm, value_type value);
};
template <>
struct KeyValueType<StyleKey::fontSize> {
typedef float value_type;
static void set(StyleMap *sm, value_type value);
};
struct StyleMap {
string fontFamily = "";
float fontSize = 14;
color fontColor = color{0,0,0};
bool visible = true;
template <StyleKey key>
void set(typename KeyValueType<key>::value_type value) {
cout << "set " << (int)key << " value: " << value << endl;
KeyValueType<key>::set(this, value);
}
template <StyleKey key>
void set2(StyleKey key2, typename KeyValueType<key>::value_type value) {
static_assert(key == key2);
cout << "set " << (int)key << " value: " << value << endl;
}
};
void KeyValueType<StyleKey::fontFamily>::set(StyleMap *sm, string str) {
sm->fontFamily = str;
}
void KeyValueType<StyleKey::fontSize>::set(StyleMap *sm, float sz) {
sm->fontSize = sz;
}
void print(const StyleMap &sm) {
cout << "font family : " << sm.fontFamily << endl;
cout << "font size : " << sm.fontSize << endl;
cout << "color : " << sm.fontColor << endl;
cout << "visible : " << sm.visible << endl;
}
int main() {
// Goal:
//
// StyleMap styles;
// styles[fontFamily] = "Helvetica";
// styles[fontSize] = 15.0;
// string fam = styles[fontFamily]
// float sz = styles[fontSize];
StyleMap styles;
// This works!
styles.set<StyleKey::fontFamily>("Helvetica");
styles.set<StyleKey::fontSize>(23.0);
// This won't compile, as desired
// styles.set<StyleKey::fontFamily>(20);
// But can we write it like this?
// styles.set2(StyleKey::fontFamily, "Helvetica");
// styles.set2(StyleKey::fontSize, 23.0);
print(styles);
}
|
You can't do SFINAE or specialization on function parameters, only template parameters. That means this can't work using your current approach.
What you could do is change your StyleKeys from being enum values to being empty tag structs with different types. Then you could specialize KeyValueType on each of those types and pass an object of one of those types to StyleMap::set. It can then use the type of that object to deduce the correct KeyValueType specialization to dispatch to.
namespace StyleKey {
static constexpr inline struct FontFamily {} fontFamily;
static constexpr inline struct FontSize {} fontSize;
};
template <typename KeyType>
struct KeyValueType {};
struct StyleMap;
template <>
struct KeyValueType<StyleKey::FontFamily> {
using value_type = std::string;
static void set(StyleMap *sm, value_type value);
};
template <>
struct KeyValueType<StyleKey::FontSize> {
using value_type = float;
static void set(StyleMap *sm, value_type value);
};
struct StyleMap {
std::string fontFamily = "";
float fontSize = 14;
template <auto key>
void set(typename KeyValueType<std::remove_const_t<decltype(key)>>::value_type value) {
KeyValueType<std::remove_const_t<decltype(key)>>::set(this, value);
}
template <typename KeyType>
void set2(KeyType key, typename KeyValueType<KeyType>::value_type value) {
KeyValueType<KeyType>::set(this, value);
}
};
void KeyValueType<StyleKey::FontFamily>::set(StyleMap *sm, std::string str) {
sm->fontFamily = str;
}
void KeyValueType<StyleKey::FontSize>::set(StyleMap *sm, float sz) {
sm->fontSize = sz;
}
Demo
Note: This approach will also work for the goal mentioned in the comment in your main function. Example.
|
73,767,924
| 73,772,059
|
C++ std::make_unique usage
|
This is the first time I am trying to use std::unique_ptr but I am getting an access violation
when using std::make_unique with large size .
what is the difference in this case and is it possible to catch this type of exceptions in c++ ?
void SmartPointerfunction(std::unique_ptr<int>&Mem, int Size)
{
try
{
/*declare smart pointer */
//Mem = std::unique_ptr<int>(new int[Size]); // using new (No crash)
Mem = std::make_unique<int>(Size); // using make_unique (crash when Size = 10000!!)
/*set values*/
for (int k = 0; k < Size; k++)
{
Mem.get()[k] = k;
}
}
catch(std::exception& e)
{
std::cout << "Exception :" << e.what() << std::endl;
}
}
|
When you invoke std::make_unique<int>(Size), what you actually do is allocate memory for a single int (commonly 4 bytes) and initialize it with the value Size. So the allocation holds exactly one int, and Mem.get()[k] for any k > 0 touches an address out of bounds.
But out of bounds doesn't mean your program crashes immediately. As you may know, the memory addresses we touch in a program are virtual memory. Let's look at the layout of a virtual address space.
You can see the memory addresses are divided into several segments (stack, heap, bss, etc.). When we request dynamic memory, the returned address will usually be located in the heap segment (I say usually because sometimes the allocator will use mmap, in which case the address is located in a memory-mapped area between the stack and the heap, not marked on the diagram).
The dynamic memory blocks we obtain are not contiguous, but the heap itself is a contiguous segment; from the OS's point of view, any access within the heap segment is legal. Managing this is exactly what the allocator does. The allocator manages the heap, dividing it into blocks, some marked "used" and some marked "free". When we request dynamic memory, the allocator looks for a free block that can hold the size we need (splitting it into a smaller new block if the free block is much larger than needed), marks it as used, and returns its address. If no such free block can be found, the allocator calls sbrk to grow the heap.
Even if we access an address that is out of range, as long as it is within the heap, the OS regards the access as legal. It might overwrite data in some used blocks, or write data into a free block, but nothing stops the program. However, if the address we try to access is outside the heap (for example, an address greater than the program break, or one located in the bss segment), the OS treats it as a segmentation fault and the program crashes immediately.
So your program crashing is not really about the size of the parameter passed to std::make_unique<int>. It just so happens that when you specify 10000, the addresses you access fall outside the segment.
|
73,768,316
| 73,768,954
|
Qt zombies: Calling functions containing new with QTimer
|
I'm trying to display a dynamically updating table using QTimer with a refresh rate of 1s. I'm calling the function create_table, which in turn calls create_title. The code runs without complaints, but tracking memory usage one finds that it keeps creeping up steadily until the program becomes unresponsive, or crashes.
void MainWindow::TableDisplay(QWidget *tab) {
auto timer = new QTimer(this);
connect(timer, &QTimer::timeout, [refreshed_table_data, tab, this]() {
auto tableView = create_table(tab, refreshed_table_data, "My table");
});
timer->start(1000);
}
The culprits seem to be the various new statements (indicated by comments in my code) which seem to create a fresh zombies/leak every time QTimer runs a loop.
auto create_title(const std::string &title, QWidget * widget) {
auto tableTitle = new QLabel(title.c_str(), widget); // <-- new
tableTitle->setAlignment(Qt::AlignLeft);
tableTitle->show();
}
auto create_table(QWidget * tab,
const std::vector<std::string> &v,
const std::string &title) {
auto tableView = new QTableView(tab); // <-- new
auto model = new MyModel(v, tab); // <-- new
tableView->setModel(model);
tableView->setStyle(new QProxyStyle(QStyleFactory::create("Fusion"))); // <-- new
tableView->setHorizontalHeader(new QHeaderView(Qt::Horizontal)); // <-- new
create_title(title, tab);
tableView->show();
return tableView;
}
One solution could be to have the pointers initialized by new defined as member variables of MainWindow so that they are defined only once. But if I'm calling several such timed functions, it would quickly become cumbersome, even more so when one considers there are rendering elements such as new QProxyStyle(QStyleFactory::create("Fusion")) etc. which would then need to be accorded the status of member variables.
I tried wrapping the new statements with QScopePointers but unfortunately the instances get destroyed right after create_table goes out its scope, resulting in no rendering of the table graphics.
What's the right way of writing create_table without creating zombies?
|
For the question itself: Your createTable method could call QTimer::singleShot to schedule an update one second later.
auto create_table(QWidget * tab,
const std::vector<std::string> &v,
const std::string &title) {
...
QTimer::singleShot(1000 /*ms*/, this /*context*/, [this, tab, v, title]() {
create_table(tab, v, title);
});
return tableView;
}
However, as Igor noted, this whole approach seems backwards. The model should update itself, preferably event-based but if it has to poll external resources, it may have to use a timer itself.
Setting an entirely new model for a view is very expensive (because the view has to create everything from scratch) and can have annoying side-effects on the GUI side.
|
73,768,399
| 73,776,367
|
Why does C++23 ranges::to not constrain the container type C to be a range?
|
C++23 introduced the very powerful ranges::to for constructing an object (usually a container) from a range, with the following definition ([range.utility.conv.to]):
template<class C, input_range R, class... Args> requires (!view<C>)
constexpr C to(R&& r, Args&&... args);
Note that it only constrains the template parameter C not to be a view, that is, C may not even be a range.
However, its implementation uses range_value_t<C> to obtain the element type of C, and range_value_t is only well-formed when C is a range, so this effectively requires C to be at least a range (just as the template parameter R is constrained to model input_range).
So, why is ranges::to so loosely constrained on the template parameter C?
I noticed that the R3 version of the paper used to constrain C to be input_range, which was apparently reasonable since input_range guaranteed that the range_value_t to be well-formed, but in R4 this constraint was removed. And I didn't find any comments about this change.
So, what are the considerations for removing the constraint that C must be input_range?
Is there a practical example of the benefits of this constraint relaxation?
|
This is a problem with the wording that we'll need to address, I'll open an issue later today. This is LWG 3785.
So, what are the considerations for removing the constraint that C must be input_range?
The goal of ranges::to is to collect a range into... something. But it need not be an actual range. Just something which consumes all the elements. Of course, the most common usage will be an actual container type, and the most common actual container type will be std::vector.
There are other interesting use-cases though, that there really isn't much reason to reject.
Let's say we have a range of std::expected<int, std::exception_ptr>, call it results. Maybe we ran a bunch of computations and maybe some of them failed. I could collect that into a std::vector<std::expected<int, std::exception_ptr>>, and that might be useful. But there's another alternative: I could collect it into a std::expected<std::vector<int>, std::exception_ptr>. That is, if all of the computations succeeded, I get as a value type all of the results. However, if any of them failed, I get the first error. That's a very useful thing to be able to do, that is very much conceptually in line with what ranges::to is doing to its input - so this could support:
auto processed = results | ranges::to<std::expected>();
if (not processed) {
std::rethrow_exception(processed.error());
}
std::vector<int> values = std::move(processed).value();
// go do more stuff
This is quite useful to support - especially since it doesn't really cost anything to not support it. We just have to not prematurely reject it.
|
73,768,552
| 73,771,523
|
Why does RegOpenKeyEx return "0" value?
|
I use CodeBlocks version 20.03 (x86) as IDE.
Here are my codes:
#include "iostream"
#include "windows.h"
using namespace std;
int main()
{
HKEY hkRegedit;
long longHataKodu;
longHataKodu = RegOpenKeyExA(HKEY_LOCAL_MACHINE, "Software\\Microsoft\\Chkdsk", 0, KEY_ALL_ACCESS | KEY_WOW64_64KEY, &hkRegedit);
cout << longHataKodu;
getchar();
return 0;
}
But the problem is that even though I run my program without administrator privileges, it returns 0.
Here is the screenshot of my program:
How is that possible, even though I have specified HKEY_LOCAL_MACHINE and KEY_ALL_ACCESS?
As far as I know, value 0 means ERROR_SUCCESS.
|
Registry virtualization is a compatibility shim for old applications that redirects writes to HKEY_USERS\<User SID>_Classes\VirtualStore instead of causing access denied errors.
To turn it off, add a manifest to your application with a requestedExecutionLevel node to mark yourself Vista/UAC compatible.
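A sketch of such a manifest (assuming a typical Win32 application manifest; element names follow Microsoft's manifest schema): the presence of the requestedExecutionLevel node marks the application UAC-aware, which disables registry virtualization.

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
    <security>
      <requestedPrivileges>
        <!-- "asInvoker": run with the caller's token; the presence of
             this node marks the app UAC-aware and turns virtualization off -->
        <requestedExecutionLevel level="asInvoker" uiAccess="false"/>
      </requestedPrivileges>
    </security>
  </trustInfo>
</assembly>
```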
|
73,769,115
| 73,769,435
|
GCC C++ GTK ENTRY SET TEXT with GLADE
|
I have a small problem: I have used GTK with other languages, but being new to C++ I cannot find tutorials that let me use interfaces created with Glade; the ones I have found explain everything else.
I tried to write code in C++ where I insert text into a GtkEntry, but nothing appears even when following the instructions, and with the alternatives I tried, errors are shown during compilation.
I need to use Glade because the software will have to work in many languages and the Glade file is customizable.
gtk_entry.cpp
#include <stdio.h>
#include <gtk/gtk.h>
#include <gmodule.h>
GtkWidget *MyEntry;
int main (int argc, char *argv[]){
GtkBuilder *XML;
GtkWidget *window;
gtk_init(&argc, &argv);
XML = gtk_builder_new();
gtk_builder_add_from_file (XML, "gtk_entry.glade", NULL);
window = GTK_WIDGET(gtk_builder_get_object(XML, "MyWindow"));
MyEntry = GTK_WIDGET(gtk_builder_get_object(XML, "MyEntry"));
gtk_builder_connect_signals(XML, NULL);
g_object_unref(XML);
gtk_widget_show(window);
gtk_main();
return 0;
}
extern "C" G_MODULE_EXPORT void on_Write_clicked(GtkButton *Write, gpointer user_data)
{
const gchar *TextVar;
printf("\n Verify Button Write");//Its work correctly
TextVar = "Verify Button Write"; // Work too
//doesnt appar nothing
void gtk_entry_set_text (GtkEntry* MyEntry, const gchar *TextVar);
}
extern "C" G_MODULE_EXPORT void on_Exit_clicked(GtkButton *Exit, gpointer user_data)
{
gtk_main_quit();
}
extern "C" G_MODULE_EXPORT void on_MyWindow_destroy(GtkWidget *MyWindow, gpointer user_data)
{
gtk_main_quit();
}
gtk_entry.glade
<?xml version="1.0" encoding="UTF-8"?>
<!-- Generated with glade 3.40.0 -->
<interface>
<requires lib="gtk+" version="3.24"/>
<object class="GtkWindow" id="MyWindow">
<property name="can-focus">False</property>
<signal name="destroy" handler="on_MyWindow_destroy" swapped="no"/>
<child>
<object class="GtkFixed">
<property name="visible">True</property>
<property name="can-focus">False</property>
<child>
<object class="GtkEntry" id="MyEntry">
<property name="width-request">100</property>
<property name="height-request">39</property>
<property name="visible">True</property>
<property name="can-focus">True</property>
</object>
<packing>
<property name="x">56</property>
<property name="y">38</property>
</packing>
</child>
<child>
<object class="GtkButton" id="Write">
<property name="label" translatable="yes">Write Text</property>
<property name="width-request">100</property>
<property name="height-request">80</property>
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="receives-default">True</property>
<signal name="clicked" handler="on_Write_clicked" swapped="no"/>
</object>
<packing>
<property name="x">60</property>
<property name="y">100</property>
</packing>
</child>
<child>
<object class="GtkButton" id="Exit">
<property name="label" translatable="yes">Exit</property>
<property name="width-request">100</property>
<property name="height-request">80</property>
<property name="visible">True</property>
<property name="can-focus">True</property>
<property name="receives-default">True</property>
<signal name="clicked" handler="on_Exit_clicked" swapped="no"/>
</object>
<packing>
<property name="x">200</property>
<property name="y">100</property>
</packing>
</child>
</object>
</child>
</object>
</interface>
|
Change the following line
//doesnt appar nothing
void gtk_entry_set_text (GtkEntry* MyEntry, const gchar *TextVar);
with
gtk_entry_set_text(GTK_ENTRY(MyEntry), TextVar);
And make sure you are compiling with -rdynamic flag.
|
73,770,170
| 73,770,709
|
Is std::istream_iterator<int> trivially copy constuctible?
|
Why does this program compile on MSVC but not on GCC and Clang? godbolt
#include <iterator>
#include <type_traits>
static_assert(std::is_trivially_copy_constructible_v<std::istream_iterator<int>>);
According to [istream.iterator.cons]/6, the constructor must be trivial. Does it relate to P0738 which moves the wording from Effects to Remarks?
|
Seems like there is a difference in implementation. The gcc library has a copy constructor
istream_iterator(const istream_iterator& __obj)
: _M_stream(__obj._M_stream), _M_value(__obj._M_value),
_M_ok(__obj._M_ok)
{ }
which is non-trivial.
MSVC does not, and apparently relies on a compiler generated one.
|
73,770,185
| 73,770,357
|
Is std::vector::end()[-1] undefined behavior?
|
std::vector v { 1, 2, 3, 4, };
v.end()[-1] is able to access the element 4.
For a long time (about 2 years) I have used *(v.rbegin() + 0)...
Will it cause any problems if I only use indexes in the range [-static_cast<int>(v.size()), -1]?
|
You are allowed to use the subscript operator on a LegacyRandomAccessIterator.
The expression i[n] is convertible to reference and equivalent to *(i + n),
where n is of type std::iterator_traits<It>::difference_type (which is signed).
So, as long as you stay within the valid bounds, it's fine.
int main() {
std::vector v { 1, 2, 3, 4, };
for(auto i = -static_cast<std::ptrdiff_t>(v.size()); i < 0; ++i) {
std::cout << v.end()[i] << '\n';
}
}
The cast may overflow though.
|
73,770,870
| 73,771,320
|
Why do compilers now accept to call str() member on a returned std::ostream& from std::stringstream::operator<<()?
|
Consider the following line:
std::string s = (std::stringstream() << "foo").str();
This should not compile because std::stringstream::operator<<() is inherited by std::ostream and returns a std::ostream& which does not have an str() member.
It seems the main compilers are now accepting this code where they didn't in the past. What standard change happened to make this compile?
I made some tests with GCC, Clang and MSVC and I could find the version where the change happened:
Compiler
Rejects until (version)
Accepts from (version)
GCC
11.1
11.2
Clang
12.0.1
13.0.0
MSVC
v19.14
v19.15
You can find the test here
|
They all added the rvalue overload (see here) at around the same time.
The rvalue overload was added to C++11 and returns the same type of stream as its left-hand operand.
As has been noted in the comments, the reason for it being added to the compilers seemingly after a whole decade is that it was added to C++11 retroactively and very recently, probably after being approved for inclusion in C++20.
I'm turning this into a community wiki in case anyone has the inclination and patience to search for the reasoning behind the retroactive addition and amend the answer.
|
73,771,368
| 73,774,169
|
Why doesn't boost::asio::ip::udp::socket::receive_from throw interruption exception in Windows?
|
volatile std::sig_atomic_t running = true;
int main()
{
boost::asio::thread_pool tpool;
boost::asio::signal_set signals(tpool, SIGINT, SIGTERM);
signals.async_wait([](auto && err, int) { if (!err) running = false; });
while(running)
{
std::array<std::uint8_t, 1024> data;
socket.receive_from(boost::asio::buffer(data), ....); // (1)
// calc(data);
}
return 0;
}
If my code is blocked at line (1) on Linux and I raise the signal, for example with htop, then line (1) throws an exception about the interruption, but on Windows it doesn't. The problem is that I don't know how to exit the application.
What do I need to do so that my program works the same in both OSs? Thanks.
Use Windows 10 (msvc 17), Debian 11 (gcc-9), Boost 1.78.
|
Regardless of the question how you "raise the signal" on Windows there's the basic problem that you're relying on OS specifics to cancel a synchronous operation.
Cancellation is an ASIO feature, but only for asynchronous operations. So, consider:
signals.async_wait([&socket](auto&& err, int) {
if (!err) {
socket.cancel();
}
});
Simplifying without a thread_pool gives e.g.:
Live On Coliru
#define BOOST_ASIO_ENABLE_HANDLER_TRACKING 1
#include <boost/asio.hpp>
namespace asio = boost::asio;
using asio::ip::udp;
using boost::system::error_code;
struct Program {
Program(asio::any_io_executor executor)
: signals_{executor, SIGINT, SIGTERM}
, socket_{executor} //
{
signals_.async_wait([this](error_code ec, int) {
if (!ec) {
socket_.cancel();
}
});
socket_.open(udp::v4());
socket_.bind({{}, 4444});
receive_loop();
}
private:
asio::signal_set signals_;
udp::socket socket_;
std::array<std::uint8_t, 1024> data_;
udp::endpoint ep_;
void receive_loop() {
socket_.async_receive_from( //
asio::buffer(data_), ep_, [this](error_code ec, size_t) {
if (!ec)
receive_loop();
});
}
};
int main() {
asio::io_context ioc;
Program app(ioc.get_executor());
using namespace std::chrono_literals;
ioc.run_for(10s); // for COLIRU
}
Prints (on coliru):
@asio|1663593973.457548|0*1|signal_set@0x7ffe0b639998.async_wait
@asio|1663593973.457687|0*2|socket@0x7ffe0b6399f0.async_receive_from
@asio|1663593973.457700|.2|non_blocking_recvfrom,ec=system:11,bytes_transferred=0
@asio|1663593974.467205|.2|non_blocking_recvfrom,ec=system:0,bytes_transferred=13
@asio|1663593974.467252|>2|ec=system:0,bytes_transferred=13
@asio|1663593974.467265|2*3|socket@0x7ffe0b6399f0.async_receive_from
@asio|1663593974.467279|.3|non_blocking_recvfrom,ec=system:11,bytes_transferred=0
@asio|1663593974.467291|<2|
@asio|1663593975.481800|.3|non_blocking_recvfrom,ec=system:0,bytes_transferred=13
@asio|1663593975.481842|>3|ec=system:0,bytes_transferred=13
@asio|1663593975.481854|3*4|socket@0x7ffe0b6399f0.async_receive_from
@asio|1663593975.481868|.4|non_blocking_recvfrom,ec=system:11,bytes_transferred=0
@asio|1663593975.481878|<3|
@asio|1663593976.494097|.4|non_blocking_recvfrom,ec=system:0,bytes_transferred=13
@asio|1663593976.494138|>4|ec=system:0,bytes_transferred=13
@asio|1663593976.494150|4*5|socket@0x7ffe0b6399f0.async_receive_from
@asio|1663593976.494164|.5|non_blocking_recvfrom,ec=system:11,bytes_transferred=0
@asio|1663593976.494176|<4|
@asio|1663593976.495085|>1|ec=system:0,signal_number=2
@asio|1663593976.495119|1|socket@0x7ffe0b6399f0.cancel
@asio|1663593976.495129|<1|
@asio|1663593976.495151|>5|ec=system:125,bytes_transferred=0
@asio|1663593976.495162|<5|
@asio|1663593976.495184|0|socket@0x7ffe0b6399f0.close
@asio|1663593976.495244|0|signal_set@0x7ffe0b639998.cancel
So that's 3 successful receives, followed by a signal 2 (INT) and cancellation, which results in ec=125 (asio::error::operation_aborted) and shutdown.
Multi-threading
There's likely no gain for using multiple threads, but if you do, use a strand to synchronize access to the IO objects:
asio::thread_pool ioc;
Program app(make_strand(ioc));
|
73,771,983
| 73,772,519
|
circular dependent incomplete templated types
|
I am trying to create a graph data-structure in an adjacency-map style, however I am running into some problems with circularly dependent structs. I have tried doing a forward declaration, but it doesn't seem to work. I keep getting invalid use of incomplete type node_collection<int>, which makes sense since I am forward declaring my types, but I would still like to get this working somehow.
#include <unordered_map>
#include <vector>
#include <iostream>
template<typename T> struct edge;
template<typename T> struct node;
template<typename T> using edge_collection = std::unordered_map<std::string, edge<T>>;
template<typename T> using edge_refference = typename edge_collection<T>::iterator;
template<typename T> using node_collection = std::unordered_map<std::string, node<T>>;
template<typename T> using node_refference = typename node_collection<T>::iterator;
template<typename T>
struct node {
T data;
std::vector<edge_refference<T>> outgoing_edges{};
std::vector<edge_refference<T>> ingoing_edges{};
};
template<typename T>
struct edge {
T data;
node_refference<T> source;
node_refference<T> target;
};
template<typename T>
struct graph {
node_collection<T> nodes;
edge_collection<T> edges;
};
int main() {
graph<int> g{};
}
g++ main.cpp
What is really odd is that this works just fine when using clang on macos! I am only getting this issue on my linux PC.
|
The program is not guaranteed to compile, although the specification of the language is somewhat lacking in this area.
Your variable in main requires instantiation of graph<int>, which in turn requires instantiation of std::unordered_map<std::string, node<T>>.
std::unordered_map does not promise that it can be instantiated with incomplete types. Therefore instantiation of node<T> may also happen at this point.
Instantiation of node<T> requires in turn that edge_collection<T> be complete in order to access its ::iterator member to determine the type of node<T>'s member. But edge_collection<T> for the same reason as above may require instantiation of edge<T>. edge<T> in turn requires instantiation of node_collection<T> which is just std::unordered_map<std::string, node<T>> so that you now have circular dependence of the instantiations.
So you are just lucky if it happens to compile.
In practice compilers don't actually require that a class be complete when a member is accessed with :: as the standard requires, but allow lookup of any member that has been declared before the member declaration which caused the implicit instantiation requiring the lookup. However even with that more permissive behavior you would need to be lucky that iterator is declared in std::unordered_map before any use of the value type that would require it to be complete.
The only containers that the standard currently guarantees can be used this way with incomplete types are std::vector, std::list and std::forward_list (since C++17, none before).
Specifically for libstdc++ it seems that there is some intention to allow std::unordered_map to be usable with incomplete types, based on this commit, which is likely the reason that it works in GCC 12, but not GCC 11. If incomplete types were supported by std::unordered_map, then your program would be fine.
|
73,772,122
| 73,772,342
|
An efficient algorithm to sample non-duplicate random elements from an array
|
I'm looking for an algorithm to pick M random elements from a given array. The prerequisites are:
the sampled elements must be unique,
the array to sample from may contain duplicates,
the array to sample from is not necessarily sorted.
This is what I've managed to come up with. Here I'm also making an assumption that the amount of unique elements in the array is greater (or equal) than M.
#include <random>
#include <vector>
#include <algorithm>
#include <iostream>
const std::vector<int> sample(const std::vector<int>& input, size_t n) {
std::random_device rd;
std::mt19937 engine(rd());
std::uniform_int_distribution<int> dist(0, input.size() - 1);
std::vector<int> result;
result.reserve(n);
size_t id;
do {
id = dist(engine);
if (std::find(result.begin(), result.end(), input[id]) == result.end())
result.push_back(input[id]);
} while (result.size() < n);
return result;
}
int main() {
std::vector<int> input{0, 0, 1, 1, 2, 2, 3, 3, 4, 4};
std::vector<int> result = sample(input, 3);
for (const auto& item : result)
std::cout << item << ' ';
std::cout << std::endl;
}
This algorithm does not seem to be the best. Is there a more efficient (with less time complexity) algorithm to solve this task? It would be good if this algorithm could also assert the amount of unique elements in the input array is not less than M (or pick as many unique elements as possible if this is not the case).
Possible solution
As MSalters suggested, I use std::unordered_set to remove duplicates and std::shuffle to shuffle elements in a vector constructed from the set. Then I resize the vector and return it.
const std::vector<int> sample(const std::vector<int>& input, size_t M) {
std::unordered_set<int> rem_dups(input.begin(), input.end());
if (rem_dups.size() < M) M = rem_dups.size();
std::vector<int> result(rem_dups.begin(), rem_dups.end());
std::mt19937 g(std::random_device{}());
std::shuffle(result.begin(), result.end(), g);
result.resize(M);
return result;
}
|
The comments already note the use of std::set. The additional request to check for M unique elements in the input make that a bit more complicated. Here's an alternative implementation:
Put all inputs in a std::set or std::unordered_set. This removes duplicates.
Copy all elements to the return vector
If that has more than M elements, std::shuffle it and resize it to M elements.
Return it.
|
73,772,424
| 73,774,817
|
Refreshing QTableView based on data refreshed periodically via QTimer
|
I'm trying to refresh a QTableView based on data from a method get_data which is called periodically by QTimer as below.
Once I get the data, I update it in my model (via a member function refresh) and then call refresh_table to hopefully update the tableView with the current data. However, the call to tableView->update() (I've also tried tableView->repaint()) does not work as intended, and tableView does not update - staying put at its first-call value. Note that my data is liable to change completely, and it's not simply a matter of inserting extra rows.
void MainWindow::DataDisplay(QWidget *tab) {
auto tableView = create_table(tab,...);
auto timer = new QTimer(this);
auto model = new MyModel();
connect(timer, &QTimer::timeout, [tableView, model, this]() {
const auto data = get_data();
model->refresh(data);
refresh_table(model, tableView);
});
timer->start(1000);
}
auto refresh_table(MyModel *model,
QTableView *tableView) {
tableView->setModel(model);
tableView->update(); // does not work
}
What's the canonical way of refreshing one's model in Qt such that the associated TableView is updated?
|
The canonical way is for your model to emit a dataChanged() signal. Any associated QTableViews will receive that signal and respond by re-querying the model for new values and updating the associated cells of their table.
|
73,774,029
| 73,774,135
|
Convert if with init-statement (c++17) to c++14
|
This works only for c++17. Is there a way to convert this to c++14?
if (auto user = static_cast<CUser*>(pMover); user && !user->UserState())
return;
|
You have to split the if into 2 statements.
In order to limit the scope of user to the if statement, you can enclose it with {...}:
{
auto user = static_cast<CUser*>(pMover);
if (user && !user->UserState())
return;
}
|
73,774,540
| 73,774,958
|
Need to call some code in an arbitrary Windows process (externally) at some time interval. Is there any OS API call for it?
|
I want to test my idea, wherein I execute some code in the context of another process at some interval. What API call, kernel functionality, or technique should I look into to execute code in another process at some interval?
Seems like I need to halt the process and modify the instruction pointer value before continuing it, if that’s remotely possible. Alternatively, I could hook into the kernel code which schedules time on the CPU for each process, and run the code each time the next time slot happens for a process. But PatchGuard probably prevents that.
This time interval doesn’t need to be precise.
|
The wording of the question tells me you're fairly new to programming. A remote process doesn't have AN instruction pointer; it typically has many, one per executing thread. That's why the normal approach would be to not mess with any of those instruction pointers. Instead, you create a new thread in the remote process with CreateRemoteThreadEx.
Since this thread is under your control, it can just run an infinite loop alternating between Sleep and the function you want to call.
|
73,776,372
| 73,781,421
|
how to solve cpp library confliction within anaconda?
|
I tried to install lightgbm with GPU support in anaconda. I used pip in anaconda directly with --install-option='--gpu'. It built successfully, and lib_lightgbm.so links to the libstdc++.so under /lib64.
Since anaconda has its own libstdc++.so under anaconda3/lib, which is different from the one under /lib64, when I try to import lightgbm I get an error saying
anaconda3/lib/libstdc++.so.6: version GLIBCXX_3.4.29 not found (required by lib_lightgbm.so)
What's the recommended way to keep libstdc++.so in anaconda consistent with lightgbm (or any other library) built outside anaconda?
Do I need to find the C++ compiler information used by anaconda? Where can I find such information?
|
As described in microsoft/LightGBM#5106, when building lightgbm from source in a conda environment on Linux, the most reliable way to avoid linking to a libstdc++ that uses a newer GLIBCXX version than the one shipped by conda is to use conda's C/C++ compilers.
Like the following.
# create conda env with lightgbm's dependencies
conda create \
--name lightgbm-env \
--yes \
-c conda-forge \
python=3.10 \
numpy \
scikit-learn \
scipy
# install `conda`'s C++ build tools into that environment
conda install \
--name lightgbm-env \
--yes \
-c conda-forge \
python=3.10 \
cmake \
gcc_linux-64 \
gxx_linux-64
# clone LightGBM
git clone \
--recursive \
--branch stable \
https://github.com/microsoft/LightGBM.git
# activate the conda env, so that conda's compilers are found first
source activate lightgbm-env
# install lightgbm with `pip`
cd LightGBM/python-package
pip install .
NOTE: This answer applies to lightgbm versions <=3.3.2.99.
|
73,776,523
| 73,776,849
|
Setting a test-wide tolerance with BOOST_DATA_TEST_CASE_F
|
Within a BOOST_FIXTURE_TEST_CASE, you can set a tolerance for all BOOST_TEST calls like so:
BOOST_FIXTURE_TEST_CASE(Testname, SomeFixture, *utf::tolerance(.01))
However, I cannot find a way to make this work with BOOST_DATA_TEST_CASE_F.
From Boost:
BOOST_DATA_TEST_CASE_F(my_fixture, test_case_name, dataset, var1, var2..., varN)
I've tried the obvious
BOOST_DATA_TEST_CASE_F(my_fixture, test_case_name, dataset, var1, var2..., varN, *utf::tolerance(.01))
but to no avail. It seems to me like it's just not supported.
Does anyone have any ideas on how to replicate similar behavior without having to specify a tolerance within every single BOOST_TEST call within a BOOST_DATA_TEST_CASE_F?
I am using version 1.72.
|
I don't see a way to do that, but you can set the tolerance in a BOOST_AUTO_TEST_SUITE(), and you can have as many of those suites as you want. So:
BOOST_AUTO_TEST_SUITE(suite1, *utf::tolerance(.01))
BOOST_DATA_TEST_CASE_F(my_fixture, test_case_name, dataset, var1, var2..., varN)
BOOST_AUTO_TEST_SUITE_END()
Repeat as necessary.
|
73,777,055
| 73,785,383
|
Failing to read from std::cin after pipe() with dup2()
|
I was trying to read output from a child process using a pipe(). I used dup2() to get the pipe's output through stdin, and cin.get(c) in a loop to get the child's output. The first time I do this, everything works fine. After that, however, every time I try to do this again, cin.get() returns EOF.
I thought that perhaps cin.clear() would help, since cin.get() is setting the failbit and eofbit when it reaches EOF... This does remove the flags, but the main problem remains: cin.get() does not read anything.
Here is a simplified case that shows the problem:
#include <iostream>
#include <unistd.h>
int main()
{
int fd[2];
pipe(fd);
int pid = fork();
//FIRST TIME
if (pid == 0)
{
close(fd[0]);
dup2(fd[1], 1);
close(fd[1]);
std::cout << "PROCESS 1 OUTPUT" << std::endl;
exit(0);
}
wait(NULL);
close(fd[1]);
int original = dup(0);
dup2(fd[0], 0);
close(fd[0]);
if (std::cin.fail())
std::cout << "FAIL1" << std::endl;
char c;
std::string output1;
while (std::cin.get(c))
{
output1 += c;
}
std::cin.clear(); // This helps with "FAIL"s, but doesn't fix the problem :(
if (std::cin.fail())
std::cout << "FAIL2" << std::endl;
dup2(original, 0);
close(original);
std::cout << "output: " << output1 << std::endl;
//SECOND TIME
int fd2[2];
pipe(fd2);
int pid2 = fork();
if (pid2 == 0)
{
close(fd2[0]);
dup2(fd2[1], 1);
close(fd2[1]);
std::cout << "PROCESS 2 OUTPUT" << std::endl;
exit(0);
}
wait(NULL);
close(fd2[1]);
int original2 = dup(0);
dup2(fd2[0], 0);
close(fd2[0]);
if (std::cin.fail())
std::cout << "FAIL3" << std::endl;
char c2;
std::string output2;
while (std::cin.get(c2))
{
output2 += c2;
}
if (std::cin.fail())
std::cout << "FAIL4" << std::endl;
dup2(original2, 0);
close(original2);
std::cout << "output2: " << output2 << std::endl;
return 0;
}
This program outputs the following:
output: PROCESS 1 OUTPUT
FAIL4
output2:
What I expected to get was the two outputs and no "FAIL". Do you have any idea why this happens?
I found an alternative solution using read() and reading straight from the pipe's fd[0], however I would love to understand why this is happening when I use cin.
|
As William Pursell mentioned, using both clear() and clearerr() will solve your problem.
After reading around for a bit, I came to the conclusion that this is due to the fact that std::cin and stdin are two different things, and therefore you need to reset their error flags separately:
std::cin is an object of class istream, and its internal
error state flags are set using std::cin.clear().
stdin is actually a FILE *, and its error flags and the EOF indicator are reset using std::clearerr(stdin).
std::cin is associated with the standard C input stream stdin
(according to cppreference.com), but I guess you still need to reset both separately.
These StackOverflow questions and answers might also help:
How would one generalise clearerr() under C++?
cin.clear() leave EOF in stream?
|
73,777,190
| 73,778,335
|
How to join two vector<vector<string>> in c++ based on a common column
|
So what I want to do is essentially an inner join of two tables in C++. However, the only guide I could find is this one from GeeksforGeeks: https://www.geeksforgeeks.org/joining-tables-using-multimaps/
An example below is as follows:
// First Column Name: Numbers
// Second Column Name: Alphabets
vector<vector<string>> v1 = { vector<string> { "Numbers", "Alphabets" }
vector<string> { "1", "a" },
vector<string> { "2", "b" },
vector<string> { "3", "a" } }
// First Column Name: Fruits
// Second Column Name: Alphabets
vector<vector<string>> v2 = { vector<string> { "Fruits", "Alphabets" }
vector<string> { "apple", "a" },
vector<string> { "pear", "a" },
vector<string> { "peach", "c" },
vector<string> { "orange", "b" } }
So if I want to do a inner join via alphabets, (like the solution in the link, I can pass two arguments which is the index of the columns I want to compare) and get the following output:
vector<vector<string>> result = { vector<string> { "Numbers", "Alphabets", "Fruits" }
vector<string> { "1", "a", "apple" },
vector<string> { "1", "a", "pear" },
vector<string> { "3", "a", "apple" }
vector<string> { "3", "a", "pear" },
vector<string> { "2", "b", "orange" }, }
As you can see, I currently also need the column names in the final table, in the sense that I need to know that column 1 refers to Numbers, column 2 is Alphabets, and column 3 is Fruits, as I would need to extract out the data in a specific column later in my code, kind of like a SQL "get Numbers from result" which I can easily do with a for loop.
How do I do so? I've been stuck on this since I'm sure that when building the new table I would need to omit the first row of headers for both tables when doing the comparison, which makes things complicated if there are strings of the same value.
EDIT: My data does not necessarily have to be in that format of vector<vector>. How I do it is I have an API that gives me a vector once I pass it my column Name, so I can easily store this in an unordered_map as well. I modeled into a "table-like" structure after that solution.
My original code looked something like this:
joinTwoTable(vector<vector<string>>& v1, int columnIndex1,
vector<vector<string>>& v2, int columnIndex2)
{
vector<vector<string>> result;
vector<string> header;
// Add in the header_row
for (string s : v1[0]) {
current.push_back(s);
}
for (string s2 : v2[0]) {
// Skip the common column again
if (s2 == v2[0][columnIndex2]) {
continue;
}
current.push_back(s2);
}
for (vector<string> v : v1) {
for (vector<string> v2 : v2) {
if (v[columnIndex1] == v2[columnIndex2]) {
vector<string> current;
for (string s : v) {
current.push_back(s);
}
for (string s2 : v2) {
// Skip the common column again
if (s2 == v2[columnIndex2]) {
continue;
}
current.push_back(s2);
}
result.push_back(current);
}
}
}
}
But the issue is also because here I can only inner join on one column, but what if I need to inner join on every common column names?
Thank you in advance!
|
In your code, you are actually computing the column header names twice. First you push them manually, then the main loop for (vector<string> v : v1) starts from the first row, which again contains the column names.
Instead, you can use a normal for loop and start from the second row.
Another problem is here
for (string s2 : v2) {
// Skip the common column again
if (s2 == v2[columnIndex2]) {
continue;
}
current.push_back(s2);
}
Here you are assuming that no other row item has the same string value as the row item of the common column, which is not a good assumption at all. It happened to work for the headings because column names have to be unique in SQL, but it does not hold for data rows.
Here is my try
vector<vector<string>> joinTwoTable(vector<vector<string>>& v1, int columnIndex1, vector<vector<string>>& v2, int columnIndex2)
{
vector<vector<string>> result;
vector<string> columns;
string commonColumn = v1[0][columnIndex1];
for(string s: v1[0]) {
columns.push_back(s);
}
for(string s: v2[0]) {
if(s == commonColumn) continue;
columns.push_back(s);
}
// push headers
result.push_back(columns);
for(int i=1; i!=v1.size(); i++) {
string joiner = v1[i][columnIndex1];
for(int j=1; j!=v2.size(); j++) {
if(v2[j][columnIndex2] == joiner) {
// push a new joined row
vector<string> temp;
for(string s1: v1[i]) {
temp.push_back(s1);
}
for(auto k=0; k!=v2[j].size(); k++) {
if(k==columnIndex2) continue;
temp.push_back(v2[j][k]);
}
result.push_back(temp);
}
}
}
return result;
}
|
73,777,244
| 73,777,617
|
Why does the bit-width of the mantissa of a floating point represent twice as many numbers compared to an int?
|
I am told that a double in C++ has a mantissa that is capable of safely and accurately representing integers in [-(2^53 − 1), 2^53 − 1].
How is this possible when the mantissa is only 52 bits? And why is it that an int16 is only capable of having a range of [-32,768, +32,767], i.e. [-2^15, 2^15 − 1], when the same trick could seemingly be used to let an int16 represent twice as many numbers?
|
The format of the double (64 bits) is as follows:
1 bit: sign
11 bits: exponent
52 bits: mantissa
We only need to look at the positive integers we can represent with the mantissa, since the sign bit takes care of the negative integers for us.
Naively, with 52 bits, we can store unsigned integers from 0 to 2^52 - 1. With a sign bit, this lets us store from -(2^52 - 1) to 2^52 - 1.
However, we have a little trick we can use. We say that the first digit of our integer is always a 1, which gives us an extra bit to work with.
To see why this works, let's dig a little deeper.
Every positive integer will have at least one 1 in its binary representation. So, we shift the mantissa left or right until we get a 1 at the start, using the exponent. An example might help here:
9, represented as an unsigned int: 000...0001001 (dots representing more 0s).
Written another way: 1.001 * 2^3. (1.001 being in binary, not decimal.)
And, we'll agree to never use a 0 as the first bit. So even though we could write 9 as 0.1001 * 2^4 or 0.01001 * 2^5, we won't. We'll agree that when we write the numbers out in this format, we'll always make sure we use the exponent to "shift" bits over until we start with a 1.
So, the information we need to store to get 9 is as follows:
e: 3
i: 1.001
But if i always starts with 1, why bother writing it out every time? Let's just keep the following instead:
e: 3
i: 001
Using precisely this information, we can reconstruct the number as: 1.i * 2^e == 9.
When we get up to larger numbers, our "i" will be bigger, maybe up to 52 bits used, but we're actually storing 53 bits of information because of the leading 1 we always have.
Final Note: This is not quite what is stored in the exponent and mantissa of a double, I've simplified things to help explain, but hopefully this helps people understand where the missing bit is from. Also, this does not cover 0, which gets a special representation (the trick used above will not work for 0, since the usual representation of 0 doesn't have any 1s in it).
|
73,778,605
| 73,779,512
|
How to restrict access to the most direct base class, while still exposing the base-base classes?
|
I have a class hierarchy similar to this:
class Widget {
// a lot of virtual members
};
class Button : public Widget {
// new stuff + overrides
};
class MySuperButton : public Button {
// ...
};
And I would like to hide the fact that MySuperButton inherit from Button, but not from Widget. Basically making the inheritance from Button private while keeping all its base classes public.
Why?
I have a complicated widget built on Button, which needs to maintain some invariants involving its button state. Exposing it as a Button might allow something to modify the button directly, breaking these invariants.
Example:
MySuperButton button;
button.setText("Click me!") // calls Button::setText
// Oh no, MySuperButton set some special text which has now been overridden =(
What doesn't work
Making MySuperButton inherit from Button privately also hides Widget, preventing me from doing Widget things with my button.
Using access specifiers does not prevent MySuperButton to be converted into a Button.
So void doButtonStuff(Button& b); will accept a MySuperButton& just fine.
Using composition forces me to reimplement a bunch of stuff that Button already reimplements, just to forward it, which is a PITA. Especially since the actual hierarchy is rather deep and these are big classes.
Virtual inheritance doesn't seem to work as the base isn't visible (not sure why that would be a problem though). See Godbolt
I can not modify the Button or Widget classes as they are from an external library (Qt in this case). Also the actual code is somewhat more complicated, the provided hierarchy is for illustration.
Is there any way to do this, or do I need to accept that my widget can be broken if I am not careful ?
|
What you are asking is not really possible.
A possible Qt-specific solution is the following:
class MySuperButton : public Widget {
public:
MySuperButton () {
QVBoxLayout *layout = new QVBoxLayout;
layout->addWidget(button = new Button());
setLayout(layout);
}
private:
Button *button;
};
|
73,778,815
| 73,779,325
|
Understanding what typedef HRESULT(__stdcall* endScene)(IDirect3DDevice9* pDevice) does
|
I found this piece of code and I can't understand what it means or what it does:
typedef HRESULT(__stdcall* endScene)(IDirect3DDevice9* pDevice);
endScene pEndScene;
I would appreciate any clues
|
It is declaring an alias for a function pointer type, and then declaring a variable of that type.
It is declaring a type named endScene, which is a pointer to a function that takes in an IDirect3DDevice9* as input, uses the __stdcall calling convention, and returns an HRESULT as output.
And then it is declaring a variable named pEndScene of type endScene.
|
73,778,901
| 73,779,109
|
Type deduction in C++ for positional information
|
Let's assume I have a collection of functions following the following pattern:
template <typename T, typename ... Args>
T example(Args ... args, T* defaultValue);
Furthermore I have another function that operates on these collections:
template <auto Function, typename T, typename ... Args>
auto transform(Args ... args) {
// ...
T defaultValue /* = ... */;
auto result = Function(args... , &defaultValue);
// ...
return result;
}
For example the transform function can be used like
auto result = transform<ExampleFunction, ExampleT>(int a, int b, int c);
The compiler deduces the types for Args, but I still have to specify T explicitly. However, I know that T is always the last parameter type of Function.
Is there a way to "teach" the compiler this property so I can call the transform
function without specifying T?
auto result = transform<ExampleFunction>(int a, int b, int c);
Example
In vulkan there are these functions vkEnumerateInstanceExtensionProperties and such. You always have to call these functions twice to retrieve some data. I want to create a function listify which returns this data as vector.
template <auto Function, typename T, typename ... Args>
auto listify(Args ... args)
{
uint32_t size;
Function(args... , &size, nullptr);
std::vector<T> result(size);
Function(args... , &size, result.data());
return result;
}
I would like to call this function like so
auto result = listify<vkEnumerateInstanceExtensionProperties>(nullptr);
|
You can extract the parameter types from the function's type.
For example, you can do something like this:
#include <tuple>
template<typename F>
struct last_arg{};
template<typename ret, typename ...args>
struct last_arg<ret(*)(args...)>{
using type = std::tuple_element_t<sizeof...(args)-1,std::tuple<args...>>;
};
template <auto F, typename ...Args>
auto f(Args... args) {
// in this case it's unlikely you'd want to specify T, so you can move it out from template parameter
using T=std::remove_pointer_t<typename last_arg<decltype(F)>::type>;
T defaultValue;
auto result = F(args... , &defaultValue);
return result;
}
int bar(int,int*);
void foo(){
f<bar>(1);
}
(this assumes the function is not a function object, but that's also possible)
|
73,779,251
| 73,779,907
|
Calculating the position of an orbiting object with a pivot point based on angle or vector
|
I'm trying to get an object to orbit around a point at a fixed distance. I've tried some methods such as setting the position of the object to the normalized vector multiplied by how far away I want the object to be from the pivot, but the input vector is sometimes (0, 0), which when multiplied results in 0 causing the orbiting object to snap to the position of the pivot.
In object based scene hierarchies such as the Unity Game Engine, you are able to parent an object to another, and use that parent as a pivot point, causing the child to behave in exactly the way I want my orbiting object to behave.
Example of this behavior in Unity:
I don't need the rotation of the child to change as I already have that handled, I just need the position of the object to move based on the angle so it always maintains the same distance from the pivot. Is there a formula to calculate this position based on either an angle or a vector or any resources that I could use to learn more about how to achieve something like this?
All I need is to take input of an angle or vector and receive a position relative to the pivot location as output.
If anything needs clarification let me know. Thanks!
|
If your pivot point has coordinates (x, y), then you only need to set the position of your object (ideally its center of mass) to
(x + r*cos(t), y + r*sin(t))
where r is the distance from the pivot and t is an angle, presumably a function of time (or something you periodically increase).
In C++, you have std::sin and std::cos. They take radians as argument, so if you want for example t=0 and t=1 to produce the same result, you can use 2*pi*t instead of just t.
I don't know how the parenting thing is implemented in game engines like Unity, but you could use some sort of observer pattern (or just straight callbacks), where the orbiting object has the position of its center of orbit, and whenever the pivot object position changes, it calls the orbiting object to update its center of orbit. Or the orbiting object could keep a reference to the pivot point, but then you need to explicitly take care of lifetime considerations.
|
73,779,453
| 73,780,198
|
How to avoid console flickering/opening when executing a system() command in QT?
|
I'm working on an implementation to force the exit of a process by PID in QT. The only way I found to solve this problem is using the following lines of code:
QString processToKill = "taskkill /F /PID " + QString(getAppPid());
system(processToKill.toStdString().c_str());
These lines do their job and work well. The only detail I have found is that when executing this command a console opens and closes quickly (a flicker). Is there any way to prevent this behavior?
|
If you are using system() you cannot avoid the occasional flash of the console window. Were you to use any other program you might even see its window flash.
I won’t go into any detail about the security flaws inherent to using system().
The correct way to do this is with the Windows API.
Even so, you are taking a sledgehammer approach. You should first signal the process to terminate gracefully. If it hasn't done so after a second or two, only then should you forcibly kill it.
The SO question “How to gracefully terminate a process” details several options to properly ask a process to terminate.
If that fails, then you can simply kill a process using the TerminateProcess() Windows API function (which is what taskkill /f does). Here is a good example of how to do that: https://github.com/malcomvetter/taskkill
The relevant code has the following function:
BOOL TerminateProcess(int pid)
{
DWORD dwDesiredAccess = PROCESS_TERMINATE;
BOOL bInheritHandle = FALSE;
HANDLE hProcess = OpenProcess(dwDesiredAccess, bInheritHandle, pid);
if (hProcess == NULL)
return FALSE;
BOOL result = TerminateProcess(hProcess, 1);
CloseHandle(hProcess);
return result;
}
Microsoft has a page all about Terminating a Process that you may wish to review as well.
|
73,779,721
| 73,779,785
|
Why does the clang sanitizer think this left shift of an unsigned number is undefined?
|
I know there are many similar questions on SO. Please read carefully before calling this a dup. If it is, I would be happy to get a reference to the relevant question.
It seems to me that the clang sanitizer is complaining about a perfectly valid left shift of an unsigned number.
int main()
{
unsigned int x = 0x12345678;
x = x << 12;
return 15 & x;
}
Compiled thusly:
clang -fsanitize=undefined,integer shift-undefined.cpp -lubsan -lstdc++
Results in this error:
shift-undefined.cpp:4:11: runtime error: left shift of 305419896 by 12 places cannot be represented in type 'unsigned int'
I understand that some bits will be shifted off into oblivion, but I thought that was legal for unsigned numbers. What gives?
|
-fsanitize=undefined,integer
The integer sanitizer turns on checking for "suspicious" overflows of unsigned integers too, which do not have undefined behavior.
See "-fsanitize=unsigned-integer-overflow: Unsigned integer overflow, where the result of an unsigned integer computation cannot be represented in its type. Unlike signed integer overflow, this is not undefined behavior, but it is often unintentional. This sanitizer does not check for lossy implicit conversions performed before such a computation (see -fsanitize=implicit-conversion)."
I'd drop the integer group and keep only the undefined-behavior checks (signed integer overflow is already part of undefined):
-fsanitize=undefined
|
73,780,089
| 73,830,489
|
How to download a file from a private repository using curl?
|
I'm trying to download a file from a private GitHub repository, current code:
#include <curl/curl.h>
#include <windows.h>   // SecureZeroMemory
#include <iostream>
#include <string>
static size_t WriteMemoryCallback(void* contents, size_t size, size_t nmemb, void* userp) {
size_t realsize = size * nmemb;
auto& mem = *static_cast<std::string*>(userp);
mem.append(static_cast<char*>(contents), realsize);
return realsize;
}
void Download(std::string& data, char* url)
{
CURL* curl_handle;
CURLcode res;
struct curl_slist* slist{};
curl_handle = curl_easy_init();
curl_easy_setopt(curl_handle, CURLOPT_URL, url);
SecureZeroMemory(url, strlen(url));
slist = curl_slist_append(slist, "Authorization: token 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");
slist = curl_slist_append(slist, "Content-Type: application/json");
curl_easy_setopt(curl_handle, CURLOPT_HTTPHEADER, slist);
curl_easy_setopt(curl_handle, CURLOPT_TCP_KEEPALIVE, 0);
curl_easy_setopt(curl_handle, CURLOPT_WRITEFUNCTION, WriteMemoryCallback);
curl_easy_setopt(curl_handle, CURLOPT_WRITEDATA, &data);
curl_easy_setopt(curl_handle, CURLOPT_USERAGENT, "libcurl-agent/1.0");
curl_easy_setopt(curl_handle, CURLOPT_FOLLOWLOCATION, 1L); // redirects
//curl_easy_setopt(curl_handle, CURLOPT_VERBOSE, 1L); // only to debug
res = curl_easy_perform(curl_handle);
if(res != CURLE_OK)
std::cerr << "curl_easy_perform() failed: " << curl_easy_strerror(res) << '\n';
curl_easy_cleanup(curl_handle);
curl_global_cleanup();
}
int main(int argc, char* argv[]) {
std::string data;
Download(data, /* link */);
}
This is what is being downloaded into data:
<!DOCTYPE html>
<html lang="en" data-color-mode="auto" data-light-theme="light" data-dark-theme="dark" data-a11y-animated-images="system">
<head>
<meta charset="utf-8">
<link rel="dns-prefetch" href="https://github.githubassets.com">
<link rel="dns-prefetch" href="https://avatars.githubusercontent.com">
<link rel="dns-prefetch" href="https://github-cloud.s3.amazonaws.com">
<link rel="dns-prefetch" href="https://user-images.githubusercontent.com/">
<link rel="preconnect" href="https://github.githubassets.com" crossorigin>
<link rel="preconnect" href="https://avatars.githubusercontent.com">
...
I think the problem is in how I set the authorization token; I also tried:
slist = curl_slist_append(slist, "Authorization: Bearer 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");
slist = curl_slist_append(slist, "Content-Type: application/json");
|
Add Authorization and Accept headers like this:
slist = curl_slist_append(slist, "Authorization: token <Your Token>");
slist = curl_slist_append(slist, "Accept: application/vnd.github.v3+raw");
then call
curl_easy_setopt(curl_handle, CURLOPT_HTTPHEADER, slist);
and provide the link to the file in this form:
https://api.github.com/repos/<owner>/<repository>/contents/<path>
|
73,780,645
| 73,780,762
|
Segmentation Fault in Stack, Printing Weird Numbers
|
I keep getting a segmentation fault and the printsum() areas of my code print crazy numbers. I've tried running it through an online GDB debugger but couldn't find the error. Essentially the first input contains the number of queries and then it prompts the user to type a "type" of query for each query that determines what the program will do. Here are the types:
Type 1: A single integer follows, which you have to push in the stack.
Type 2: Two space separated integers, n and x follow. You have to push
the given element x, n times in the stack.
Type 3: You have to print(in a new line) and pop the top most element
in stack.
Type 4: A single integer ‘n’ follows. You have to pop the top ‘n’
integers of the stack. Also print(in a new line) the sum of the popped
elements.
Type 5: You have to simply print the sum of all the elements of the
stack.
Note: In case of 'n'(the number of elements to be popped) being greater than the total number of elements in the stack, return the sum of elements present in the stack.
Here is my code:
#include <iostream>
#include <stack>
using namespace std;
stack<int> s;
void printsum() {
int sum;
while (!s.empty()) {
sum += s.top();
s.pop();
}
cout << sum << endl;
}
void type2(int numofPush, int numtoPush) {
int i;
for (i = 0; i < numofPush; ++i) {
s.push(numtoPush);
}
}
void type3() { s.pop(); cout << s.top() << endl; }
void type4(int singleInt) {
int i, sum = 0;
for (i = 0; i < singleInt; ++i) {
sum += s.top();
s.pop();
}
cout << sum << endl;
}
int main() {
int i, numQueries = 0, numtoPush = 0, numofPush, type = 0, singleInt;
cin >> numQueries;
for (i = 0; i < numQueries; ++i) {
cin >> type;
if (type == 1) {
cin >> numtoPush;
s.push(numtoPush);
} else if (type == 2) {
cin >> numofPush >> numtoPush;
type2(numofPush, numtoPush);
} else if (type == 3) {
type3();
} else if (type == 4) {
cin >> singleInt;
type4(singleInt);
} else if (type == 5) {
printsum();
}
}
}
And here is a sample result that happens with my code:
Input: 12 2 42 5 3 5 4 6 5 2 5 42 5 4 6 4 10 5 3 5
Output: Exited with return code -11 (SIGSEGV).
Output: 5 22144
Instead of:
5 205 30 175 385 215 50 120 5 115
|
For type 5:
Type 5: You have to simply print the sum of all the elements of the stack.
Since there is no way to iterate over a stack, you get the sum by popping elements and reading the stack top. To avoid destroying the stack, copy it first and, once the sum calculation is complete, copy it back into s.
So update printsum like this:
void printsum() {
stack<int> j = s;
int sum = 0; //Be sure to init to zero
while (!s.empty()) {
sum += s.top();
s.pop();
}
cout << sum << endl;
s = j;
}
You will get the correct answer.
|
73,781,105
| 73,781,168
|
How to debug c++ file in vscode on macOS?
|
I've been trying to debug a .cpp file in VS Code, and the following tools are installed correctly. (I've run the .cpp file via the Code Runner extension, but now I'd like to debug C++ files.)
$ clang --version // or g++ --version (both same result)
Apple clang version 13.1.6 (clang-1316.0.21.2.5)
Target: arm64-apple-darwin21.5.0
Thread model: posix
InstalledDir: /Library/Developer/CommandLineTools/usr/bin
And I've clicked the debug button and tried every compiler configuration listed:
C/C++: clang++ build and debug active file
C/C++: clang build and debug active file
C/C++: g++ build and debug active file
C/C++: cpp build and debug active file
But on every try I've encountered following result:
Build finished successfully.
* Terminal will be reused by tasks, press any key to close it.
or an error message
The preLaunchTask 'C/C++: g++ build active file' terminated with exit code -1.
Is there another way that I can debug c++ file in vscode on macOS?
My tasks.json file (my c++ file is in the project root directory)
{
"tasks": [
{
"type": "cppbuild",
"label": "C/C++: clang++ build active file",
"command": "/usr/bin/clang++",
"args": [
"-fcolor-diagnostics",
"-fansi-escape-codes",
"-g",
"${file}",
"-o",
"${fileDirname}/${fileBasenameNoExtension}"
],
"options": {
"cwd": "${fileDirname}"
},
"problemMatcher": [
"$gcc"
],
"group": {
"kind": "build",
"isDefault": true
},
"detail": "Task generated by Debugger."
}
],
"version": "2.0.0"
}
My launch.json file
{
"configurations": [
{
"name": "C/C++: clang build and debug active file",
"type": "cppdbg",
"request": "launch",
"program": "${fileDirname}/${fileBasenameNoExtension}",
"args": [],
"stopAtEntry": false,
"cwd": "${fileDirname}",
"environment": [],
"externalConsole": false,
"MIMode": "lldb",
"preLaunchTask": "C/C++: clang build active file"
}
],
"version": "2.0.0"
}
|
Your launch.json's "preLaunchTask" is "C/C++: clang build active file", but the task in tasks.json is labeled "C/C++: clang++ build active file". Since the names don't match, VS Code cannot find the pre-launch build task.
So your launch.json should be modified to reference the exact task label (note the change from clang to clang++):
{
"configurations": [
{
"name": "C/C++: clang build and debug active file",
"type": "cppdbg",
"request": "launch",
"program": "${fileDirname}/${fileBasenameNoExtension}",
"args": [],
"stopAtEntry": false,
"cwd": "${fileDirname}",
"environment": [],
"externalConsole": false,
"MIMode": "lldb",
"preLaunchTask": "C/C++: clang++ build active file"
}
],
"version": "2.0.0"
}
|
73,781,772
| 73,781,954
|
Result of string in conditional statement in C++
|
if( condition && "Text in double quotes")
{
do this;
}
else
{
do that;
}
I saw this code in a program. In which case would the conditional expression in the if statement evaluate to true, and in which case would it evaluate to false?
|
false && "Literally anything" is always false.
The primary expression is false, operator && is a logical operator (unless it was overloaded), so it will short-circuit.
This is because false && E (where E is an expression) is known to always evaluate to false. Therefore E doesn't need to be evaluated at all.
So if E was a function call with side effects: the side effects won't occur since the function is never called.
Example:
int main() { return false && "anything"; } // returns 0
Live on compiler explorer
|
73,781,815
| 73,781,941
|
Image blurring with different approaches
|
I am learning how to write a C++ program using OpenCV to blur images with a Gaussian filter.
After browsing a lot of websites I found two very different pieces of code, but both claim to blur images by applying a Gaussian filter.
Code 1: Source
int main(int argc, char** argv) {
cv::Mat image;
image = cv::imread(argv[1],cv::IMREAD_COLOR);
cv::Mat dst;
cv::Mat dst2;
cv::pyrDown(image, dst);
cv::pyrDown(dst,dst2);
cv::imshow("original image", image);
cv::imshow("1st downsample",dst);
cv::imshow("2nd downsample",dst2);
cv::waitKey();
}
Code 2: Source[Book: Hands-On GPU-Accelerated Computer Vision with OpenCV and CUDA]
int main ()
{
cv::Mat h_img1 = cv::imread("images/cameraman.tif",0);
cv::cuda::GpuMat d_img1,d_result3x3,d_result5x5,d_result7x7;
d_img1.upload(h_img1);
cv::Ptr<cv::cuda::Filter> filter3x3,filter5x5,filter7x7;
filter3x3 = cv::cuda::createGaussianFilter(CV_8UC1,CV_8UC1,cv::Size(3,3),1);
filter3x3->apply(d_img1, d_result3x3);
filter5x5 = cv::cuda::createGaussianFilter(CV_8UC1,CV_8UC1,cv::Size(5,5),1);
filter5x5->apply(d_img1, d_result5x5);
filter7x7 = cv::cuda::createGaussianFilter(CV_8UC1,CV_8UC1,cv::Size(7,7),1);
filter7x7->apply(d_img1, d_result7x7);
cv::Mat h_result3x3,h_result5x5,h_result7x7;
d_result3x3.download(h_result3x3);
d_result5x5.download(h_result5x5);
d_result7x7.download(h_result7x7);
cv::imshow("Original Image ", h_img1);
cv::imshow("Blurred with kernel size 3x3", h_result3x3);
cv::imshow("Blurred with kernel size 5x5", h_result5x5);
cv::imshow("Blurred with kernel size 7x7", h_result7x7);
cv::waitKey();
return 0;
}
My question is: if both codes use the same blurring technique based on a Gaussian filter, why are there so many syntactical and functional differences between them?
|
pyrDown() not only blurs the image by applying a Gaussian filter, it also downsamples the resulting image to produce a smaller image. It produces an image pyramid by applying Gaussian blurring and downsampling. The reason for applying Gaussian blurring first is that if you downsample directly, the resulting image will have a jagged (aliased) appearance.
The second code snippet does only the Gaussian blurring. If you also appropriately downsample the resulting images then you will see results that are very close to the first code snippet. Specifically, pyrDown() by default applies the 5X5 Gaussian filter and downsamples by choosing every other pixel in both rows and columns to result in an image that is half the size in both rows and columns. So, you could try downsampling the result of applying the 5X5 filter likewise and compare.
The very close caveat is because in order to match the results of the two approaches, they both have to (1) deal with borders and (2) downsample in the same way.
|
73,782,191
| 73,784,077
|
Mocking a class instance created in other class constructor C++ Google Mock
|
To give you a bit of an insight into my system - I've got a hardware device that can be communicated with using USB, SPI or UART bus from a host like Raspberry PI or regular PC (USB only). It's a communication dongle and the library should be available to users so the ease of use is a crucial thing for me.
Now I'd like to apply tests to my code (Gtest). I've already made an interface called Bus, to which you can inject objects of USB/SPI/UART classes.
The Bus interface pointer is in the dongle's class let's say "Dongle". When the user creates the Dongle class object, the constructor is called in which an appropriate Bus object is created based on an enum given by the user in Dongle constructor parameters. Moreover it tries to reset the dongle into a known state using the Bus interface methods (transfer). Some pseudocode to visualize:
#include "uart_dev.hpp"
#include "spi_dev.hpp"
#include "usb_dev.hpp"
class Dongle
{
private:
Bus* bus;
public:
Dongle(busType_E type)
{
switch(type)
{
case busType_E::SPI:
{
bus = new SPI();
break;
}
case busType_E::UART:
{
bus = new UART();
break;
}
case busType_E::USB:
{
bus = new USB();
break;
}
}
reset();
}
bool reset(void)
{
/* uses the selected object under the bus interface */
return bus->transfer(...);
}
};
How can I mock the USB/SPI/UART communication busses, when I have to replace the object created in the Dongle constructor? I know I could pass the USB/SPI/UART object into the Dongle constructor, but this might be confusing for the end users and if possible I'd like to avoid that.
I was also wondering about solutions given here, however I'm not sure how could I replace the created object with a mock just before the reset() method is called.
Maybe my approach is totally incorrect - any advice is appreciated.
|
You might add an extra constructor:
class Dongle
{
private:
std::unique_ptr<Bus> bus;
public:
// also used for testing to pass a MockBus
// Can be made protected, and make friend a test factory.
explicit Dongle(std::unique_ptr<Bus> bus) : bus(std::move(bus)) { reset(); }
explicit Dongle(busType_E type) : Dongle(make_bus(type)) {}
    static std::unique_ptr<Bus> make_bus(busType_E type)
{
switch(type)
{
case busType_E::SPI: { return std::make_unique<SPI>(); }
case busType_E::UART: { return std::make_unique<UART>(); }
case busType_E::USB: { return std::make_unique<USB>(); }
}
return nullptr; // Or other error handling as throw
}
// ...
};
|
73,782,267
| 73,782,728
|
construction and destruction of parameterized constructor argument?
|
Here I am getting different output on different compilers. Why is that?
On the MSVC compiler, I'm getting an extra destructor statement.
Why am I getting this behaviour?
Am I missing something?
I have looked at many questions on Stack Overflow, but I can't find anything related to my problem.
I also tried to look for a duplicate, but didn't find one.
class A {
public:
A()
{
std::cout << "A::constructor" << "\n";
}
~A()
{
std::cout << "A::Destructor" << "\n";
}
int x = 0;
int y = 0;
};
class B {
public:
A member_var_1;
int member_var_2;
B()
{
std::cout << "B::constructor" << '\n';
}
B(A a, int b)
{
member_var_1 = a;
member_var_2 = b;
std::cout << "B(A, int)::constructor " << '\n';
}
~B()
{
std::cout << "B::destructor" << '\n';
}
};
int main()
{
B v1 {A(), 5};
}
GCC output:
A::constructor // parameterized constructor's first argument is constructed
A::constructor // construction of B's class member (member_var_1)
B(A, int)::constructor // B's parameterized constructor
A::Destructor // destruction of the parameterized constructor's argument
B::destructor // object goes out of scope, so B's destructor is called
A::Destructor // B's destructor calls the member's destructor
MSVC output:
A::constructor
A::constructor
B(A, int)::constructor
A::Destructor
A::Destructor // what is it destroying? If I define a copy constructor for class A, then I don't get this line.
B::destructor
A::Destructor
|
Since you're using C++17, and there is mandatory copy elision from C++17 onwards, the extra destructor call should not be there.
A msvc bug has been reported as:
MSVC produces extra destructor call even with mandatory copy elision in C++17
Note that if you were to use C++11 or C++14, it was possible to get an extra destructor call, because prior to C++17 there was no mandatory copy elision and the parameter a could have been created using the copy/move constructor, which means you'd get the fourth destructor call as expected. You can confirm this by using the -fno-elide-constructors flag with other compilers. See Demo, which has a contrived example of this.
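The elision can be made observable by counting copy-constructor calls. A minimal sketch (not the asker's code), assuming a C++17 compiler:

```cpp
#include <cassert>

// Count copies to observe whether elision happened.
struct Tracked {
    static int copies;
    Tracked() = default;
    Tracked(const Tracked&) { ++copies; }
};
int Tracked::copies = 0;

Tracked make() { return Tracked{}; }   // returns a prvalue

void demo() {
    Tracked t = make();   // C++17: mandatory elision, no copy made
    (void)t;
}
```

With -fno-elide-constructors on a pre-C++17 standard, the same code would report one or more copies instead.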
|
73,782,471
| 74,617,118
|
How Did the GNU libstdc++ Library Find Its Way into Our App?
|
The Question
This is not about legal advice. We are asking why/how the GNU libstdc++ library seemingly found its way into our app, when to the best of our knowledge it should not be part of our app, according to our IDE setup.
Context
We recently used Android's documentation to include open source notices in our app. After using this, an activity can be shown presumably listing all the open-source software (OSS) dependencies in our app. Clicking on any of these dependencies shows the relevant OSS license.
In the list of autogenerated OSS dependencies there is an entry titled STL, which when clicked displays the following text:
SGI STL
The STL portion of GNU libstdc++ that is used with gcc3 and gcc4 is licensed under the GPL, with the following exception:
[...]
The problem is that to the best of our knowledge we do not use the GNU libstdc++ nor do we use the gcc3/gcc4 compiler in our app, according to our IDE setup.
IDE Setup
We use Android Studio and the JNI to also compile code written in C++. Our C++ code uses the standard library (e.g. we use the string class among others).
This guide from Android's documentation states that
LLVM's libc++ is the C++ standard library that has been used by the Android OS since Lollipop, and as of NDK r18 is the only STL available in the NDK.
We are way past Lollipop and the NDK we use is 21.4.7075529, so this should be enough to guarantee that the standard library used in our app is LLVM's libc++ (and not GNU's libstdc++). But in any case, we also specify Clang (which is part of the LLVM) as the compiler, just to be sure.
All of this is reflected in our build.gradle (app) file, some parts of which are pasted below:
...
android {
compileSdkVersion 33
ndkVersion "21.4.7075529"
...
defaultConfig {
...
externalNativeBuild {
cmake {
...
arguments "-DANDROID_STL=c++_shared","-DANDROID_TOOLCHAIN=clang"
}
}
}
}
Just in case it is relevant, we also include the following line in our CMakeLists.txt:
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")
So what are we doing wrong here? There should not be any reference to GNU libstdc++, nor to the gcc compiler. Can anyone please help? Many thanks in advance!
|
I found it for example in the com.google.android.gms:play-services-oss-licenses dependency itself.
I did it by running the <variant>OssLicensesTask multiple times, and using divide and conquer to comment out half of the dependencies that can potentially include that license and check the generated third_party_license_metadata file to see if it contains STL.
And then, if you look at the .aar file you can verify this. After you unpack it, there are third_party_licenses files:
And there is "STL": {"length": 680, "start": 3372} in the third_party_licenses.json that points to this part of third_party_licenses.txt:
STL:
SGI STL
The STL portion of GNU libstdc++ that is used with gcc3 and gcc4 is licensed
under the GPL, with the following exception:
# As a special exception, you may use this file as part of a free software
# library without restriction. Specifically, if other files instantiate
# templates or use macros or inline functions from this file, or you compile
# this file and link it with other files to produce an executable, this
# file does not by itself cause the resulting executable to be covered by
# the GNU General Public License. This exception does not however
# invalidate any other reasons why the executable file might be covered by
# the GNU General Public License.
I checked that if I leave all dependencies except for the com.google.android.gms:play-services-oss-licenses then STL is not listed, so in my case that was the only one, but it may be different for you.
|
73,782,520
| 73,790,154
|
How can I wait in reading from file between letters C++ qt
|
I wrote this code:
void StartToSendCommand(QString fileName, QPlainTextEdit *textEdit)
{
QFile file(fileName);
if (!file.open(QIODevice::ReadOnly | QIODevice::Text))
return;
QTimer * inputTimer=new QTimer(textEdit);
QTextStream in(&file);
QString line;
while (!in.atEnd()) {
line = in.readLine();
if (line==""||line[0]=="#")
continue;
qDebug()<<line;
//TO DO: make also a waiting time between letters.
inputTimer->start(GWaitingTimeBetweenCommand);
QApplication::processEvents();
        QThread::sleep(2);
}
inputTimer->deleteLater();
SendCommandByUsb(fileName, line);
}
I want it to wait for one second between each letter while reading from the file.
How can I do that?
|
If you want to wait for individual letters, you can also do this with a QTimer interval. Once the condition is met, you can send the file.
Here would be a small example, oriented towards a typewriter but has the same effect:
.h
....
private slots:
void typeWriter();
private:
QString line;
QTimer *timer;
....
.cpp
//constructor
MainWindow::MainWindow(QWidget *parent)
: QMainWindow(parent)
, ui(new Ui::MainWindow)
{
ui->setupUi(this);
line = "C:/.../readme.txt";
timer = new QTimer(this);
connect(timer,&QTimer::timeout, this, &MainWindow::typeWriter);
timer->start(1000);
}
void MainWindow::typeWriter()
{
static int counter = 0;
if(counter < line.length())
{
counter ++;
ui->label->setText(line.mid(0, counter));
//SendCommandByUsb(fileName, line.mid(0, counter));
timer->setInterval(1000);
qDebug() << counter << " : " << line.length() << " : " << line.mid(0, counter);
}
if(counter == line.length())
{
qDebug() << "end";
timer->stop();
}
}
The length of the string is compared with the counter, which counts up until it reaches the length of the string. The QString::mid method can be thought of as a slice: it slices from index 0 (i.e. the beginning of the string) up to the counter.
The timer interval defines the delay before the next call.
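The slicing logic can be sketched without Qt, using std::string::substr as the analogue of QString::mid (the function name here is illustrative):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Each "timer tick" reveals one more character: step n is the slice
// [0, n) of the line, just like QString::mid(0, counter) in the answer.
std::vector<std::string> typewriterSteps(const std::string& line) {
    std::vector<std::string> steps;
    for (std::size_t n = 1; n <= line.size(); ++n)
        steps.push_back(line.substr(0, n));   // substr(pos, count)
    return steps;
}
```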
Just for info (because that's how the question read):
If you want to open a file and always want to wait a second for a letter to be read, you have to consider that you don't have a valid file path. Well that wouldn't make sense because you have to commit the file before anything is read at all.
|
73,783,547
| 73,787,512
|
GDAL read several pictures from wmts server using same opened connection
|
I use C++ code to read pictures from a WMTS server using GDAL.
First I initialize GDAL once:
...
OGRRegisterAll();
etc.
But new connection is opened every time I want to read new image (different urls):
gdalDataset = GDALOpen(my_url, GA_ReadOnly);
URL example: https://sampleserver6.arcgisonline.com/arcgis/rest/services/Toronto/ImageServer/tile/12/1495/1145
Unfortunately I didn't find a way to read multiple images over the same connection.
Is there such option in GDAL or in WMTS?
Are there other ways to improve timing (I read thousands of images)?
|
While GDAL can read PNG files, it doesn't add much since those lack any geographical metadata.
You probably want to interact with the WMS server instead, not the images directly. You can for example run gdalinfo on the main url to see the subdatasets:
gdalinfo https://sampleserver6.arcgisonline.com/arcgis/services/Toronto/ImageServer/WMSServer?request=GetCapabilities&service=WMS
The first layer seems to have an issue, I'm not sure, but the other ones seem to behave fine.
I hope you don't mind me using some Python code, but the c++ api should be similar. Or you could try using the command-line utilities first (gdal_translate), to get familiar with the service.
See the WMS driver for more information and examples:
https://gdal.org/drivers/raster/wms.html
You can for example retrieve a subset and store it with:
from osgeo import gdal
url = r"WMS:https://sampleserver6.arcgisonline.com:443/arcgis/services/Toronto/ImageServer/WMSServer?SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&LAYERS=Toronto%3ANone&SRS=EPSG:4326&BBOX=-79.454856,43.582524,-79.312167,43.711781"
bbox = [-79.35, 43.64, -79.32, 43.61]
filename = r'D:\Temp\toronto_subset.tif'
ds = gdal.Translate(filename, url, xRes=0.0001, yRes=0.0001, projWin=bbox)
ds = None
Which looks like:
import numpy as np
import matplotlib.pyplot as plt
ds = gdal.OpenEx(filename)
img = ds.ReadAsArray()
ds = None
mpl_extent = [bbox[i] for i in [0,2,3,1]]
fig, ax = plt.subplots(figsize=(5,5), facecolor="w")
ax.imshow(np.moveaxis(img, 0, -1), extent=mpl_extent)
Note that the data in native resolution for these type of services is often ridiculously large, so usually you want to specify a subset and/or limited resolution as the output.
|
73,783,695
| 73,784,365
|
C++: Template binding object and method using lambda expression
|
Due to the fact that std::function uses heap memory when combining it with std::bind, I wanted to replace the std::bind with a lambda expression, but I do not quite know how to do this. Is this even possible?
#include <iostream>
#include <functional>
#include <utility>
template<typename Signature>
class Callback;
template<typename R, typename... Args>
class Callback<R(Args...)> final
{
public:
Callback() noexcept : mFunc() {}
template<typename Obj, typename Method,
typename std::enable_if_t<std::is_invocable_r<R, Method, Obj, Args...>::value, int> = 0>
Callback(Obj& obj, Method method)
{
// That does not work
mFunc = [&obj](Args... args){ return obj.method(args); };
        // mFunc = std::bind(method, obj, std::placeholders::_1, std::placeholders::_2); would work
}
R operator()(Args... args) const { return mFunc(args...); }
private:
std::function<R(Args...)> mFunc;
};
struct Foo
{
Foo() {}
void print(int a, int b)
{
std::cout << a << b << "\n";
}
};
int main()
{
Foo foo;
Callback<void(int, int)> cb(foo, &Foo::print);
cb(1,2);
}
Compiler throws the following error message:
main.cpp:19:46: error: expression contains unexpanded parameter pack 'args'
mFunc = [&obj](Args... args){ return obj.method(args); };
^ ~~~~
main.cpp:19:19: warning: lambda capture 'obj' is not used [-Wunused-lambda-capture]
mFunc = [&obj](Args... args){ return obj.method(args); };
|
If you really want to pass a member function pointer, then you need to use the syntax for calling a method via a member function pointer:
mFunc = [&,method](Args...x){ return (obj.*method)(x...);};
However, it would be simpler if you'd accept free callables and let the caller bind the object to a member function if necessary.
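Putting the pieces together, a minimal self-contained sketch of calling through a captured member function pointer (the names Adder and bindMember are illustrative):

```cpp
#include <functional>

struct Adder {
    int sum(int a, int b) { return a + b; }
};

// Capture the object by reference and the member pointer by value, then
// invoke with the ".*" syntax. The parentheses around (obj.*method) are
// required because ".*" binds more loosely than the call operator.
std::function<int(int, int)> bindMember(Adder& obj, int (Adder::*method)(int, int)) {
    return [&obj, method](int a, int b) { return (obj.*method)(a, b); };
}
```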
|
73,784,163
| 73,784,796
|
CreateWindowW always returns null
|
I'm new to C++ coding, and I'm trying to create a window using the Win32 API, but CreateWindowW() always returns a NULL value. I tried CreateWindowEx() and CreateWindow() as well, but got the same results.
Here is the code I used:
#include<windows.h>
LRESULT CALLBACK WindowProcedure(HWND, UINT, WPARAM, LPARAM);
INT WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,PSTR lpCmdLine, INT nCmdShow)
{
WNDCLASSW wc = { 0 };
wc.hbrBackground = (HBRUSH)COLOR_WINDOW;
wc.hCursor = LoadCursor(NULL, IDC_CROSS);
wc.hInstance = hInstance;
wc.lpszClassName = L"myWindowClass";
wc.lpfnWndProc = WindowProcedure;
if (!RegisterClassW(&wc))
{
MessageBox(NULL, L"Regesteration of Window Class had Failed", L"Error", MB_ICONERROR);
return -1;
}
HWND x = CreateWindowW(L"myWindowClass",L"NEW Window",WS_OVERLAPPEDWINDOW|WS_VISIBLE,CW_USEDEFAULT,CW_USEDEFAULT,800,600,NULL,NULL,NULL,NULL);
if (!x)
{
MessageBox(NULL, L"Creation of Window Class had Failed", L"Error", MB_ICONERROR);
}
MSG msg = { 0 };
while (GetMessage(&msg, NULL, NULL, NULL))
{
TranslateMessage(&msg);
DispatchMessageW(&msg);
}
return 0;
}
LRESULT CALLBACK WindowProcedure(HWND hwnd, UINT msg, WPARAM wparam, LPARAM lparam)
{
switch (msg)
{
case WM_DESTROY:
PostQuitMessage(0);
break;
default:
DefWindowProcW(hwnd, msg, wparam, lparam);
}
return 0;
}
Can any one please tell me what I'm doing wrong?
|
The window procedure always returns 0, which ultimately cancels window creation. As outlined in the documentation for CreateWindowExW:
The CreateWindowEx function sends WM_NCCREATE, WM_NCCALCSIZE, and WM_CREATE messages to the window being created.
The first message, WM_NCCREATE, has the following in the documentation:
If the application returns FALSE, the CreateWindow or CreateWindowEx function will return a NULL handle.
FALSE has a value of 0, so when your window procedure returns 0, window creation is terminated. A call to GetLastError will usually also return 0 (i.e. no error) in this case.
You'll want to have your window procedure return the value returned by the call to DefWindowProcW instead, i.e.
default:
return DefWindowProcW(hwnd, msg, wparam, lparam);
|
73,784,199
| 73,786,884
|
Insert JSON object into an existing JSON object as a key-value-pair in Boost
|
I have a class hierarchy that looks like this: class A is composed of classes B and C.
B and C each have a function that returns boost::json::object, which looks like this in case of B: {"B": "someValue"}
A is supposed to compose each of these objects into a structure like this:
{
"A": {
"B": "someValue",
"C": "someOtherValue"
}
}
The problem I'm facing is, that the resulting object always looks like this:
{
"A": [
{ "B": "someValue" },
{ "C": "someOtherValue" }
]
}
As you can see, the objects from the subclasses are not added as key-value pairs, but as objects, each containing a single pair, into an array.
Here's a snippet to reproduce the issue:
#include <iostream>
#include <boost/json.hpp>
using namespace boost::json;
object makeObject(std::string key, int value)
{
object obj;
obj[key] = value;
return obj;
}
int main() {
object obj;
obj["main"] = {makeObject("test1", 123), makeObject("test2", 456)};
std::cout << obj << std::endl;
return 0;
}
According to the Quick Look documentation of boost, I would have expected this to not create an array with single-value objects in it.
What am I doing wrong here?
EDIT
I modified my example based on sehe's answer:
#include <iostream>
#include <boost/json.hpp>
auto makeProp(string_view name, value v)
{
return object::value_type(name, std::move(v));
}
int main() {
object obj;
obj["main"] = {makeProp("test1", 123), makeProp("test2", 456)};
std::cout << obj << std::endl;
return 0;
}
But this still generates undesired output, unless I made a mistake:
{"main":[["test1",123],["test2",456]]}
To elaborate further on my usecase for the helper function, I originally intended for this to be a quick writeup to recreate the issue I described with my class hierarchy above.
I made a longer program that shows the issue in a more understandable way hopefully:
#include <boost/json.hpp>
#include <iostream>
using namespace boost::json;
class B
{
public:
key_value_pair makeProp()
{
value v = {"property1", 1};
return object::value_type("B", std::move(v));
}
};
class C
{
public:
key_value_pair makeProp()
{
value v = {{"property1", 1}, {"property2", {1, 2, 3}}};
return object::value_type("C", std::move(v));
}
};
class A
{
public:
object makeProp()
{
B b;
C c;
object obj;
obj["A"] = {b.makeProp(), c.makeProp()};
return obj;
}
};
int main()
{
A a;
std::cout << a.makeProp() << std::endl;
return 0;
}
I hope this explains why I added the helper function, and makes it clearer what I want to achieve.
This returns:
{
"A": [
[
"B",
[
"property1",
1
]
],
[
"C",
{
"property1": 1,
"property2": [
1,
2,
3
]
}
]
]
}
While I would have expected:
{
"A": {
"B": {
"property1": 1
},
"C": {
"property1": 1,
"property2": [
1,
2,
3
]
}
}
}
|
makeObject returns single objects. You pass them as the initializer list to a json::value, so you should expect {makeObject(...), makeObject(...)} to create an array: docs
If the initializer list consists of key/value pairs, an object is created. Otherwise an array is created.
Instead, you want to make an initializer list of key-value pair, not objects:
#include <iostream>
#include <boost/json/src.hpp> // for COLIRU
namespace json = boost::json;
int main() {
// construct:
json::object obj{
{"main",
{
{"test1", 123},
{"test2", 456},
}},
};
std::cout << obj << std::endl;
// update:
obj["main"] = {{"B", "someValue"}, {"C", "someOtherValue"}};
std::cout << obj << std::endl;
}
See it Live On Coliru:
{"main":{"test1":123,"test2":456}}
{"main":{"B":"someValue","C":"someOtherValue"}}
BONUS
If you /need/ to have the helper function for some reason:
auto makeProp(std::string_view name, json::value v) {
return json::object::value_type(name, std::move(v));
}
|
73,784,496
| 73,785,522
|
Why does static_assert on a 'reference to const int' fail?
|
I'm now learning how to use static_assert. What I have learned so far is that the argument of static_assert has to be a constant expression. For the following code, I don't know why the argument reference is not a constant expression: it is declared as const and initialized with the integral constant 0:
int main()
{
const int &reference = 0;
static_assert(reference);
}
The error message from gcc states that the reference is not usable in a constant expression:
error: 'reference' is not usable in a constant expression.
Is there a standard rule that prohibits reference from being usable in constant expressions so that it is not a constant expression?
|
First of all, because you use static_assert(reference); instead of static_assert(!reference); the program is ill-formed whether or not the operand of static_assert is a constant expression, as the assertion will fail. The standard only requires some diagnostic for an ill-formed program. It doesn't need to describe the cause correctly. So for the following I will assume you meant static_assert(!reference);.
The initialization of reference is not a constant expression because the glvalue to which it would be initialized refers to an object which does not have static storage duration (it is a temporary bound to a reference with automatic storage duration), in violation of [expr.const]/11. (At least it seems to be the intention that a temporary lifetime-extended through a reference has the storage duration of the reference. This is not clearly specified though, see e.g. https://github.com/cplusplus/draft/issues/4840 and linked CWG issues.)
Therefore the reference is not constant-initialized (by [expr.const]/2.2) and it is also not constexpr, so the reference is not usable in constant expressions (by [expr.const]/4), which is however required of a reference referred to by an id-expression in a constant expression if its lifetime didn't start during the constant expression's evaluation (by [expr.const]/5.12). Declaring the reference constexpr only shifts the issue to the declaration failing for the same reason as above.
If you move the declaration of reference to namespace scope it will work (at least after resolution of CWG 2126 and CWG 2481).
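For contrast, a form that is accepted by the compiler (a minimal sketch):

```cpp
// A constexpr variable is constant-initialized and usable in constant
// expressions, so static_assert accepts it.
constexpr int value = 0;
static_assert(!value, "value is zero");

// A reference at namespace scope bound to a constant can also work once
// the CWG issues mentioned above are resolved (compiler support varies):
//   const int& ref = 0;
//   static_assert(!ref);
```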
|
73,784,828
| 73,796,366
|
C++Builder 10.4 Community Edition => scoped_lock is missing (at least it seems to be a path mess)
|
Just installed C++Builder 10.4 Community Edition. My app is a console multi-threaded app, and uses std::scoped_lock (C++17).
It seems that C++Builder chooses a <mutex> header file that does not define scoped_lock in C:\Program Files (x86)\Embarcadero\Studio\21.0\include\dinkumware64, where the <mutex> header file that is in C:\Program Files (x86)\Embarcadero\Studio\21.0\include\dinkumware64\Dinkum\threads actually does define them, but is not the one used during include resolution.
What am I missing? Has this ever been tested?
Launch C++Builder fresh from install, create a new console, multi-threaded application, take the pre-generated shim code for main() and add this code:
#pragma hdrstop
#pragma argsused
#include <mutex>
#ifdef _WIN32
#include <tchar.h>
#else
typedef char _TCHAR;
#define _tmain main
#endif
#include <stdio.h>
std::mutex m;
int _tmain(int argc, _TCHAR* argv[])
{
std::scoped_lock lock(m);
return 0;
}
And that will fail with an error:
no member named "std::scoped_lock" in namespace "std"
The application is 32-bit, debug. I've tried 64-bit, since the <mutex> header is strangely located under dinkumware64/mutex, and debug/no-debug; I've tried changing various options, but to no avail.
Now under dinkumware64/Dinkum/threads/, there is another "mutex" package that includes scoped_lock, but I have no idea why C++Builder selects it or not, and it's not in the std namespace anyway.
|
The standard library is located in dinkumware64 for 32-bit programs as well, so you should be looking there.
The problem is that scoped_lock is missing from this standard library.
You can easily implement this class by yourself by using std::lock, or just use std::lock_guard if you only have one mutex.
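As a sketch of the suggested workaround, a replacement for the two-mutex case can be built on std::lock. The class name TwoMutexLock is made up; for a single mutex, std::lock_guard already suffices:

```cpp
#include <mutex>

// Minimal two-mutex stand-in for std::scoped_lock: std::lock acquires
// both mutexes with a deadlock-avoidance algorithm, and the destructor
// releases them when the scope ends.
class TwoMutexLock {
public:
    TwoMutexLock(std::mutex& a, std::mutex& b) : a_(a), b_(b) {
        std::lock(a_, b_);   // all-or-nothing, deadlock-free
    }
    ~TwoMutexLock() {
        a_.unlock();
        b_.unlock();
    }
    TwoMutexLock(const TwoMutexLock&) = delete;
    TwoMutexLock& operator=(const TwoMutexLock&) = delete;
private:
    std::mutex& a_;
    std::mutex& b_;
};
```

A variadic version could hold the references in a std::tuple and unlock them with a C++17 fold expression, but note that std::lock itself requires at least two lockables.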
|
73,784,862
| 73,784,928
|
How to specify which function I want to use in a C++ class when a C function has the same name as my C++ function?
|
I am trying to write an I2C driver class. The class contains read() and write() methods.
The problem is that I call the write function from unistd.h inside my Drv_i2c class's write() function (see below), and the compiler chooses my class's write function instead of the one from unistd.h.
I know that I could just rename my class function, but is it possible to specify that I want to use the unistd.h write function? (Same for the read function.)
bool Drv_i2c::write(uint8_t p_i2c_address, uint8_t p_reg, const std::vector<uint8_t>
&p_data)
{
[...]
/* Checking that all data has been written */
if(write(this->s_fd_i2c, l_buffer, l_buf_length) != l_buf_length)
{
// ERROR HANDLING: i2c transaction failed
l_result = false;
}
[...]
}
|
You can use the scope resolution operator ::. Inside the member function, an unqualified call to write finds Drv_i2c::write first, so qualify the call as ::write(this->s_fd_i2c, l_buffer, l_buf_length) to name the global write from unistd.h (and likewise ::read).
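A self-contained illustration of the scope resolution operator. The global write here is a stand-in for the unistd.h function, not the real syscall:

```cpp
#include <cstddef>

// Free function at global scope sharing its name with a member function
// (stand-in for the POSIX ::write; pretends the whole buffer was written).
long write(int /*fd*/, const void* /*buf*/, std::size_t count) {
    return static_cast<long>(count);
}

struct Drv {
    int fd = 3;
    bool write(const void* buf, std::size_t count) {
        // An unqualified "write" would find the member function first;
        // "::write" explicitly names the global-scope function.
        return ::write(fd, buf, count) == static_cast<long>(count);
    }
};
```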
|
73,785,102
| 73,785,468
|
After compiling simple C++ code in Eclipse, it doesn't show the problems as described in my instructions
|
I am learning C++ along with a script. Eclipse is my IDE and I'm using MinGW64 as a compiler.
In the script there is following code written, which I am just supposed to copy and compile first:
Supposed code from script
My script says that as soon as I compile it, in the lower window under "Problems" there should be shown "0 times" and under "Console" there's supposed to be shown:
**** Build of configuration Debug for project HelloWorld ****
Info: Internal Builder is used for build
g++ -O0 -g3 -Wall -c -fmessage-length=0 -o main.o "..\\main.cpp"
g++ -o HelloWorld.exe main.o
Build Finished
But instead, when I compile the same code (that's my code):
#include <iostream>
int main()
{
std::cout << "Hello World!" << std::endl;
int max;
max = 10;
for (int var = 0; var < max; ++var)
{
std::cout << var << std:endl;
}
return 0;
}
I get following notifications under "Problems" and "Console":
Problems
Console
I'm definitely compiling with MinGW GCC, but I don't know why my Problems and Console notifications differ from the script. I hope someone can help me.
|
You've got two problems.
The easy one. On line 12 you have a typo: it should say std::endl; rather than std:endl;. You are missing a colon, it's just that.
The harder one. When I load the following code into Visual Studio on Windows, it compiles and runs correctly.
#include <iostream>
int main()
{
std::cout << "Hello World!" << std::endl;
int max;
max = 10;
for (int var = 0; var < max; ++var)
{
std::cout << var << std::endl;
}
return 0;
}
So the problem is your setup. My guess is that your include folder hasn't been set up properly. You need to tell the compiler where to look for iostream.
My gcc is at C:\tools\msys64\mingw64. The include folder is directly below this. Try which gcc on Linux or get-command gcc on Windows to locate your gcc system.
If you are on Windows, and you have enough memory in your computer you should consider Visual Studio. It's all in one really nice package, and it has a free version.
|
73,785,198
| 73,824,775
|
Range_Image of PCL crashes Application
|
I am using the precompiled/All-in-One PCL (PointCloudLibrary) in release-version 1.12.1 for Windows.
IDE: Visual Studio 2019
With that, I am already able to use the visualizer, so parts of the library are already working fine.
When I want to create a RangeImage object, however, my program either runs into an infinite loop, not doing anything anymore, or in some cases gets terminated by the abort() function of the C++ standard library.
A minimal example to create this problem looks like this:
#include <pcl/range_image/range_image.h>
int main () {
pcl::RangeImage rangeImage;
return 0;
}
==== Extra Infos: ===============
During compilation no errors or warnings are displayed, but running the application in debug mode (x64) gives the described problem. Running the program in release mode (x64) gives a "forbidden memory access" error coming from the std::vector library. x86 is not tested.
When the program terminates with the abort()-function, I get this error message in the console:
Assertion failed: (internal::UIntPtr(array) & (31)) == 0 && "this
assertion is explained here: " "http://eigen.tuxfamily.org/dox-
devel/group__TopicUnalignedArrayAssert.html" " **** READ THIS WEB PAGE
!!! ****", file C:\Program
Files\Eigen3\include\eigen3\Eigen\src/Core/DenseStorage.h, line 128
I went through the website that the error message recommends, but I was not able to solve the problem by that. I have set the C++-Standard to c++17 already.
============================
Has anyone run into this problem before and knows what could cause this issue?
Thanks for taking the time.
|
What fixed this problem was to build PCL 1.12.1 from source with instructions from this tutorial*1 (scroll down until you find the right version): https://gist.github.com/UnaNancyOwen/59319050d53c137ca8f3
Also, in Visual Studio the project setting under "C/C++ > Code Generation > Enable Enhanced Instruction Set" had to be set to AVX2.
*1) You don't need to understand Chinese to go through this tutorial. If you have built PCL before, you will get all the information you need from it.
|
73,786,289
| 73,786,419
|
Class math operator overloading for types which also have cast operators overloaded
|
As part of my uni task I need to write a class for rational numbers and overload the math operators, comparison operators, etc. But I also need to overload casts to the short, int and long types. Here is simplified code for my class:
class RationalNumber {
long long numerator, divider;
public:
RationalNumber() : numerator(0), divider(1) {}
RationalNumber(long long numerator, long long divider = 1) : numerator(numerator), divider(divider) {}
// Let's take only one math operator
RationalNumber operator*(const RationalNumber& other) {
return { numerator * other.numerator, divider * other.divider };
}
// And one cast operator
operator int() {
return numerator / divider;
}
//...
};
The problem is, if I now want to multiply a RationalNumber object by an int, I'll get an error saying this operation is ambiguous:
int main() {
int integer = 2;
RationalNumber rational(1, 2);
rational * integer;
// Well, ambiguous:
// 1. Cast 'rational' to int and multiply. Or
// 2. Cast 'integer' to RationalNumber and RationalNumber::operator*
return 0;
}
Adding a constructor specifically for a single int doesn't work. The thing that does work is adding RationalNumber::operator*(int). The problem with this approach is that I need to overload ALL the math operations, comparison operations, etc., for int, since I have a cast operator to int (operator int).
Ok, but even if I do that, the second I try to use unsigned the compiler will bombard me with tons of warnings about casting unsigned to int and potential data loss (though it will compile):
class RationalNumber {
//...
public:
//...
RationalNumber operator*(int othNum) {
return { numerator * othNum, divider };
}
//...
};
int main() {
unsigned int uinteger = 2;
RationalNumber rational(1, 2);
rational * uinteger; // Well... AAAAAAA
return 0;
}
test.cpp: In function ‘int main()’:
test.cpp:29:14: warning: ISO C++ says that these are ambiguous, even though the worst conversion for the first is better than the worst conversion for the second:
29 | rational * integer;
| ^~~~~~~
test.cpp:14:18: note: candidate 1: ‘RationalNumber RationalNumber::operator*(int)’
14 | RationalNumber operator*(int othNum) {
| ^~~~~~~~
test.cpp:29:14: note: candidate 2: ‘operator*(int, unsigned int)’ (built-in)
29 | rational * integer;
|
To completely satisfy the compiler I need to add RationalNumber::operator*(unsigned). Now let's remember that I need to overload 3 casting operators and a dozen math/comparison operators, which means I need to create 6 extra functions for EACH operator overload.
So, can I somehow force the compiler to always convert values in favor of my class?
|
Implicit conversions can cause problems and can have a negative effect on readability. If possible make the conversions explicit and keep implicit conversions for exceptional cases, when you are sure that an implicit conversion is appropriate. The call is no longer ambiguous if you make the conversion explicit:
explicit operator int() {
return numerator / divider;
}
Live Demo
This is one of the many cases where C++'s defaults are a little odd. Conversions should be explicit by default, and only in special cases should they be implicit.
|
73,787,458
| 73,787,521
|
Do you know what happens when I invoke the copy constructor on self?
|
I do not know what happens when I invoke the copy constructor on self. In my opinion, the copy constructor will allocate new memory for the object and then copy the data of the old object into that new memory. So the address should change when I call A(a), but it does not.
class A {
public:
A(const A& t) {
std::cout << "sss" << endl;
};
A() {};
};
int main() {
A a;
std::cout << &a << endl;
a = A(a);
std::cout << &a << endl;
}
|
a = A(a); does create a new A by invoking the copy constructor.
But you then copy (assign) the value back into a and drop the temporary.
The address of a never changes (there is no way to change it, by the way).
|
73,787,588
| 73,787,688
|
Constexpr class function definition linking error
|
Function is declared in hpp file like this:
class StringProcessor
{
static constexpr const char* string_process(const char* initial_string, std::size_t string_length, const char* key, std::size_t key_length);
};
And defined in cpp like this:
constexpr const char* StringProcessor:: string_process(const char* initial_string, std::size_t string_length, const char* key, std::size_t key_length)
{
...
}
How do I call it? The following line produces an Undefined symbols for architecture x86_64: "StringProcessor::string_process(char const*, unsigned long, char const*, unsigned long)", referenced from: _main in main.cpp.o error:
std::cout << StringProcessor::string_process("Test", 4, "Test", 4) << std::endl;
|
constexpr functions are implicitly inline and therefore need to be defined in every translation unit where they are used. That usually means that the definition should go into a header shared between the translation units using the function.
Also, in order for constexpr to be of any use, the function needs to be defined before its potential use in a constant expression. Therefore it usually only really makes sense to define it directly on the first declaration, i.e. here in the class definition in the header.
|
73,788,875
| 73,884,165
|
How to get proper stack trace despite catch and throw; in std library
|
I use C++17, GCC, Qt Creator with its integrated GDB debugger.
I have code that simplifies down to this:
#include <iostream>
#include <iomanip>
// Example-implementation
#define assert(Condition) { if (!(Condition)) { std::cerr << "Assert failed, condition is false: " #Condition << std::endl; } }
#include <execinfo.h>
#include <signal.h>
#include <unistd.h>
void printStackTrace()
{
constexpr int requestedFrameCount = 20;
void* frames[requestedFrameCount];
auto actualFrameCount = backtrace(frames, requestedFrameCount);
std::cout << "Stack trace (" << actualFrameCount << " of " << requestedFrameCount << " requested frames):" << std::endl;
backtrace_symbols_fd(frames, actualFrameCount, STDOUT_FILENO);
std::cout << "End of stack trace." << std::endl;
}
void signalHandler(int signalNumber)
{
std::cout << "Signal " << signalNumber << " (" << sys_siglist[signalNumber] << ") happened!" << std::endl;
assert(signalNumber == SIGABRT);
printStackTrace();
}
__attribute_noinline__ void someFunction()
{
throw std::invalid_argument("Bad things happened");
}
__attribute_noinline__ void someFunctionInTheStandardLibraryThatICantChange()
{
try
{
someFunction();
}
catch (...)
{
throw;
}
}
__attribute_noinline__ int main()
{
signal(SIGABRT, signalHandler);
someFunctionInTheStandardLibraryThatICantChange();
return 0;
}
someFunctionInTheStandardLibraryThatICantChange is a placeholder for this thing:
template<bool _TrivialValueTypes>
struct __uninitialized_copy
{
template<typename _InputIterator, typename _ForwardIterator>
static _ForwardIterator
__uninit_copy(_InputIterator __first, _InputIterator __last,
_ForwardIterator __result)
{
_ForwardIterator __cur = __result;
__try
{
for (; __first != __last; ++__first, (void)++__cur)
std::_Construct(std::__addressof(*__cur), *__first);
return __cur;
}
__catch(...)
{
std::_Destroy(__result, __cur);
__throw_exception_again;
}
}
};
The program's output looks something like this:
On standard output:
Signal 6 (Aborted) happened!
Stack trace (13 of 20 requested frames):
/foo/Test(_Z15printStackTracev+0x1c)[0xaaaab9886d30]
/foo/Test(_Z13signalHandleri+0xbc)[0xaaaab9886e94]
linux-vdso.so.1(__kernel_rt_sigreturn+0x0)[0xffff95f3a5c8]
/lib64/libc.so.6(gsignal+0xc8)[0xffff94e15330]
/lib64/libc.so.6(abort+0xfc)[0xffff94e02b54]
/lib64/libstdc++.so.6(_ZN9__gnu_cxx27__verbose_terminate_handlerEv+0x188)[0xffff950d9358]
/lib64/libstdc++.so.6(_ZN10__cxxabiv111__terminateEPFvvE+0xc)[0xffff950d70ac]
/lib64/libstdc++.so.6(_ZN10__cxxabiv112__unexpectedEPFvvE+0x0)[0xffff950d7100]
/lib64/libstdc++.so.6(__cxa_rethrow+0x60)[0xffff950d7428]
/foo/Test(_Z47someFunctionInTheStandardLibraryThatICantChangev+0x1c)[0xaaaab9886f10]
/foo/Test(main+0x1c)[0xaaaab9886f48]
/lib64/libc.so.6(__libc_start_main+0xe4)[0xffff94e02fac]
/foo/Test(+0x2774)[0xaaaab9886774]
End of stack trace.
On standard error:
terminate called after throwing an instance of 'std::invalid_argument'
what(): Bad things happened
Note how the stack trace goes directly from someFunctionInTheStandardLibraryThatICantChange to rethrow. someFunction was not inlined (call printStackTrace from someFunction if you don't trust me).
I can't change the library function, but I need to know where the exception was originally thrown. How do I get that information?
One possible way is to use the debugger and set a "Break when C++ exception is thrown" breakpoint. But that has the significant drawbacks that it only works when debugging, it's external to the program and it is only really viable if you don't throw a bunch of exceptions that you don't care about.
|
What @n.1.8e9-where's-my-sharem. suggested in the comments ended up working. When you throw an exception in C++, behind the scenes the function __cxa_throw is called. You can replace that function, look at the stack trace, then call the replaced function.
Here is a simple proof-of-concept:
#include <dlfcn.h>
#include <cxxabi.h>
typedef void (*ThrowFunction)(void*, void*, void(*)(void*)) __attribute__ ((__noreturn__));
ThrowFunction oldThrowFunction;
namespace __cxxabiv1
{
extern "C" void __cxa_throw(void* thrownException, std::type_info* thrownTypeInfo, void (*destructor)(void *))
{
if (oldThrowFunction == nullptr)
{
oldThrowFunction = (ThrowFunction)dlsym(RTLD_NEXT, "__cxa_throw");
}
// At this point, you can get the current stack trace and do something with it (e.g. print it, like follows).
// You can also set a break point here to have the debugger stop while the stack trace is still useful.
std::cout << "About to throw an exception of type " << thrownTypeInfo->name() << "! Current stack trace is as follows:" << std::endl;
printStackTrace();
std::cout << std::endl;
oldThrowFunction(thrownException, thrownTypeInfo, destructor);
}
}
Integrated with the example in the question, the output is as follows:
About to throw an exception of type St16invalid_argument! Current stack trace is as follows:
Stack trace (7 of 20 requested frames):
/foo/Test(_Z15printStackTracev+0x3c)[0x55570996b385]
/foo/Test(__cxa_throw+0xa1)[0x55570996b653]
/foo/Test(_Z12someFunctionv+0x43)[0x55570996b555]
/foo/Test(_Z47someFunctionInTheStandardLibraryThatICantChangev+0x12)[0x55570996b581]
/foo/Test(main+0x21)[0x55570996b6ac]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3)[0x7fb71560f0b3]
/foo/Test(_start+0x2e)[0x55570996b28e]
End of stack trace.
Signal 6 (Aborted) happened!
Stack trace (13 of 20 requested frames):
/foo/Test(_Z15printStackTracev+0x3c)[0x55570996b385]
/foo/Test(_Z13signalHandleri+0xbd)[0x55570996b50f]
/lib/x86_64-linux-gnu/libc.so.6(+0x46210)[0x7fb71562e210]
/lib/x86_64-linux-gnu/libc.so.6(gsignal+0xcb)[0x7fb71562e18b]
/lib/x86_64-linux-gnu/libc.so.6(abort+0x12b)[0x7fb71560d859]
/lib/x86_64-linux-gnu/libstdc++.so.6(+0x9e911)[0x7fb715893911]
/lib/x86_64-linux-gnu/libstdc++.so.6(+0xaa38c)[0x7fb71589f38c]
/lib/x86_64-linux-gnu/libstdc++.so.6(+0xaa3f7)[0x7fb71589f3f7]
/lib/x86_64-linux-gnu/libstdc++.so.6(__cxa_rethrow+0x4d)[0x7fb71589f6fd]
/foo/Test(_Z47someFunctionInTheStandardLibraryThatICantChangev+0x25)[0x55570996b594]
/foo/Test(main+0x21)[0x55570996b6ac]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf3)[0x7fb71560f0b3]
/foo/Test(_start+0x2e)[0x55570996b28e]
End of stack trace.
terminate called after throwing an instance of 'std::invalid_argument'
what(): Bad things happened
This can be improved in the following ways:
The printed exception type name can be demangled.
The actual exception object can be pulled out of the void* by examining the type_info.
The stack trace can be printed to a string (instead of the console). Note that this can be undesirable if an exception was thrown due to the process running out of memory. But it's an option for 99.9% of use cases.
The stack trace can be attached to the thrown exception object. Either by inserting your own class with an std::string field (to plonk the stack trace in) in the inheritance hierarchy of exceptions you are throwing (it's headache-inducing arcane magic, requires changing existing source code, is totally unportable, but works for me); or maybe by using thread-local storage, but I haven't touched that yet.
The attached stack trace can be grabbed and printed by the existing terminate handler.
If I find enough time and patience I'll add that to this answer, but I might die of frustration before that happens.
|
73,789,293
| 74,022,551
|
CLION detecting UNIX operating system on WINDOWS computer
|
I am trying to write a program that opens up a com port to communicate but works on both LINUX and WINDOWS. To do this, I am using the method outlined in many other sources:
#ifdef __unix__
#linux code
#else
#windows code
#endif
If I copy the code into VSCode, it functions correctly. However, when I do this in CLION, it is detecting the operating system as LINUX instead of WINDOWS and greying out the WINDOWS code. Additionally, when I run the code, it crashes because it is attempting to execute the LINUX based code. This is an example of what I have written:
#ifdef __unix__
int comPort;
struct termios tty;
#else
HANDLE comPort;
char settingString[128];
#endif
And it is executing the Unix code. I understand that the check happens at compile time, but is there anything I am doing wrong in CLION?
|
I figured out the issue. My CLION toolchain was set to WSL by default, and I had to change it to MinGW.
|
73,789,689
| 73,789,730
|
Can G++ coexist side by side with Visual Studio?
|
I have Visual Studio 2013 Community installed. Now I want to learn more about compiling, as well as being more Windows-independent/portable, with Notepad/G++11/MinGW-64 (all on the same Windows 10 Home 64-bit platform, for C++ audio applications). Can GCC and VS coexist side by side?
|
Yes. You can, of course, have both installed side-by-side. But you cannot expect to be able to link object files / libraries compiled by different compilers (or even different compiler versions) in most cases.
|
73,789,722
| 73,793,278
|
even using error_handler got uncaught x3::expectation_failure
|
As I try to rewrite my Spirit X3 grammar, I put some rules (e.g. char_ > ':' > int_ > '#') into a parser class like this one shown here:
struct item_parser : x3::parser<item_parser> {
using attribute_type = ast::item_type;
template <typename IteratorT, typename ContextT>
bool parse(IteratorT& first, IteratorT const& last, ContextT const& ctx, x3::unused_type,
attribute_type& attribute) const
{
skip_over(first, last, ctx);
auto const grammar_def = x3::lexeme[ x3::char_ > ':' > x3::int_ > '#' ];
auto const grammar = grammar_def.on_error(
[this](auto&f, auto l, auto const& e, auto const& ctx){
return recover_error(f, l, e, ctx); }
);
auto const parse_ok = x3::parse(first, last, grammar, attribute);
return parse_ok;
}
auto recover_error(auto&f, auto l, auto const& e, auto const&) const {
std::cout << "item_parser::recover_error: " << e.which() << '\n';
std::cout << "+++ error_handler in: " << excerpt(f, l) << "\n";
if (auto semicolon = find(f, l, ';'); semicolon == l) {
return x3::error_handler_result::fail;
} else { // move iter behind ';'
f = semicolon + 1;
std::cout << "+++ error_handler out: " << excerpt(f, l) << "\n";
return x3::error_handler_result::accept;
}
}
};
(https://godbolt.org/z/MhjYWbE6x) - which works with good input:
std::string const input = R"(
X := a:42#;
X := b:66#;
X := c:4711#;
)";
as expected. If I change one line of the input to e.g. X := b66#; (note the missing colon : inside the literal), it recovers from the parse error as expected, but finally fails at the end (https://godbolt.org/z/3zabbsWsP).
Input
std::string const input = R"(
X := a:42#;
X := b66#;
X := c:4711#;
)";
with output:
item_parser::recover_error: ':'
+++ error_handler in: 'b66#;
X := c:4711#;
'
+++ error_handler out: '
X := c:4711#;
'
failure
For the return value of parse() (aka parse_ok) I expected true, since x3::error_handler_result::fail is never returned. (In the real code I use an error counter anyway.)
If I further change the outer rule item_assign to x3::eps > "X" > ":=" > item > ';', the program is terminated by an uncaught expectation_failure, which I didn't expect since it's guarded with an error_handler (tag); see https://godbolt.org/z/dGcYfbG3r
terminate called after throwing an instance of 'boost::wrapexcept<boost::spirit::x3::expectation_failure<__gnu_cxx::__normal_iterator<char const*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >'
what(): boost::spirit::x3::expectation_failure
Maybe I don't see my mistake ... or am I triggering a bug? Note that I'm using the rule's on_error(), which is mentioned at https://github.com/boostorg/spirit/issues/657 . Thanks in advance for any help.
|
First off, you need to make up your mind. You're duplicating the error-handling code (inconsistently), using a function object as the rule tag (error_handler) even though its operator() is completely ignored there, and complicating things by using the same tag on different rules.
Then you're also using the undocumented interface parser.on_error(f) - but it's probably undocumented for a reason. Finally, you're mixing that - for no apparent reason - with a custom parser, which you both want to pre-skip and behave as a lexeme.
Oh, and you're dragging in position_tagged, annotate_on_success and error_reporting, none of which you are actually using.
I'd simplify all the way. When you do, you may find that the real reason your code doesn't work as expected is that you're telling the rule to accept the result as if it were a success (error_handler_result::accept), but after a successful result a ';' is required. Since you position the cursor after the semicolon, that's going to fail:
========= "
X := a:42#;
X := b66#;
X := c:4711#;
" =====
<grammar>
<try>\n X := a:42#;\n </try>
<item_assign>
<try>\n X := a:42#;\n </try>
<item>
<try> a:42#;\n X := b66</try>
<success>;\n X := b66#;\n </success>
<attributes>[a, 42]</attributes>
</item>
<success>\n X := b66#;\n </success>
<attributes>[a, 42]</attributes>
</item_assign>
<item_assign>
<try>\n X := b66#;\n </try>
<item>
<try> b66#;\n X := c:47</try>
+++ error_handler expected: ':'
+++ error_handler in: '66#;
X := c'
+++ error_handler out: '
X := c:471'
<success>\n X := c:4711#;\n</success>
<attributes>[b, 0]</attributes>
</item>
<fail/>
</item_assign>
<fail/>
</grammar>
failure
Simply put the error-handler on the item_assign rule - the one that you know how to recover for:
struct item_assign_tag : error_handler {};
auto const item
= x3::lexeme[x3::char_ > ':' > x3::int_ > '#'];
auto const item_assign //
= x3::rule<item_assign_tag, ast::item_type>{"item_assign"} //
= x3::eps >> "X" >> ":=" >> item >> ';';
auto const grammar //
= x3::skip(x3::space)[*item_assign >> x3::eoi];
Now the output becomes: Live On Compiler Explorer
<item_assign>
<try>X := a:42#;\n </try>
<success>\n X :</success>
<attributes>[a, 42]</attributes>
</item_assign>
<item_assign>
<try>\n X :</try>
+++ error_handler expected: ':'
+++ error_handler in: '66#;
'
+++ error_handler out: '
X :'
<success>\n X :</success>
<attributes>[b, 0]</attributes>
</item_assign>
<item_assign>
<try>\n X :</try>
<success>\n </success>
<attributes>[c, 4711]</attributes>
</item_assign>
<item_assign>
<try>\n </try>
<fail/>
</item_assign>
- a:42
- b:0
- c:4711
Now, in my opinion it would be better if b:0 wasn't in the output, so perhaps use retry instead of accept: Live On CE:
+++ error_handler expected: ':'
+++ error_handler in: '66#;
X := c:'
+++ error_handler out: '
X := c:4711'
- a:42
- c:4711
|
73,790,277
| 73,801,820
|
Calling a GNU ar script in Makefile
|
I'm having some trouble using GNU ar script from a Makefile. Specifically, I'm trying to follow an answer to How can I combine several C/C++ libraries into one?, but ar scripting isn't well supported by make, as most Makefiles use the ar command-line interface and not a script. ar needs each line to be terminated by a new line (\n). This is not the same as multiple shell commands which can use a semicolon as a separator.
The ar commands are:
ar -M <<CREATE libab.a
ADDLIB liba.a
ADDLIB libb.a
SAVE
END
ranlib libab.a
I'm trying to merge the libraries because I'm creating a static library in a Makefile which depends on another static library, but the second one is created with CMake.
foo.a: foo.o
$(AR) $(ARFLAGS) $@ $^
final_lib.c: foo.a the_cmake_library.a
$(AR) -M <<CREATE $@
ADDLIB foo.a
ADDLIB the_cmake_library.a
SAVE
END
ranlib $@
The above doesn't work because make passes each recipe line to the shell as a separate command, so I'm getting a
make: CREATE: Command not found
If these were bash commands, I could use something like this answer, but that doesn't work:
ar -M <<CREATE libab.a; ADDLIB liba.a; ADDLIB libb.a; SAVE; END
ar doesn't have a command-line version of the ADDLIB command.
My current solution is:
final_lib.c: foo.a the_cmake_library.a
$(shell printf "EOM\nCREATE $@\nADDLIB foo.a\nADDLIB the_cmake_library.a\nSAVE\nEND\nEOM\n" > ARCMDS.txt)
$(AR) -M < ARCMDS.txt
ranlib $@
I find that very clumsy. Does anyone know of a better way to handle ar scripts in a Makefile? Thanks!
|
You don't need the shell function and you don't need to write the script to a file. ar reads it from stdin anyway, so why not just use a pipe?
Either:
final_lib.c: foo.a the_cmake_library.a
printf "EOM\nCREATE $@\nADDLIB foo.a\nADDLIB the_cmake_library.a\nSAVE\nEND\nEOM\n" | $(AR) -M
ranlib $@
Or something like:
final_lib.c: foo.a the_cmake_library.a
(echo EOM; \
echo "CREATE $@"; \
echo ADDLIB foo.a; \
echo ADDLIB the_cmake_library.a; \
echo SAVE; \
echo END; \
echo EOM) \
| $(AR) -M
ranlib $@
|
73,790,603
| 73,791,486
|
How to insert an element at some specific position in the list using classes?
|
#include<iostream>
using namespace std;
template<typename T>
class List
{
public:
T *values;
int capacity;
int counter;
public:
List()
{
values = NULL;
capacity = 0;
counter = 0;
}
List(int cap)
{
capacity = cap;
values = new T[cap];
counter = 0;
}
bool insert(T item)
{
if (isFull() == false)
{
values[counter] = item;
counter++;
return true;
}
return false;
}
bool insertAt(T item, int index)
{
if (isFull() == false && index < counter)
{
capacity++;
for (int i = capacity; i > index; i--)
values[i] = values[i - 1];
values[index] = item;
return true;
}
return false;
}
bool isFull()
{
if (counter == capacity)
{
return true;
}
return false;
}
void print()
{
for (int i = 0; i < capacity; i++)
{
cout << values[i] << " ";
}
}
};
int main()
{
List<int> obj1(5);
obj1.insert(1); //0
obj1.insert(2); //1
obj1.insert(3); //2
obj1.insert(4); //3
obj1.insertAt(3, 1);
obj1.values[1];
obj1.print();
}
Kindly look into this program; I have to insert an element at a given position. But when I run it, I get garbage in the last element of the array. Please check the insertAt function; I think it has a logical error. I have added the main function, and when I call the print function it gives garbage at the last index.
|
bool insertAt(T item, int index)
{
if (!isFull() && index < counter)
{
for (int i = counter; i > index; i--) {
values[i] = values[i - 1];
}
values[index] = item;
counter++;
return true;
}
return false;
}
void print()
{
for (int i = 0; i < counter; i++)
{
cout << values[i] << " ";
}
}
I have corrected these two functions; try them instead of the ones you implemented. The original insertAt incremented capacity and shifted starting from index capacity, writing past the end of the allocated array. Instead, the shift must start at counter, and counter (not capacity) is what grows by one. Likewise, print should only walk the counter elements actually stored, not the full capacity.
|