question_id | answer_id | title | question | answer |
|---|---|---|---|---|
73,334,227
| 73,364,666
|
Why is gcc not catching an exception from a multi target function?
|
I'm using the target attribute to generate different function implementations depending on the CPU architecture. If one of the functions throws an exception it doesn't get caught if I compile with gcc, but with clang it works as expected.
If there is only a single implementation of the function it does work for gcc as well.
Is this a bug in gcc?
Example (godbolt):
#include <stdexcept>
#include <iostream>
using namespace std;
__attribute__((target("default")))
void f() {
throw 1;
}
__attribute__((target("sse4.2,bmi")))
void f() {
throw 2;
}
int main()
{
try {
f();
}
catch(... )
{
std::cout << "Caught exception" << std::endl;
}
}
Output of gcc:
terminate called after throwing an instance of 'int'
Output of clang:
Caught exception
|
I reported this and a GCC developer confirmed it as a bug: link
For now, a workaround seems to be to wrap the function and use the gnu::noipa attribute to disable interprocedural optimizations:
__attribute__((target("default")))
void f() {
throw 1;
}
__attribute__((target("sse4.2")))
void f() {
throw 2;
}
[[gnu::noipa]]
void f1()
{
f();
}
int main()
{
try {
f1();
}
catch(... )
{
return 0;
}
return 1;
}
The bug is now fixed in gcc's master branch and should be released with gcc version 13.
|
73,334,632
| 73,340,871
|
Are incremental builds possible with copied files not built on the machine?
|
I'm having trouble setting up incremental builds in Azure DevOps. There are too many variables with workspace cleaning to ensure that I don't have to do a full build every time.
I had a thought that I could just always copy the built files to a location outside of the agents' purview, and then copy those files into my release directory before each build.
Would that allow for an incremental build?
|
You probably can 'fool' the incremental logic but you would be working against the tooling.
For an actual incremental build you need to build in the same place.
In the context of Azure DevOps, that means building the same job of the same pipeline on the same agent. You can't let the build move around between agents or even between work folders of the same agent. (It also means that your agent and the state of the agent work folder must be persistent across the builds.)
You can make the job, stage, or pipeline 'sticky' to one dedicated agent by using demands and capabilities.
Decide what will be on your dedicated agent. Will it be the entire pipeline or just a stage of the pipeline or just a job of a stage?
For the dedicated agent, create a capability that represents the build. Using the name of the pipeline (or pipeline+stage or pipeline+stage+job depending) for the name of the capability is handy and self-documenting. You can create the capability in Azure DevOps as a 'user capability' of the agent.
Change your pipeline to add a demand on the custom capability. The demand can test if the custom capability exists. In a YAML pipeline the demands are configured in the pool definition.
This is an easier and less brittle approach than trying to outsmart the incremental logic.
With this approach, all builds will be done in series on the one agent. If the build takes a long time (which may be the motivation for building incrementally) and the build is tied to one agent, the 'throughput' of builds will be limited. If a build's duration is 1 hour, there will be a maximum of 8 builds in an 8 hour work day.
Tying specific builds to specific agents is not the intent in Azure DevOps. For a monolithic legacy codebase where there is no notion of semantic versioning and immutable interfaces, you may have little choice. But a better way is to use package management. Instead of one big build, have multiple smaller builds that produce packages that are used by other builds. The challenge is that packages will not work well without some attention and discipline around versioning and keeping published interfaces and contracts unchanged.
|
73,335,101
| 73,335,323
|
c++ task won't execute after while loop
|
C++ newbie here. I'm not sure how to describe this but the task outside of the while-loop won't execute immediately. I need to enter the input value again to get it done.
Here is my code:
#include <iostream>
using namespace std;
int main()
{
int fourDigitInt, firstDigit, secondDigit, thirdDigit, fourthDigit, i = 0;
cout << "Enter a 4-digit integer : ";
cin >> fourDigitInt;
while (fourDigitInt > 9999 || !(cin >> fourDigitInt))
{
i++;
if (i >= 3)
{
cout << "You do not seem to understand the program instruction.\nPlease try again later." << endl;
return -1;
}
else
{
cout << "Error: Please make sure you are entering a 4-digit integer" << endl;
cin.clear();
cin.ignore(4, '\n');
cout << "Enter a 4-digit integer : ";
}
}
firstDigit = fourDigitInt / 1000 % 10;
secondDigit = fourDigitInt / 100 % 10;
thirdDigit = fourDigitInt / 10 % 10;
fourthDigit = fourDigitInt / 1 % 10;
cout << "1st digit : " << firstDigit << endl;
cout << "2nd digit : " << secondDigit << endl;
cout << "3rd digit : " << thirdDigit << endl;
cout << "4th digit : " << fourthDigit << endl;
}
Here are some problems I encountered:
1)If I enter a string first, it doesn't have any problem.
2)But if I enter any number less than 9999, it won't execute the calculation unless I
enter it again.
3)If I enter a 5-digit number the endl won't work. It will just display "Enter a 4-digit integer : Error: Please make sure you are entering a 4-digit integer" which is supposed to be on a different line.
Where exactly did I go wrong? Thank you in advance.
|
The main issue here is in the while loop condition. It should just check for the value of the fourDigitInt variable as that is what is important.
Looking closer, you will also notice that the if inside the loop gives up one attempt too early (after the second error message instead of the third). I fixed that as well by moving the i++ inside the else block.
while (fourDigitInt > 9999 || fourDigitInt < 1000) {
if (i >= 3) {
cout << "You do not seem to understand the program (ironic coming for OC) instruction.\nPlease try again later." << endl;
return -1;
}
else {
i++;
cout << "Error: Please make sure you are entering a 4-digit integer" << endl;
cout << "Enter a 4-digit integer : ";
cin >> fourDigitInt;
}
}
Any other issues with your code that may occur are not related to this question.
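For completeness, here is a minimal self-contained sketch that combines the range check with handling of non-numeric input (the attempt limit and messages mirror the question; this goes beyond the fix shown above):
#include <iostream>
#include <limits>

int main() {
    int fourDigitInt = 0;
    int attempts = 0;
    std::cout << "Enter a 4-digit integer : ";
    // loop until a number in [1000, 9999] is read; also recovers from non-numeric input
    while (!(std::cin >> fourDigitInt) || fourDigitInt < 1000 || fourDigitInt > 9999) {
        if (++attempts >= 3) {
            std::cout << "You do not seem to understand the program instruction.\nPlease try again later." << std::endl;
            return -1;
        }
        std::cin.clear();                                                    // reset the stream's fail state
        std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');  // discard the rest of the line
        std::cout << "Error: Please make sure you are entering a 4-digit integer" << std::endl;
        std::cout << "Enter a 4-digit integer : ";
    }
    std::cout << "You entered " << fourDigitInt << std::endl;
}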
|
73,335,380
| 73,335,780
|
Frequently insert and delete elements using std::vector
|
I have a game which could have a million "boxes". For convenience, I use std::vector<shared_ptr<Boxes>> to save them. But today, I want to break some "box" if the box is subjected to a certain impact, so I have to split the box into "two small boxes".
Here is the question - in the game, there are so many "boxes" that would break, which involves std::vector inserts (new small boxes generated) and deletes (old boxes split). And each "box" contains a lot of information like mass, shape, volume, and so on. So, I have no idea how to enhance the performance of the std::vector for frequent inserts and deletes. And it is difficult for me to change to another data structure, so I have to keep using std::vector.
I have read some tricks for using std::vector efficiently, like using reserve() before inserting elements to avoid reallocating memory; or maybe move semantics can be used here?
|
Starting from an empty data structure:
std::vector<std::shared_ptr<Box>> boxes;
We can reserve capacity for 2 million boxes. This is probably too much, but allows for each box to be split once:
boxes.reserve(2'000'000);
Now, at the start of the game you can use push_back to fill up this vector.
At some later point, you decide you want to break a box in two.
You can replace the original box with one of the fragments, and push_back the other one:
void breakBox(int i) {
std::shared_ptr<Box> fragment1 = ...; // something that reads from boxes[i]
std::shared_ptr<Box> fragment2 = ...; // something that reads from boxes[i]
boxes[i] = fragment1;
boxes.push_back(fragment2);
}
Simply adding boxes is still O(1) as long as you stay under the 2M capacity. Otherwise you incur a reallocation that copies (or moves) the whole boxes vector, and then you are good for quite a while again.
Finally, should you ever want to remove a box, you can use a trick mentioned in the comments: swap it with the last box and then free the last box. This is O(1).
void deleteBox(int i) {
std::swap(boxes[i], boxes.back());
boxes.pop_back();
}
|
73,336,245
| 73,336,277
|
const char* allows to modify the string?
|
I understand that using const char* is a modifiable pointer to a constant character. As such, I can only modify the pointer, but not the character. Because of this, I do not understand why I am allowed to do this:
const char* str{"Hello World"};
str = "I change the pointer and in turns it changes the string, but not really.";
How does this work? Is there somewhere in memory where all the characters are stored and I can just point to them as I wish? Furthermore, the address of str does not change throughout this process. Since the only thing that can change is the address, I really don't understand what's going on.
Maybe part of the problem is that I try to understand this as if the string was an integer. If I do:
int number{3};
const int* p_number{&number};
*p_number = 4;
This is not valid, hence why I expect str to not be modifiable. In other words, where am I pointing so that "Hello World" becomes "I change the pointer and this changes the string"?
EDIT:
I get that I create a new string at another address, but when I do:
const char *str{"HelloWorld"};
std::cout << &str << std::endl;
str = "I create new string, but I get same address";
std::cout << &str << std::endl;
I always get the same address twice.
|
No string was replaced; you just reassigned str so that it points to a different string literal (string literals live in static, typically read-only, memory, not on the stack).
Try this snippet to see that the pointer's value (the address it holds) changes:
#include <stdio.h>
int main(void)
{
const char *ptr = "Hello";
printf("Before: %p\n", ptr);
ptr = "World";
printf("After: %p\n", ptr);
}
Output [RESULT MAY VARY]:
Before: 0x402004
After: 0x402016
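To see the distinction the edit asks about - &str is the address of the pointer variable itself and never changes, while the value stored in str (the address of the literal it points to) does change - here is a small sketch along the same lines:
#include <stdio.h>
int main(void)
{
    const char *str = "Hello";
    printf("&str = %p   str = %p\n", (void *)&str, (const void *)str);
    str = "World";   /* str now holds the address of a different string literal */
    printf("&str = %p   str = %p\n", (void *)&str, (const void *)str);
    /* &str prints the same both times; the pointer's value is what changed */
}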
|
73,336,458
| 73,336,613
|
Can I compute values that require a special function during compilation of C++?
|
I appreciate I am being somewhat vague about what is exactly my issue, but I think that the fundamental question is clear. Please bear with me for a moment.
In brief, I have a static constexpr array of points which are used to find certain bounds I need to use. These bounds depend only on the array, so they can be precomputed. However, we want to be able to change these points and it is a pain to go and change every value every time we try to test something.
For example, let's say that I have the following setup:
The static constexpr array is
static constexpr double CHECK_POINTS[7] = { -1.5, -1.0, -0.5, 0.0, -0.5, 1.0, 1.5 };
and then in a function I'm calling, I have the following block of code:
std::vector<double> bounds = {0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0};
for(int i=0; i<bounds.size(); i++)
{
bounds[i] = std::exp(CHECK_POINTS[i]);
}
Clearly, the values of bounds can be computed during compilation. Is there any way I can make gcc do that?
EDIT: The vector in my code block is not essential, an array will do.
|
constexpr std::vector does not work here, but you can use std::array.
std::exp is not constexpr, so strictly speaking you need to find constexpr alternatives; however, it happens to work in gcc as an extension.
#include <array>
#include <cmath>
static constexpr double CHECK_POINTS[7] = { -1.5, -1.0, -0.5, 0.0, -0.5, 1.0, 1.5 };
static constexpr auto vec = [](){
std::array bounds = {0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0};
for(int i=0; i<bounds.size(); i++)
{
bounds[i] = std::exp(CHECK_POINTS[i]);
}
return bounds;
}();
It compiles fine with gcc: https://godbolt.org/z/x5a9q9M1d
(constexpr std::exp is a gcc extension, thanks to @phuclv for pointing that out)
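As a quick check that the values really are available at compile time (again relying on the gcc extension; the index chosen here is just for illustration, since exp(0) is exactly 1):
static_assert(vec[3] == 1.0, "bounds[3] = exp(0.0) should be computed at compile time");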
|
73,336,690
| 73,336,691
|
Supertype of std::plus and std::minus / how can I use std::plus and std::minus in a single object?
|
I was trying to simplify a piece of code by removing redundant code that only differed in += and -=. My idea was to use std::plus and std::minus instead and thus combine the two methods into one.
Minimal code is:
#include <functional>
int main()
{
// true is actually some condition
std::binary_function<long, long, long> direction = true ? std::plus<long>() : std::minus<long>();
}
The error is
error C2446: ':': no conversion from 'std::minus' to 'std::plus'
I don't want to convert std::minus to std::plus, I want to convert everything to std::binary_function.
I tried to help the compiler using a static cast
std::binary_function<long, long, long> direction = true
? static_cast<std::binary_function<long, long, long>>(std::plus<long>())
: std::minus<long>();
which gives me
error C2440: 'static_cast': cannot convert from 'std::plus' to 'std::binary_function<long,long,long>'
Long question short: how can I use std::plus and std::minus in a single object?
Using C++14 in Visual Studio, but open for solutions in newer C++ versions as well.
|
The inheritance from std::binary_function<T, T, T> was removed in C++11. std::binary_function itself was deprecated in C++11 and removed in C++17.
You can use function instead:
std::function<long(long, long)> direction = true
? static_cast<std::function<long(long, long)>>(std::plus<long>())
: std::minus<long>();
and you can simplify more by omitting the type at the operations and using auto (note that std::plus() and std::minus() without template arguments rely on C++17 class template argument deduction; in C++14 write std::plus<>() and std::minus<>()):
auto direction = true
? static_cast<std::function<long(long, long)>>(std::plus())
: std::minus();
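Hypothetical usage, just to illustrate the call site (the operands are made up):
#include <functional>
#include <iostream>

int main()
{
    bool add = true;  // stands in for "some condition"
    std::function<long(long, long)> direction = add
        ? static_cast<std::function<long(long, long)>>(std::plus<long>())
        : std::minus<long>();
    std::cout << direction(10, 3) << '\n';  // prints 13; with add == false it would print 7
}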
|
73,337,020
| 73,337,829
|
Behaviour of simple multithread program on C++
|
I'm training on C++ and threads. I found the following code from this page and compiled it on my Ubuntu 20.04 machine:
// C program to show thread functions
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
void* func(void* arg)
{
// detach the current thread
// from the calling thread
pthread_detach(pthread_self());
usleep(3*1000000);
printf("Inside the thread\n");
// exit the current thread
pthread_exit(NULL);
}
void fun()
{
pthread_t ptid;
// Creating a new thread
pthread_create(&ptid, NULL, &func, NULL);
printf("This line may be printed"
" before thread terminates\n");
// The following line terminates
// the thread manually
// pthread_cancel(ptid);
// Compare the two threads created
if(pthread_equal(ptid, pthread_self()))
printf("Threads are equal\n");
else
printf("Threads are not equal\n");
// Waiting for the created thread to terminate
pthread_join(ptid, NULL);
printf("This line will be printed"
" after thread ends\n");
pthread_exit(NULL);
}
// Driver code
int main()
{
fun();
return 0;
}
I just added a usleep in the thread function but the behavior doesn't change.
If I understand everything correctly the message "This line will be printed after thread ends" shall be always printed at the very end of the program, when thread ptid is ended.
But in reality, it often happens that this ending message is printed and then, after 3 seconds (due to the usleep call), the message "Inside the thread" is printed, which suggests that thread ptid is still alive and running.
Without the usleep (as in the original code) the same thing happened, just without the 3-second wait in the middle.
What's going wrong?
|
As spotted in the comments, the source of the main issue in your code is that you call pthread_detach.
The later pthread_join cannot wait for a thread you detached (calling pthread_join on a detached thread is actually undefined; in practice it typically fails and returns immediately).
At this point both threads "live their own life", and the only thing keeping the process alive is the pthread_exit(NULL) at the end of fun(): when the main thread exits via pthread_exit instead of returning from main, the process keeps running until the remaining threads have terminated.
So, as the threads are not synchronized in any way, what you observe is just a race condition.
When you add the usleep, you delay the created thread enough for the main thread to finish its printing first.
Without the usleep, the race just produces a random order of logs.
It is also worth noticing that stdout is buffered, so the output may not be displayed at the moment the printf call is made.
Removing the pthread_detach() call restores the synchronization provided by pthread_join, so the logs will appear in the order you expect (leaving aside the stdout buffering issue).
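For reference, a minimal sketch of the fixed version - simply the question's code with the pthread_detach call removed - assuming the goal is for pthread_join to actually wait:
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

void* func(void* arg)
{
    // no pthread_detach, so the thread stays joinable
    usleep(3 * 1000000);
    printf("Inside the thread\n");
    return NULL;
}

int main(void)
{
    pthread_t ptid;
    pthread_create(&ptid, NULL, &func, NULL);
    printf("This line may be printed before thread terminates\n");
    pthread_join(ptid, NULL);   // now really blocks for ~3 seconds
    printf("This line will be printed after thread ends\n");
    return 0;
}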
|
73,338,291
| 73,338,420
|
How to use template template as template argument properly?
|
I have this code with error:
template <typename T1>
struct A {
template <typename T2>
struct B {
using type = char /* actually some class depending on T1 and T2 */;
};
template <typename T2>
using type = B<T2>;
};
template <template <typename> class TT>
struct C {
template <typename T>
using type = TT<T> /* actually something more complex */;
};
template <typename T>
struct D {
using type = typename C<typename A<T>::type>::type;
// ^
// g++: error: type/value mismatch at argument 1 in template parameter list
// for ‘template<template<class> class TT> struct C’
};
int main() {}
If I'm not mistaken, the problem is that A<T>::type is template <class> class, not simple typename as I declared. I tried remove keyword typename after A<T>::type or replace it on template or template <class> class, but it doesn't work.
How can I solve this problem? I don't want to significantly edit the code of the structs, and I really don't want to declare A with 2 template arguments or define B outside A or anything else that significantly changes A and B, due to the logic of the program, but I am ready to consider any helpful idea :)
|
You need template, but in a different place:
using type = typename C<A<T>::template type>::type;
// ^~~~~~~~
IIRC, it becomes optional and deprecated in C++23 (that is, template followed by an identifier without <...> after it).
|
73,338,485
| 73,359,193
|
Sometimes Qt's paintGL does not draw an OpenGL edge of Bullet's Physics colliders
|
It works about 50% of the time:
I use a timer for redrawing:
void Widget::animationLoop()
{
m_deltaTime = m_elapsedTimer.elapsed() / 1000.f;
m_elapsedTimer.restart();
m_pWorld->stepSimulation(m_deltaTime, 8);
update();
}
I call collider's drawing (m_pWorld->debugDrawWorld();) like this:
void Widget::paintGL()
{
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
m_pWorld->debugDrawWorld();
m_projViewMatrix = m_projMatrix * m_viewMatrix;
m_pPlayer->position = m_pPlayerCollider->getPosition();
m_pPlayer->rotation = m_pPlayerCollider->getRotation();
m_pPlayer->draw(m_projViewMatrix);
m_pGround->draw(m_projViewMatrix);
m_pPlatforms->draw(m_projViewMatrix);
}
I have the DebugDrawer class that I inherit from btIDebugDraw. I override the drawLine method to transform the 1x1x1 cube to make a segment to draw. I draw the segment by calling m_pColliderEdge->draw(m_projViewMatrix); in the drawLine method.
I tried to send a pointer to the QOpenGLWidget object to the DebugDrawer constructor:
DebugDrawer(QOpenGLWidget *widget, btDynamicsWorld *pWorld, ColliderEdge *pColliderEdge);
to keep it for makeCurrent but this did not help:
void DebugDrawer::drawLine(const btVector3 &from, const btVector3 &to, const btVector3 &color)
{
/* ... */
m_pWidget->makeCurrent();
m_projViewMatrix = projMatrix * viewMatrix;
m_pColliderEdge->draw(m_projViewMatrix);
}
|
I forgot to get the uMvpMatrix location in the ColliderEdge class:
ColliderEdge::ColliderEdge(QOpenGLShaderProgram *program,
const VertexBuffersData &vertexBuffers)
: m_pProgram(program)
{
m_vertPosBuffer = vertexBuffers.vertPosBuffer;
m_amountOfVertices = vertexBuffers.amountOfVertices;
m_pProgram->bind();
m_aPositionLocation = m_pProgram->attributeLocation("aPosition");
m_uMvpMatrixLocation = m_pProgram->uniformLocation("uMvpMatrix");
}
Sorry, guys. I will publish simpler examples next time I ask a question.
|
73,339,042
| 73,340,039
|
Is there a way to embed a web browser inside of an ImGui Window?
|
Question:
Hello, I am looking for a way to embed a web browser inside of an ImGui Window, kind of like a button or text control.
Is there a library, hacky workaround or something of that sort, that I could use without switching to a different UI library (because some features in the project still have regular imgui). The best solution would be if I could also control the website or at least read what the console outputs/execute a function in C++ when something in js (ex. button press) happens.
As said I wouldn't like to switch the library, but I know that it is very unlikely that something like this exists, as it is very specific and not in any way the best approach to a webbrowser inside of an app.
Desperate Rambling and Life Story:
It's kind of like if ImGui and Electron had a child. Frankenstein like project.
I planned to use Electron for this, but quickly realised that porting the C++ code isn't the ideal thing to do, especially given the size of the project; the speed and low-level access I would lose when porting to Electron aren't worth it.
It is going to be Windows and only Windows I am dealing with, so I am desperate enough to resort to embedding Internet Explorer if necessary. However WebKit, Firefox or Chromium would be best suited.
Kind Regards fellow coders!
|
Since you also are willing to accept a "hacky" approach to this, you can in programmer terms "borrow" some code from this repository which is a full CEF (Chromium Embedded Framework) implementation inside of ImGui. Found this while browsing the issues of Ocornut's ImGui.
Comment and repo on GitHub by hendradarwin
It is a bit old, since work on it stopped in 2020, however it is relatively new compared to other solutions, so you can easily implement this without too much hassle. If your C++ skills are good enough, you could even update it on par with CEF, although that is not really necessary since most features should be supported.
Please keep in mind that this is not the best approach to this problem, the best would be to migrate to a UI library that has native controls (as mentioned Qt or GTK). However I understand that migration is never easy, so I hope your journey with ImGui goes well.
If you do decide to update it, please also contribute your work, either to the repo directly or just publish your updated fork. Hope this helps all others looking for a solution to this intricate problem.
Edit: I am unsure if you can "control" the browser but I believe you can execute javascript and get a value from any variable. It depends on what you really need, I don't know about console output either, however it should all be documented in the CEF Documentation.
|
73,339,565
| 73,339,663
|
build and pass list of types to variadic templates
|
I'm very new to meta-programming and I'm experimenting with some examples.
I've designed a variadic template class as follows:
template <typename TA, typename... TB>
class A
[...]
This could be instantiated simply by passing different types like
A<Class1, Class2, Class3> * a = &(A<Class1, Class2, Class3>::instance());
where the pack TB only contains Class2 and Class3.
Now, what I would like to achieve is to define in my header file a list of default types and build it gradually using #ifdef definitions. I'll make an example
// concept of list of types
template<class... Ts>
struct typelist{};
// concatenating two typelists
template<class... Ts, class... Us>
auto concat(typelist<Ts...>, typelist<Us...>) -> typelist<Ts..., Us...>;
static typelist<> defaultTypelist;
#ifdef MACRO1
defaultTypelist = concat(defaultTypelist, typelist<Class2>);
// or something like defaultTypelist.push_front(Class2);
#endif
#ifdef MACRO2
defaultTypelist = concat(defaultTypelist, typelist<Class3>);
// or something like defaultTypelist.push_front(Class3);
#endif
By doing that, in my main I would like to instantiate my object as follows:
A<Class1, defaultTypelist> * a = &(A<Class1, defaultTypelist>::instance());
The main problem I see here is that I'm not able to gradually append types to defaultTypelist since it was declared at the beginning as the empty typelist<>. The compiler also returns an error if I try to pass an empty typelist as the second parameter.
This may be trivial but so far I was not able to solve this, mainly because, as a beginner in meta-programming, compiler errors are not very helpful to me.
For this project, I have to use C++14 and below.
|
Big fat warning: don't do this.
The reason I'm saying this is that it will change over the course of the code. As you gradually add types, you'll end up with defaultTypeList having multiple meanings in different places in the code.
That said... Can it be done? Of course,
#define DEFAULT_TYPE_LIST typelist<>()
// ...
#ifdef MACRO1
static constexpr auto macro1_prev_dtl = DEFAULT_TYPE_LIST;
#undef DEFAULT_TYPE_LIST
#define DEFAULT_TYPE_LIST concat(macro1_prev_dtl, typelist<Class2>())
#endif
// ...
#ifdef MACRO2
static constexpr auto macro2_prev_dtl = DEFAULT_TYPE_LIST;
#undef DEFAULT_TYPE_LIST
#define DEFAULT_TYPE_LIST concat(macro2_prev_dtl, typelist<Class3>())
#endif
... then use DEFAULT_TYPE_LIST wherever you'd like to access the current state of the list.
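For completeness, a minimal C++14 sketch (the class names are stand-ins) showing concat with a definition - needed once it is called in an evaluated context such as the constexpr variables above - plus a helper that unpacks the final list into A's parameter pack:
#include <type_traits>

struct Class1 {}; struct Class2 {}; struct Class3 {};   // stand-ins for the real classes

template <typename TA, typename... TB>
struct A {};                                             // stand-in for the real A

template<class... Ts>
struct typelist {};

// definition, so concat can be used in evaluated (not just decltype) contexts
template<class... Ts, class... Us>
constexpr typelist<Ts..., Us...> concat(typelist<Ts...>, typelist<Us...>) { return {}; }

// unpack a typelist into A's parameter pack
template<class T, class List>
struct apply_list;

template<class T, class... Ts>
struct apply_list<T, typelist<Ts...>> { using type = A<T, Ts...>; };

int main()
{
    constexpr auto defaults = concat(typelist<Class2>{}, typelist<Class3>{});   // mirrors DEFAULT_TYPE_LIST
    using MyA = apply_list<Class1, std::remove_const_t<decltype(defaults)>>::type;
    static_assert(std::is_same<MyA, A<Class1, Class2, Class3>>::value, "defaults unpacked into A");
}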
|
73,339,664
| 73,339,874
|
target_include_directories not including INTERFACE library
|
# Adding a header-only library
set(INCLUDE_LOCATION "${PROJECT_SOURCE_DIR}/../include")
add_library(myLib INTERFACE)
target_include_directories(myLib INTERFACE "${INCLUDE_LOCATION}")
# Printing the include paths for the header only target
get_target_property(dirs myLib INCLUDE_DIRECTORIES)
foreach(dir ${dirs})
message(STATUS "dir='${dir}'")
endforeach()
The above cmake will show dir=NOTFOUND
There are no problems with ${INCLUDE_LOCATION}, I have checked it.
This works perfectly fine if I swap the library to executable.
I am not sure what I am missing.
|
Interface libraries don't have non-INTERFACE properties. INCLUDE_DIRECTORIES is simply invalid here.
You are looking for INTERFACE_INCLUDE_DIRECTORIES.
cmake_minimum_required(VERSION 3.24)
project(test)
set(INCLUDE_LOCATION "${PROJECT_SOURCE_DIR}/../include")
add_library(myLib INTERFACE)
target_include_directories(myLib INTERFACE "${INCLUDE_LOCATION}")
get_target_property(dirs myLib INTERFACE_INCLUDE_DIRECTORIES)
foreach(dir IN LISTS dirs)
message(STATUS "dir='${dir}'")
endforeach()
Prints:
-- dir='/home/reinking/test/../include'
|
73,339,793
| 73,339,872
|
How to understand this std::bind usage
|
I am trying to understand this usage of std::bind().
For this example:
std::bind(&TrtNodeValidator::IsTensorRTCandidate, &validator, std::placeholders::_1)
They are trying to bind the function TrtNodeValidator::IsTensorRTCandidate(). However, according to the definition of this API:
Status TrtNodeValidator::IsTensorRTCandidate(const Node* node);
It only accepts one parameter. Why do they still need &validator when there exists std::placeholders::_1?
|
TrtNodeValidator::IsTensorRTCandidate() is a non-static member function.
Aside from its explicit parameters, it requires a TrtNodeValidator* to become its implicit this parameter.
This usage:
std::bind(&TrtNodeValidator::IsTensorRTCandidate, &validator,
std::placeholders::_1)
will produce a callable object, call it f, such that f(node) calls:
(&validator)->IsTensorRTCandidate(node)
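A self-contained illustration of the same pattern with a made-up class (Validator::check stands in for TrtNodeValidator::IsTensorRTCandidate), including the equivalent lambda:
#include <functional>
#include <iostream>

struct Validator {
    bool check(int value) const { return value > 0; }
};

int main()
{
    Validator validator;

    // &validator fills the implicit "this" parameter; _1 is the slot for the explicit parameter
    auto bound = std::bind(&Validator::check, &validator, std::placeholders::_1);

    // the equivalent lambda, often considered clearer
    auto lambda = [&validator](int value) { return validator.check(value); };

    std::cout << bound(5) << ' ' << lambda(-3) << '\n';  // prints "1 0"
}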
|
73,340,077
| 73,340,918
|
Catch2 CLion error, "No tests were found"
|
I have a folder structure (image omitted) and I am trying to get Catch2 set up. My CMake files look like:
the topmost CMake:
cmake_minimum_required(VERSION 3.21)
project(throwaway)
set(CMAKE_CXX_STANDARD 14)
add_subdirectory(src)
add_subdirectory(tests)
add_executable(foo_main main.cpp)
target_link_libraries(foo_main PUBLIC foo_lib)
my src CMake:
add_subdirectory(foo)
my src/foo CMake:
add_library(foo_lib Foo.cpp)
my tests CMake:
add_subdirectory(foo)
add_executable(foo_test catch_runner.cpp)
target_link_libraries(foo_test PUBLIC
foo_test_lib)
my tests/foo CMake
add_library(foo_test_lib
FooTests.cpp)
target_link_libraries(foo_test_lib PUBLIC
foo_lib)
From there, I used CLion's Catch2 integration to set up my run config (screenshot omitted), so nothing crazy here.
However, I get the "No tests were found" error.
I discovered that the error goes away if I edit the tests CMake into
add_executable(foo_test catch_runner.cpp foo/FooTests.cpp)
target_link_libraries(foo_test PUBLIC foo_lib)
and the test works as expected. But I obviously don't want to manually add each file into an executable, I want to be able to make a library that I can just slap the catch_runner into.
I have no clue why the test doesn't work when I link it as a library, but works when I add it manually. Any ideas?
|
Figured it out from the links at How do you add separate test files with Catch2 and CMake?
The problem is that when the test sources live in an ordinary static library, the linker is free to drop object files whose symbols are never referenced, so Catch2's self-registering test cases in FooTests.cpp never make it into the foo_test executable. Making it an OBJECT library forces those translation units to be linked in.
In my tests/foo CMake I changed it to
add_library(foo_test_lib OBJECT
FooTests.cpp)
target_link_libraries(foo_test_lib PUBLIC
foo_lib)
|
73,340,220
| 73,340,438
|
inline function in an array of functions, good idea for performance?
|
The question is in the title but it's not necessarily clear so here's a code example of what I wanted to do:
#include <iostream>
typedef void (*functions)(void);
inline void func_A()
{
std::cout << "I am A !" << std::endl;
}
inline void func_B()
{
std::cout << "I am B !" << std::endl;
}
int main()
{
functions funcs[2] = {func_A, func_B};
for (int i = 0; i < 2; i++)
funcs[i]();
return 0;
}
Is this a good idea, or does it defeat the benefit of an inline function? Does the compiler handle this well?
Thanks in advance !
|
You can easily look at the output of the compilers to see how they handle your code. Here my results for current compilers with O2 optimization flags, see https://godbolt.org/z/ecjjz88hs.
Current MSVC seems to not optimize the calls at all. It doesn't even unroll the loop and therefore also doesn't determine which functions are actually indirectly called and also consequently can't inline the function calls.
GCC 12.1 does unroll the loop and turns the indirect function calls into direct function calls, but decides not to inline them.
Clang 14.0.0 does unroll the loop and also does inline the two calls to func_A and func_B.
Note that these results can easily change if you change the details of your example, in particular the number of functions called. It is also non-obvious that inlining the function calls is the best decision here. Compared to the cout << statements the function call overhead is negligible.
The inline keyword on the functions has no impact on these behaviors. The compilers considered above behave exactly the same with or without it.
inline may be a hint to the compiler that the function should be considered for inlining. But that is only a very secondary purpose of the keyword. Compilers will consider non-inline functions for inlining as well and mainly use internal heuristics to decide whether inlining is appropriate or not.
The main purpose of the inline keyword is to be able to define a function in a header file. An inline function definition may (in contrast to a non-inline one) appear in multiple translation units. (In fact an inline function must have its definition appear in any translation unit using that function.) This helps inlining by giving the compiler access to the definition of the function directly in the translation unit using it. Without link-time optimization (LTO) compilers can't really inline function calls across translation units.
If a function is meant to be used in only one translation unit anyway, then it should be marked static, not inline, so that it cannot conflict with other functions of the same name that are meant to be local to another translation unit.
|
73,340,368
| 73,340,376
|
how to force shutdown computer using cpp program
|
I have been working on a small project. I am making a timer for my little brother's PC so that he will not be able to use the computer for more than a time set by me each day.
The problem I am facing is that I have tried a bunch of system commands to shut down the computer automatically, but when the command runs it asks whether to close the running applications. I want to force shut down all running apps. Below is the code I am trying, but it is not quite working out:
system("C:\\Windows\\System32\\shutdown /s /t 0");
This statement, when it is executed, asks about closing the applications which are running or cancelling the shutdown process.
ExitWindowsEx(EWX_SHUTDOWN | EWX_FORCE,
SHTDN_REASON_MAJOR_OPERATINGSYSTEM |
SHTDN_REASON_MINOR_UPGRADE |
SHTDN_REASON_FLAG_PLANNED);
I got the above function from the Microsoft docs but it doesn't work on my computer, even though it doesn't show any error. Moreover, Visual Studio offered its own suggestion of using a newer API, which is given below:
InitiateSystemShutdownEx(NULL, NULL, 0, true, false, SHTDN_REASON_FLAG_USER_DEFINED);
This also doesn't work even though it executes. I am using Windows 11. Below I am giving the whole code I have so far:
#include <iostream>
#include <stdlib.h>
#include<Windows.h>
#include<fstream>
#pragma warning(disable : 4996)
#pragma comment(lib, "user32.lib")
#pragma comment(lib, "advapi32.lib")
using namespace std;
void timer()
{
int hours=0;
int minutes=0;
int seconds=0;
ofstream tim;
while (seconds != 5)
{
tim.open("time.txt", ios::out | ios::trunc);
seconds++;
Sleep(1000);
if (seconds == 60)
{
minutes++;
seconds = 0;
}
if (minutes == 60)
{
hours++;
minutes = 0;
}
tim << hours << endl << minutes << endl << seconds;
tim.close();
}
}
int main()
{
timer();
ExitWindowsEx(EWX_SHUTDOWN | EWX_FORCE,
SHTDN_REASON_MAJOR_OPERATINGSYSTEM |
SHTDN_REASON_MINOR_UPGRADE |
SHTDN_REASON_FLAG_PLANNED);
cout << endl << endl;
return 0;
}
Any type of help would be appreciated.
|
You need to add the /f flag to force a shutdown.
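With the system() approach from the question, that would look like this (sketch):
#include <cstdlib>

int main()
{
    // /s = shut down, /f = force running applications to close, /t 0 = no delay
    return std::system("C:\\Windows\\System32\\shutdown /s /f /t 0");
}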
|
73,340,416
| 73,340,452
|
When does a struct require a default constructor?
|
I wrote a struct with custom constructor designed to be a data member of a class:
struct HP
{
int max_hp;
int hp;
// HP(){}; it is required for the next class constructor function. Why?
HP(int max_hp) {
this->max_hp=max_hp;
this->hp=max_hp;
}
};
class Character
{
protected:
HP hp;
friend struct HP;
public:
Character(int hp);
};
Character reports: Error C2512: 'HP': no appropriate default constructor available
Character::Character(int hp){
this->hp = HP(hp);
}
Where did I implicitly initialize HP such that a default constructor is required? inline?
|
All class members are initialized before entering the body of the constructor.
Character::Character(int hp)
{ // already too late to initialize Character::hp
this->hp = HP(hp); // this is an assignment
}
Without a HP default constructor, Character(int hp); cannot initialize its hp member unless it can provide arguments to the HP constructor, and the only way to forward the arguments to the HP constructor is with a Member Initializer List.
Example:
struct HP{
int max_hp;
int hp;
HP(int max_hp): // might as well use member initializer list here as well
max_hp(max_hp),
hp(max_hp)
{
// no need for anything in here. All done in Member Initializer List
}
};
class Character
{
protected:
HP hp;
friend struct HP; // not necessary. HP has no private members.
public:
Character(int hp);
};
Character::Character(int hp):
hp(hp)
{
}
|
73,340,907
| 73,341,083
|
If C and C++'s double (and float) is IEEE 754-1985, then are the integer representations and Infinity, -0, NaN, etc, all left unused?
|
It appears that JavaScript's number type is exactly the same as C and C++'s double type, and both are IEEE 754-1985.
JavaScript can use IEEE 754 for integers, but when the number becomes big or undergoes an arithmetic calculation such as division by 10 or by 3, it seems like it can switch into floating-point mode. Now C and C++ only use IEEE 754 for double and therefore only use the floating-point portion and do not use the "integer" portion. Therefore, have C and C++ left the integer representations unused?
(And has C left NaN, Infinity, -Infinity, and -0 unused? I recall never using them in C.)
|
If that's the case, isn't it true that the IEEE 754's representations of [integers and some special values] were all unused, as C and C++ didn't have the capability of referencing them?
This notion appears as if it might stem from the fact that JavaScript uses the IEEE-754 binary64 format for all numbers and performs (or at least defines) bitwise operations by converting the binary64 format to an integer format for the actual operation. (For example, a bitwise AND in JavaScript is defined, via the ECMAScript specification, as the AND of the bits obtained by converting the operands to a 32-bit signed integer.)
C and C++ do not use this model. Floating-point and integer types are separate, and values are not kept in a common container. C and C++ evaluate expressions based on the types of the operands and do so differently for integer and floating-point operations. If you have some variable x with a floating-point value, it has been declared as a floating-point type, and it behaves that way. If some variable y has been declared with an integer type, it behaves as an integer type.
C and C++ do not specify that IEEE 754 is used, except that C has an optional annex that specifies the equivalent of IEEE 754 (IEC 60559), and C and C++ implementations may choose to use IEEE-754 formats and to conform to it. The IEEE-754 binary64 format is overwhelmingly used for double by C and C++ implementations, although many do not fully conform to IEEE-754 in their implementation.
In the binary64 format, the encoding has a sign bit S, an 11-bit "exponent code" E, and a 52-bit "significand code" F (F for "fraction," since S for significand is already taken by the sign bit). The value represented is:
If E is 2047 and F is not zero, the value represented is NaN. The bits of F may be used to convey supplemental information, and S remains an isolated sign bit.
If E is 2047 and F is zero, the value represented is +∞ or −∞ according to whether S is 0 or 1.
If E is neither 0 nor 2047, the value represented is (−1)^S · (1 + F/2^52) · 2^(E−1023).
If E is zero, the value represented is (−1)^S · (0 + F/2^52) · 2^(1−1023). In particular, when S is 1 and F is 0, the value is said to be −0, which is equal to but distinguished from +0.
These representations include all the integers from −2^53 to +2^53 (and more), both infinities, both zeros, and NaN.
If a double has some integer value, say 123, then it simply has that integer value. It does not become an int and is not treated as an integer type by C or C++.
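A small sketch illustrating the exact-integer range of binary64 (the specific values are standard properties of the format, not taken from the answer above):
#include <cmath>
#include <cstdio>

int main()
{
    double d = 9007199254740992.0;                    // 2^53, exactly representable
    std::printf("%.0f\n", d);                         // 9007199254740992
    std::printf("%.0f\n", d + 1.0);                   // still 9007199254740992: 2^53 + 1 has no binary64 encoding
    std::printf("%.0f\n", std::nextafter(d, 1e300));  // 9007199254740994: the next representable value
}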
But from (−2^53 − 1) to (2^53 − 1), that's a lot of numbers unused…
There are no encodings unused in the binary64 format, except that one might consider the numerous NaN encodings wasted. Indeed many implementations do waste them by making them inaccessible or hard to access by programs. However, the IEEE-754 standard leaves them available for whatever purposes users may wish to put them to, and there are people who use them for debugging information, such as recording the program counter where a NaN was created.
|
73,340,921
| 73,340,979
|
How to visualize layout of C++ struct/class
|
What is the best way to visualize the memory layout of a C++ class/struct, compiled by GCC?
I added the GCC switch -fdump-lang-class to my C++ compile options, but it didn't output anything to stdout, nor did I notice any .class files created.
I just want to see the size/offsets of class/data members.
|
You can use pahole. It is a swiss army tool for this kind of things.
https://manpages.ubuntu.com/manpages/impish/man1/pahole.1.html
For example, say you have this test.cpp file
struct A {
int i;
char c;
double x;
};
void doit( A& a ) {
a.i = 1;
a.c = 0;
a.x = 1.0;
}
Let's compile it into a shared library
$ g++ test.cpp -shared -ggdb -o libtest.so
$ ls -l libtest.so
-rwxrwxr-x 1 awesome awesome 16872 Aug 12 20:15 libtest.so
Then run pahole on the binary (with debug information)
$ pahole libtest.so
struct A {
int i; /* 0 4 */
char c; /* 4 1 */
/* XXX 3 bytes hole, try to pack */
double x; /* 8 8 */
/* size: 16, cachelines: 1, members: 3 */
/* sum members: 13, holes: 1, sum holes: 3 */
/* last cacheline: 16 bytes */
};
You can also provide a class name with the -C <classname> argument
$ pahole -C A libtest.so
struct A {
int i; /* 0 4 */
char c; /* 4 1 */
/* XXX 3 bytes hole, try to pack */
double x; /* 8 8 */
/* size: 16, cachelines: 1, members: 3 */
/* sum members: 13, holes: 1, sum holes: 3 */
/* last cacheline: 16 bytes */
};
|
73,340,981
| 73,341,008
|
Smart pointers still refers to raw pointer even though reset is applied
|
Please find the code attached which I took from https://www.geeksforgeeks.org/auto_ptr-unique_ptr-shared_ptr-weak_ptr-2/ for testing the smart pointers.
// C++ program to demonstrate shared_ptr
#include <iostream>
#include <memory>
class A {
public:
void show()
{
std::cout << "A::show()" << std::endl;
}
};
int main()
{
std::shared_ptr<A> p1(new A);
std::cout << p1.get() << std::endl;
p1->show();
std::shared_ptr<A> p2(p1);
p2->show();
std::cout << p1.get() << std::endl;
std::cout << p2.get() << std::endl;
// Returns the number of shared_ptr objects
// referring to the same managed object.
std::cout << p1.use_count() << std::endl;
std::cout << p2.use_count() << std::endl;
// Relinquishes ownership of p1 on the object
// and pointer becomes NULL
p1.reset();
std::cout << p1.get() << std::endl;
std::cout << p2.use_count() << std::endl;
std::cout << p2.get() << std::endl;
p1->show();
p2->show();
std::cout << p1.get() << std::endl;
std::cout << p2.use_count() << std::endl;
std::cout << p2.get() << std::endl;
return 0;
}
p1 is reset and no pointer is assigned to it (found as per p1.get()). After that, when I call the p1->show() function, it shows the output A::show(). How is it possible? Is it the same with raw pointers as well?
output:
0x24dc5ef1790
A::show()
A::show()
0x24dc5ef1790
0x24dc5ef1790
2
2
0
1
0x24dc5ef1790
A::show()
A::show()
0
1
0x24dc5ef1790
|
It only appears to work because show() does not access any data member of the object; calling a member function through a null pointer is undefined behavior, and it would typically segfault as soon as a member were accessed. A sanitizer catches this very easily:
$ g++ -ggdb -O0 -fsanitize=undefined,address shared.cpp -o shared
$ ./shared
0x602000000010
A::show()
A::show()
0x602000000010
0x602000000010
2
2
0
1
0x602000000010
shared.cpp:33:13: runtime error: member call on null pointer of type 'struct element_type'
A::show()
A::show()
0
1
0x602000000010
Change the code to add one member variable to print like
class A {
public:
void show()
{
std::cout << "A::show() " << value << std::endl;
}
int value;
};
And it segfaults
$ g++ -ggdb -O0 shared.cpp -o shared
$ ./shared
0x560a3dde4eb0
A::show() 0
A::show() 0
0x560a3dde4eb0
0x560a3dde4eb0
2
2
0
1
0x560a3dde4eb0
Segmentation fault (core dumped)
|
73,341,125
| 73,341,165
|
Disable GCC narrowing conversion errors
|
I have code from over 20 years in C/C++ and one technique used to handle variable data sizes was to let automatic type conversion handle it.
For example:
#define MY_STATUS_UNDEFINED (-1)
Then if it was compared/used against a int64_t it was auto expanded to -1LL, for uint64_t to 0xFFFFFFFFFFFFFFFF, for uint32_t to 0xFFFFFFFF, int16_t to -1, uint16_t to 0xFFFF, etc..
However, now I'm getting errors with the newer gcc versions that complain about narrowing conversion. In this case it was a switch statement with a UINT variable.
switch (myuint) {
case MY_STATUS_UNDEFINED:
break;
}
What is the recommended way to get it to not error out?
What is the easiest way to get it to not error out?
What is the cleanest way so it doesn't error out and doesn't give any warning message?
Essentially, I want it to do the auto type conversion properly but no warning or errors.
|
Using the following brief example:
#include <cstdint>
#include <iostream>
#define MY_STATUS_UNDEFINED (-1)
void bar()
{
std::cout << "It works\n";
}
void foo(uint32_t n)
{
switch (n) {
case MY_STATUS_UNDEFINED:
bar();
break;
}
}
int main()
{
foo(0xFFFFFFFF);
return 0;
}
You are seeing the following error from gcc:
error: narrowing conversion of ‘-1’ from ‘int’ to ‘unsigned int’ [-Wnarrowing]
Now, pay careful attention to the error message. gcc is telling you the exact option to shut this off. See the [-Wnarrowing] annotation? That's the compilation flag that's responsible for producing this error message.
To turn it off, stick a "no-" in front of it:
-Wno-narrowing
All gcc diagnostics use this convention. Now the shown code will compile, run, and produce the expected result:
It works
Add -Wno-narrowing to your global compilation options.
You should really consider this to be only a temporary band-aid solution. These compilation errors are exactly what you want. They're telling you about real or potential problems. -Wall -Werror -Wsuggest-override -Wreturn-type are my favorite compilation options.
|
73,341,181
| 73,341,214
|
C++ Vector of Objects, are they all named temp?
|
New to C++ OOP, I recently learned about classes and objects. I created a straightforward class and menu-driven program that adds a temp object to a vector of movies. I have a quick question that I can't quite understand.
Am I just pushing multiple "temp" objects into the vector?
In my head I'm visualizing this as vector my_movies = {temp, temp, temp}; and continuously adding 'temp' objects until the user is done. Is this the right way to picture it?
#include <iostream>
#include <vector>
using namespace std;
class Movie
{
private:
string name;
public:
string get_name() { return name; }
void set_name(string n) { name = n; }
};
void menu() {
cout << "1. Add movie" << endl;
cout << "2. Show Movies" << endl;
cout << "3. Quit" << endl;
}
int getChoice(int &choice) {
cout << "Enter you choice: ";
cin >> choice;
return choice;
}
int main() {
vector<Movie> my_movies;
int choice = 0;
string name;
do {
menu();
getChoice(choice);
switch (choice) {
case 1: {
Movie temp;
cout << "Set user name: ";
cin >> name;
temp.set_name(name);
my_movies.push_back(temp);
break;
}
case 2: {
for (auto &mv : my_movies)
cout << mv.get_name() << endl;
break;
}
}
} while (choice != 3);
return 0;
}
|
In your case, when you call push_back it will copy your "temp" object, which is a local object on the stack. It will be copied into a new object which is stored on the heap, held by the vector object. The vector stores these in an array internally (for the default vector with the default allocator, etc.).
It's also possible to "move" the object (in C++11 and later), if you understand the difference, by doing push_back(std::move(temp)), which generally gives better performance. In your case it would avoid copying the string member "name" and move it instead, avoiding a new allocation for the string inside the Movie in the vector.
See here for more details on push_back
Appends the given element value to the end of the container.
The new element is initialized as a copy of value.
https://en.cppreference.com/w/cpp/container/vector/push_back
If you are just talking about the name of the movie, it will be whatever is entered from cin. Objects don't have names themselves. The local variable name "temp" is just what you see when you write the code and is just used to tell the compiler which object is being used - the object itself doesn't have a name from the compiler's perspective.
|
73,341,560
| 73,341,770
|
For C/C++, when people say code is insecure, does it mean the application will crash, or it can be abused to launch cyber attack?
|
I have seen many instances where people say code is "insecure".
Accessing an array beyond bound is "insecure".
Malloc without free is insecure.
Dangling pointer is "insecure".
No bound checking user input is "insecure".
In the above examples, I understand the fourth instance: under a specific context, such as if you are writing your code to check user input against a password database, your code can be abused to cause a buffer overflow and allow a fraudulent user to authenticate. However, I don't understand how your code can be abused in the other cases to endanger the user's computer.
Can I know when people say "insecure", is there any chance they really mean either your program can be abused for an attacker's gain, or your program can crash?
|
Invocation of undefined behavior is always insecure. The first item (array access out of bounds) is automatically in that category. The fourth is just an example of the first, conditional on user interaction (those pesky users). I.e. the potential to overreach an input buffer has the potential to invoke undefined behavior, and is therefore insecure.
That leaves the middle two.
Failing to free dynamic memory will eventually lead to operational failure, as the OS will (usually) terminate such a program, under its rules, not yours. When and how this happens isn't the concern; that it can happen at all is a big concern. This is an ingredient of "crappy code", but moreover has the potential for a security issue. Whatever conditions are required to repeat the leak need only be done with extreme prejudice until such time as the OS tears down the application, and with that you have a DoS (denial of service) accomplishment.
Dangling pointers are just UB laying in-wait. They are yet-more ingredients in crappy code. Just sitting there, they don't do anything. However, they present an opportunity for such a problem when they are dereferenced. Once that transpires it joins the first and fourth items in the basket of UB, and is therefore automatically "insecure". It can also become problematic when the value of the pointer itself is treated as a state unto its own, though that is highly situational and rarer than the simple dereference workflow.
The short answer is: all UB is automatically insecure. Two of the four items present instantly fit into that basket. A third potentially fits into that basket under the right usage conditions. The fourth (failing to free memory) is a flat-out bug that will eventually lead to process termination outside the purview of you, the code author, but is enhanced if exploitable to promote a DoS conclusion.
The super short answer: Don't write code that invokes UB, and don't write crappy code in-general.
|
73,342,306
| 73,342,375
|
Can the boost::asio timer object be deleted before the corresponding callback is called?
|
Should the timer object exist until the task is completed?
I mean the following:
boost::asio::io_service io;
{
boost::asio::steady_timer timer(io, std::chrono::seconds(5));
timer.async_wait(someCallback);
} // the timer object is deleted here
io.run();
Is this allowed and does it lead to undefined behavior?
|
The destructor simply cancels any pending waits so your callback will be called with the boost::asio::error::operation_aborted error code.
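A small sketch built from the question's snippet, showing the error code the callback receives in that case (the printed messages are made up):
#include <boost/asio.hpp>
#include <chrono>
#include <iostream>

int main()
{
    boost::asio::io_service io;
    {
        boost::asio::steady_timer timer(io, std::chrono::seconds(5));
        timer.async_wait([](const boost::system::error_code& ec) {
            if (ec == boost::asio::error::operation_aborted)
                std::cout << "wait was cancelled by the timer's destructor\n";
            else
                std::cout << "timer expired normally\n";
        });
    } // the timer object is deleted here, cancelling the pending wait
    io.run(); // invokes the callback immediately with operation_aborted
}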
|
73,342,473
| 73,342,598
|
Is there a shorthand method to writing move constructors?
|
I have a class with one strong pointer and a lot of object members. Writing copy and move constructors for an object like this involves a lot of tedious copy/pasting, however...
Is there any way to shorten this, without giving up my beautiful naked pointer? Like if I could perform the default generated move operation, with just one extra instruction to nullify the naked pointer afterwards?
class Example {
public:
Example()
: m_multiplexDevice(getDevice(B737M_mpDevice, true))
{
engageDevice(m_multiplexDevice);
}
Example(Example && other)
: m_multiplexDevice (other.m_multiplexDevice )
, m_pilotStack (std::move(other.m_pilotStack ))
, m_pasStack (std::move(other.m_pasStack ))
, m_pitchAttackLimits (std::move(other.m_pitchAttackLimits))
, m_yawToPitchBalance (std::move(other.m_yawToPitchBalance))
, m_engineBalance (std::move(other.m_engineBalance ))
{
other.m_multiplexDevice = NULL;
}
Example & operator=(Example && other) {
if (this != &other) {
// ignore that this is incorrect (not properly destroying in assignment),
// working with client code that kinda sucks
m_multiplexDevice = other.m_multiplexDevice;
m_pilotStack = std::move(other.m_pilotStack );
m_pasStack = std::move(other.m_pasStack );
m_pitchAttackLimits = std::move(other.m_pitchAttackLimits);
m_yawToPitchBalance = std::move(other.m_yawToPitchBalance );
m_engineBalance = std::move(other.m_engineBalance );
other.m_multiplexDevice = NULL;
}
return *this;
}
Example(const Example & other) =delete;
Example & operator=(const Example & other) =delete;
~Example() {
if (m_multiplexDevice)
disengageDevice(m_multiplexDevice);
delete m_multiplexDevice;
}
private:
char STRONG * m_multiplexDevice;
std::vector<uint32> m_pilotStack;
std::vector<uint32> m_pasStack;
std::vector<uint32> m_pitchAttackLimits;
std::vector<uint32> m_yawToPitchBalance;
std::vector<uint32> m_engineBalance;
// ... etc
};
|
Use a unique_ptr with a custom deleter.
#include <memory>
#include <vector>
#include <cstdint>
#define STRONG /* ??? */
void disengageDevice(char STRONG*);
#define B737M_mpDevice 1
char STRONG *getDevice(int, ...);
void engageDevice(char STRONG *);
class Example {
public:
Example()
: m_multiplexDevice(
getDevice(B737M_mpDevice, true),
deconstruct_m_multiplexDevice)
{
engageDevice(m_multiplexDevice.get());
}
static void deconstruct_m_multiplexDevice(char STRONG *p) {
if (p) {
disengageDevice(p);
delete p;
}
}
private:
std::unique_ptr<
char STRONG,
decltype(&deconstruct_m_multiplexDevice)
> m_multiplexDevice;
std::vector<uint32_t> m_pilotStack;
std::vector<uint32_t> m_pasStack;
std::vector<uint32_t> m_pitchAttackLimits;
std::vector<uint32_t> m_yawToPitchBalance;
std::vector<uint32_t> m_engineBalance;
// ... etc
};
|
73,342,779
| 73,361,229
|
How alternative deductions can yield more than one possible "deduced A"?
|
Per [temp.deduct.call]/5
These alternatives ([temp.deduct.call]/4) are considered only
if type deduction would otherwise fail. If they yield more than one
possible deduced A, the type deduction fails. [ Note: If a
template-parameter is not used in any of the function parameters of a
function template, or is used only in a non-deduced context, its
corresponding template-argument cannot be deduced from a function call
and the template-argument must be explicitly specified. — end note ]
My question:
How can these alternative deductions yield more than one possible "deduced A"?
Please, support the answer with an example that triggers this case.
|
template<typename>
struct B {};
struct D : B<int>, B<double> {};
template<typename T>
void f(B<T>);
int main()
{
f(D{});
}
[temp.deduct.call]/(4.3):
If P is a class and P has the form simple-template-id, then the transformed A can be a derived class D of the deduced A.
applies here, and 2 deduced As are possible: B<int> or B<double>. So the type deduction fails.
Bonus example: interaction with overload resolution:
template<typename>
struct A {};
template<typename>
struct B {};
struct D1 : A<int>, B<int> {};
struct D2 : A<int>, B<int>, B<double> {};
template<typename T>
void f(A<T>); // 1
template<typename T>
void f(B<T>); // 2
int main()
{
f(D1{}); // error, ambiguous between 1 and 2
f(D2{}); // calls specialization of 1, because deduction for 2 fails
}
|
73,342,924
| 73,343,003
|
reference type as type traits / concept argument
|
In the specification-mandated implementation of the concept std::uniform_random_bit_generator, it is required that invoking operator() on an instance of type G satisfying this concept should return the same type as G::min() and G::max().
Why is std::same_as<std::invoke_result_t<G&>> used instead of
std::same_as<std::invoke_result_t<G>>? What is the difference?
|
std::invocable<G&> checks whether an lvalue of type G can be invoked (without arguments). std::invocable<G> checks whether a rvalue of type G can be invoked (without arguments).
std::invoke_result_t is equivalently the corresponding return type.
In other words this guarantees that the generator can be declared as a variable and then invoked, e.g.
G g{/*args*/};
auto res = g();
But it does not guarantee that a temporary of type G can be invoked directly, e.g.
auto res = G{/*args*/}();
This is not how a random number generator is commonly used.
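For illustration, this is the usage pattern the G& form is written for - the distribution invokes the generator as an lvalue:
#include <iostream>
#include <random>

int main()
{
    std::mt19937 gen(42);                             // declared as a variable...
    std::uniform_int_distribution<int> dist(1, 6);
    std::cout << dist(gen) << '\n';                   // ...and invoked as an lvalue, effectively gen()
}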
|
73,342,965
| 73,343,391
|
Concept and templates no longer run with g++-11
|
The following code :
#include <cstdio>
#include <string>
#include <concepts>
template<typename T, typename KEY, typename JSON_VALUE, typename...KEYS>
concept json_concept = requires(T t, int index, std::string& json_body, KEY key, JSON_VALUE value, KEYS... keys)
{
{ t.template get_value<T>(keys...) } -> std::same_as<T>;
{ t.template get_value<T>(key) } -> std::same_as<T>;
{ t.template get_value<T>(index, keys...) } -> std::same_as<T>;
{ t.set_cache(keys...) } -> std::same_as<void>;
{ t.get_list_size() } -> std::same_as<int>;
{ t.init_cache() } -> std::same_as<void>;
{ t.load(json_body) } -> std::same_as<void>;
{ t.contains(key) } -> std::same_as<bool>;
{ t.set_root_value(value) } -> std::same_as<void>;
{ t.release() } -> std::same_as<JSON_VALUE>;
};
class JsonUtil {
public:
template<json_concept JSON_OPERATOR>
JsonUtil(std::string body, JSON_OPERATOR& json_impl) {
json_impl.load(body);
}
template<typename T, json_concept JSON_OPERATOR, typename KEY>
T get_value(JSON_OPERATOR& json_impl, KEY&& key) {
return json_impl.template get_value<T>(std::forward<KEY>(key));
}
template<typename T, json_concept JSON_OPERATOR, typename... KEYS>
T get_value(JSON_OPERATOR& json_impl, KEYS&& ... keys) {
return json_impl.template get_value<T>(std::forward<KEYS>(keys)...);
}
template<json_concept JSON_OPERATOR, typename... KEYS>
void set_cache(JSON_OPERATOR& json_impl, KEYS&& ... keys) {
json_impl.set_cache(std::forward<KEYS>(keys)...);
}
template<json_concept JSON_OPERATOR>
void init_cache(JSON_OPERATOR& json_impl) {
json_impl.init_cache();
}
};
class IJson {
public:
template<typename T, typename KEY>
T get_value(KEY&& key) {
return T();
}
void load(std::string body) { };
};
int main()
{
IJson dd = IJson{};
JsonUtil jsoncpp(std::string("dummy"), dd);
}
This code used to run with clang++-10. When I started using g++-11 it fails to compile with following error:
conc.cpp:26:14: error: wrong number of template arguments (1, should be at least 3)
26 | template<json_concept JSON_OPERATOR>
| ^~~~~~~~~~~~~~~~~~~
conc.cpp:4:9: note: provided for ‘template<class T, class ... KEYS, class KEY, class JSON_VALUE> concept json_concept’
4 | concept json_concept = requires(T t, int index, std::string& json_body, KEYS... keys, KEY key, JSON_VALUE value)
What is the problem and how do I solve it? Is the solution really to provide all template types each time I use the concept, even though not all template arguments would be used?
The g++ version is 11.1.0
|
You are not using concepts with template parameters correctly. Clang is totally wrong in accepting your code. Its support for concepts doesn't seem to be mature enough.
If you have:
template <typename A, typename B, typename C>
concept foo = ...
then the correct usage of foo is with two template type arguments:
template <foo<int, char> Z> class bar ...
template <typename X, foo<X,X> Z> class baz...
Otherwise the concept has no idea what B and C are.
See this passage in the draft standard.
For this reason you cannot have a template parameter pack in the middle of the concept. Move it to the end:
template<typename T, typename KEY, typename JSON_VALUE, typename...KEYS>
concept json_concept = ...
Then something like this should work:
template<typename K, typename V, typename ... Ks,
         json_concept<K, V, Ks...> JSON_OPERATOR>
JsonUtil(std::string body, JSON_OPERATOR& json_impl) {
json_impl.load(body);
}
But this is just the beginning of the problem.
You write
t.template get_value<T>(key)
but this makes no sense. T is the name of the type that satisfies json_concept, in your example IJson. So you are passing IJson as the template parameter of IJson::get_value<IJson>. This is not how get_value is supposed to be used. You should be using something like
value = t.template get_value<JSON_VALUE>(key);
It is unclear what the other two overloads of get_value correspond to in IJson. IJson cannot satisfy json_concepts with all the requirements, whatever syntax you use to express them. It simply doesn't have all the required overloads.
It is also unclear how one can have such a parameterized concept where only part of the parameters participate in some requirements. Consider this:
template<json_concept JSON_OPERATOR<???>>
JsonUtil(std::string body, JSON_OPERATOR& json_impl) {
json_impl.load(body);
}
The concept requires you to provide key, keys, and value, but none of those is used in load and there is no good way to provide them in this context. You could use some defaults but this is sweeping the dirt under the rug.
|
73,343,345
| 73,347,449
|
What does "double + 1e-6" mean?
|
The result of this cpp is 72.740, but the answer should be like 72.741
mx = 72.74050000;
printf("%.3lf \n", mx);
So I found the solution on website, and it told me to add "+1e-7" and it works
mx = 72.74050000;
printf("%.3lf \n", mx + 1e-7);
but I dont know the reason in this method, can anyone explain how it works?
And I also try to print it but nothing special happens..., and it turn out to be 72.7405
mx = 72.74050003;
cout << mx + 1e-10;
|
To start, your question contains an incorrect assumption. You put 72.7405 (let's assume it's precise) on input and expect 72.741 on output. So, you assume that rounding in printf will select the higher of the two possible candidates. Why?
Well, this may indeed be your task, according to some rules (e.g. fiscal norms for rounding in bills, in taxation, etc.) - that is common. But when you use the de facto standard floating point of C/C++ on x86, ARM, etc., you should take the following specifics into account:
It is binary, not decimal. As result, all values you showed in your example are kept with some error.
Standard library tends to use standard rounding, unless forced to use another method.
The second point means that the default rounding in C floating point is round-to-nearest-ties-to-even (in short: half-to-even). With this rounding, 72.7405 is rounded to 72.740, not 72.741 (but 72.7415 is rounded to 72.742). To get 72.7405 -> 72.741 you would need to install another rounding mode: round-to-nearest-ties-away-from-zero (in short: half-away). This mode is required by IEEE 754 for decimal arithmetic, so if you used true decimal arithmetic it would suffice.
(If we don't allow negative numbers, the same mode might be treated as half-up. But I assume negative numbers are not permitted in financial accounting and similar contexts.)
But the first point is more important here: the inexactness of representing such values can be amplified by further operations. Here is your situation and the proposed fix, extended with more cases:
Code:
#include <stdio.h>
int main()
{
float mx;
mx = 72.74050000;
printf("%.6lf\n", mx);
printf("%.3lf\n", mx + 1e-7);
mx *= 3;
printf("%.6lf\n", mx);
printf("%.3lf\n", mx + 1e-7);
}
Result (Ubuntu 20.04/x86-64):
72.740501
72.741
218.221497
218.221
So you see that merely multiplying your example number by 3 produces a situation where the compensation summand 1e-7 is no longer enough to force rounding half-up, and 218.2215 (the "exact" 72.7405*3) is rounded to 218.221 instead of the desired 218.222. Oops, "Directed by Robert B. Weide"...
How could the situation be fixed? Well, you could start with a cruder but stronger approach. If you need rounding to 3 decimal digits, but the inputs appear to have 4 digits, add 0.00005 (half of the least significant digit of your results) instead of this far-too-small 1e-7. This will reliably push the halfway values up.
But all this works only if the result before rounding has an error strictly less than 0.00005. If you have cumbersome calculations (e.g. summing hundreds of values), it's easy to accumulate an error larger than this threshold. To avoid such an error, you would have to round intermediate results often (ideally, every value).
And, the last conclusion leads us to the final question: if we need to round each intermediate result, why not just migrate to calculations in integers? You have to keep intermediate results up to 4 decimal digits? Scale by 10000 and do all calculations in integers. This will also aid in avoiding silent(*) accuracy loss with higher exponents.
(*) Well, IEEE754 requires raising "inexact" flag, but, with binary floating, nearly any operation with decimal fractions will raise it, so, useful signal will drown in sea of noise.
The final conclusion is the proper answer, not to your literal question but to the underlying task: use a fixed-point approach. The trick with +1e-7, as shown above, fails too easily. Don't use it - ever. There are plenty of proper fixed-point arithmetic libraries; just pick one and use it.
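A minimal fixed-point sketch of that last suggestion, assuming 4 decimal digits of input precision, rounding half away from zero to 3 digits, and non-negative values; the scale factor 10000 and the sample number are only for illustration:
#include <cstdio>
int main()
{
    long long mx = 727405;              // 72.7405 stored as 72.7405 * 10000
    long long sum = mx * 3;             // exact: 2182215, i.e. 218.2215
    // round half away from zero to 3 decimals: add half of the dropped digit, then divide
    long long rounded = (sum + 5) / 10; // now scaled by 1000 -> 218222
    std::printf("%lld.%03lld\n", rounded / 1000, rounded % 1000);   // prints 218.222
}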
(It's also interesting that %.6f prints 72.740501, yet 218.221497/3 == 72.740499. It suggests that single precision (float in C) is already too inaccurate here. Using double would merely postpone the issue, masking it and making the approach look correct when it isn't.)
|
73,343,501
| 73,343,912
|
Replacing even digits in string with given string
|
I know how to replace all occurrences of a character with another character in string (How to replace all occurrences of a character in string?)
But what if I want to replace all even digits in the string with a given string? I am confused between replace, replace_if and the member replace/find functions of basic_string, because their signatures require old_val and new_val to be the same type. But old_val is a char and new_val is a string. Is there an effective way to do this without multiple loops?
e.g. if the input string is
"asjkdn3vhsjdvcn2asjnbd2vd"
and the replacement text is
"whatever"
, the result should be
"asjkdn3vhsjdvcnwhateverasjnbdwhatevervd"
|
You can use std::string::replace() to replace a character with a string. A working example is below:
#include <string>
#include <algorithm>
#include <iostream>
#include <string_view>
#include <cctype>   // std::isdigit
void replace_even_with_string(std::string &inout)
{
auto is_even = [](char ch)
{
return std::isdigit(static_cast<unsigned char>(ch)) && ((ch - '0') % 2) == 0;
};
std::string_view replacement_str = "whatever";
    for (std::string::size_type pos{};
         (pos = (std::find_if(inout.begin() + pos, inout.end(), is_even) - inout.begin())) < inout.length();
         pos += replacement_str.length())   // skip past the inserted text
    {
        inout.replace(pos, 1, replacement_str.data(), replacement_str.size());
    }
}
int main()
{
std::string test = "asjkdn3vhsjdvcn2asjnbd2vd";
std::cout << test << std::endl;
replace_even_with_string(test);
std::cout << test << std::endl;
}
|
73,343,631
| 73,343,983
|
std::map::reverse_iterator doesn't work with C++20 when used with incomplete type
|
I noticed that the use of std::map::reverse_iterator in the below example doesn't work with C++20 but works with C++17 in all compilers.
Demo
Demo MSVC
#include <map>
class C; //incomplete type
class Something
{
//THIS WORKS IN C++17 as well as C++20 in all compilers
std::map<int, C>::iterator obj1;
//THIS DOESN'T WORK in C++20 in all compilers but works in C++17 in all compilers
std::map<int, C>::reverse_iterator obj2;
};
int main()
{
Something s;
return 0;
}
My question is: what changed in C++20 such that the use of std::map::reverse_iterator stopped working in all C++20 compilers?
|
It works by chance pre-C++20; per the standard it's UB to use incomplete types in std containers (with the exception of vector, list and forward_list since C++17). See here. Thus it may work and may stop working at any time; basically anything can happen, and it should not be relied on.
If being able to store an incomplete type is a hard requirement for your use case, you may want to check boost::container.
|
73,343,857
| 73,347,357
|
Can't write to the video memory from a function with c++ (OS dev)
|
I want to make a simple function that prints a char to the screen:
unsigned char *_videoMEM = (unsigned char*)0xb8000;
int c_pos = 0;
void printf(char c){
//var 1
_videoMEM[c_pos++] = (char)c;
_videoMEM[c_pos++] = 0x0f;
//var 2
*((char*)0xb8000 + c_pos++) = c;
*((char*)0xb8000 + c_pos++) = 0x0f;
//none of the above work
}
the function executes (I increment a variable and print it in main()) but it didn't write to video memory. When I try to do the same thing in the main() function it works, and I don't know why or how:
FULL CODE:
unsigned char *_videoMEM = (unsigned char*)0xb8000;
int c_pos = 0;
char z = '0';
void printf(char c){
_videoMEM[c_pos++] = (char)c;
_videoMEM[c_pos++] = 0x0f;
}
//z+5
extern "C" void start(){
printf(z++);
printf(z++);
printf(z++);
_videoMEM[c_pos++] = z++;
_videoMEM[c_pos++] = 0x0f;
}
It should print to the screen 0123 but it prints 3
|
So I found the problem: I accidentally misplaced 0x1010010 (in the GDT descriptor) and that needed to be binary. BUT I found another problem: somehow unsigned char *_videoMEM = (unsigned char*)0xb8000; does not initialize the pointer, so the fix would be like this:
char *BASE = 0;
int pos = 0;
void print(int color, const char *str){
while(*str != '\0'){
BASE[pos++] = *str;
str++;
BASE[pos++] = color;
}
return;
}
//z+5
extern "C" void start(){
BASE = (char*)0xb8000;
print(0x0F, "Hello World!\0");
return;
}
|
73,343,887
| 73,344,715
|
Reading byte from Output Register of I/O Expander via I2C
|
The below Arduino code snippet shows a function that should return a byte read from the Output Register of an I/O Expander TCA9535 via I2C. I based my code on the TCA9535 datasheet, Figure 7-8, seen here:
https://i.stack.imgur.com/GgNAQ.png.
However, calling readOutputRegister() always returns 255.
uint8_t readOutputRegister(){
Wire.beginTransmission(0x20); // Set Write mode (RW = 0)
Wire.write(0x02); // Read-write byte Output Port 0
// Repeated START
Wire.beginTransmission(0x21); // Set Read mode (RW = 1)
uint8_t res = Wire.read();
// Stop condition
Wire.endTransmission();
return res;
}
Here is the link for the datasheet of the TCA9535 I/O Expander I am using: https://www.ti.com/lit/ds/symlink/tca9535.pdf
|
First, from Wire.endTransmission() reference page.
This function ends a transmission to a peripheral device that was begun by beginTransmission() and transmits the bytes that were queued by write().
With the Wire library, many people think Wire.write() sends the data; it actually does not. It only puts the data in a queue. The data only gets sent when you call Wire.endTransmission().
So you need to call Wire.endTransmission() after Wire.write(0x02).
Secondly, to read the data you need to call Wire.requestFrom(), and also check whether data has been received with Wire.available() before reading it.
Wire.beginTransmission(0x20); // Set Write mode (RW = 0)
Wire.write(0x02); // Read-write byte Output Port 0
if (Wire.endTransmission() != 0) {
// error
}
else {
Wire.requestFrom(0x20, 1); // request 1 byte from the i2c address
if (Wire.available()) {
uint8_t c = Wire.read();
}
}
|
73,344,127
| 73,373,790
|
Division using right shift operator gives TLE while normal division works fine
|
I'm trying to submit this leetcode problem Pow(x,n) using iterative approach.
double poww(double x, int n)
{
if (n == 0)
return 1;
double ans = 1;
double temp = x;
while (n)
{
if (n & 1)
{
ans *= temp;
}
temp *= temp;
n = (n >> 1);
}
return ans;
}
double myPow(double x, int n)
{
if (n == 0)
return 1.0;
if (n < 0)
return 1 / poww(x, abs(n));
return poww(x, n);
}
This code is giving time limit exceed error but when I change the right shift operator >> with normal division operator, the code works just fine.
Working code with division operator
double poww(double x, int n)
{
if (n == 0)
return 1;
double ans = 1;
double temp = x;
while (n)
{
if (n & 1)
{
ans *= temp;
}
temp *= temp;
n /= 2;
}
return ans;
}
double myPow(double x, int n)
{
if (n == 0)
return 1.0;
if (n < 0)
return 1 / poww(x, abs(n));
return poww(x, n);
}
I don't know what I'm missing here.
|
The problem is the given input range for "n".
Let us look at the constraints again:
The problem is the smallest number for n, which is -2^31 and that is equal to -2147483648.
But the valid range for an integer is -2^31 ... 2^31-1 which is -2147483648 ... 2147483647.
Then you try to use the abs function on -2147483648. But since there is no positive equivalent for that value in the int domain (on your machine), the number stays negative. So n is negative inside your "poww" function, and that is where the two loops differ: n /= 2 truncates towards zero and eventually terminates (with a wrong result), but n >> 1 is an arithmetic shift that converges to -1 and then stays at -1 forever, so the loop never ends - hence the time limit exceeded.
Presumably on your machine long is the same size as int (a 4 byte variable). If you change your interface to use a long long variable, it will work. The result may be "inf" for big exponents, or 0.
Please check the below code:
#include <iostream>
#include <cmath>
long double poww(double x, long long n)
{
if (n == 0)
return 1;
long double ans = 1;
long double temp = x;
while (n)
{
if (n & 1)
{
ans *= temp;
}
temp *= temp;
n = (n >> 1);
}
return ans;
}
long double myPow(double x, long long n)
{
if (n == 0)
return 1.0;
if (n < 0)
return 1 / poww(x, std::llabs(n));
return poww(x, n);
}
int main() {
std::cout << myPow(1, -2147483648) << '\n';
}
|
73,344,188
| 73,344,630
|
Android oboe glitch/noise/distortion
|
I'm trying to use oboe in my audio/video communication app, and I'm trying the onAudioReady round-trip callback as in the oboe guide: https://github.com/google/oboe/blob/main/docs/FullGuide.md
Now I'm frustrating:
If the read writes directly into audioData, the sound quality is perfect, i.e.:
auto result = recordingStream->read(audioData, numFrames, 0);
But if I add a buffer between them, there is significant noise/glitch which is very undesirable:
auto result = recordingStream->read(buffer, numFrames, 0);
std::copy(buffer, buffer + numFrames, static_cast<int16_t *>(audioData));
By inspecting the log, this buffering is done within 1 ms, so I assumed it wouldn't hurt?
Both 1 and 2 also use PCM_I16 audio format, buffer is int16_t * with size of numFrames.
Hopefully someone can point out what is causing this? Sorry, I lack audio processing and C++ knowledge.
|
I figured it out: the stream is stereo, so there are 2 samples per frame and the copy has to cover numFrames * 2 samples, i.e.:
auto result = recordingStream->read(buffer, numFrames, 0);
std::copy(buffer, buffer + numFrames * 2, static_cast<int16_t *>(audioData));
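More generally, the number of samples to copy is numFrames multiplied by the channel count rather than a hard-coded 2. A sketch, assuming the stream object exposes getChannelCount() (it is part of oboe::AudioStreamBase in current oboe versions):
int32_t channelCount = recordingStream->getChannelCount();
auto result = recordingStream->read(buffer, numFrames, 0);
std::copy(buffer, buffer + numFrames * channelCount,
          static_cast<int16_t *>(audioData));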
|
73,345,030
| 73,345,267
|
Copy-elision in direct initialization from braced-init-list
|
In the following program the object A a is directly initialized from braced-init-list {A{}}:
#include <iostream>
struct A {
int v = 0;
A() {}
A(const A &) : v(1) {}
};
int main() {
A a({A{}});
std::cout << a.v;
}
MSVC and GCC print 0 here meaning that copy-elision takes place. And Clang prints 1 executing the copy-constructor.
Online demo: https://gcc.godbolt.org/z/1vqvf148z
Which compiler is right here?
|
Which compiler is right here?
I think that clang is right in using the copy constructor and printing 1 for the reason(s) explained below.
First note that A a({A{}}); is direct-initialization as can be seen from dcl.init#16.1:
The initialization that occurs:
16.1) for an initializer that is a parenthesized expression-list or a braced-init-list,
16.2) for a new-initializer,
16.3) in a static_cast expression ([expr.static.cast]),
Now, dcl.init#17.6 is applicable here:
17.6) Otherwise, if the destination type is a (possibly cv-qualified) class type:
17.6.1) If the initializer expression is a prvalue and the cv-unqualified version of the source type is the same class as the class of the destination, the initializer expression is used to initialize the destination object.
[ Example: T x = T(T(T())); calls the T default constructor to initialize x.
— end example
]
17.6.2) Otherwise, if the initialization is direct-initialization, or if it is copy-initialization where the cv-unqualified version of the source type is the same class as, or a derived class of, the class of the destination, constructors are considered.
The applicable constructors are enumerated ([over.match.ctor]), and the best one is chosen through overload resolution ([over.match]).
Then:
17.6.2.1) If overload resolution is successful, the selected constructor is called to initialize the object, with the initializer expression or expression-list as its argument(s).
(emphasis mine)
This means that the copy constructor (which is the selected constructor here) will be called to initialize the object named a, with the expression-list as its argument. Since your copy ctor's member initializer list sets v to 1, clang's output of 1 is correct.
|
73,345,258
| 73,353,134
|
Why is identifier "MutexType" is undefined?
|
Attempting to use the boost library to create a system wide mutex from the docs
#include <boost/interprocess/sync/interprocess_mutex.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>
#include <boost/interprocess/sync/named_mutex.hpp>
using namespace boost::interprocess;
MutexType mtx;
int main()
{
return 0;
}
I am running Visual Studio Code and compiling my code in an msys2-MINGW64 environment like so: g++ mutexes.cpp -lboost_system. But this does not seem to work, and I am getting this error in the bash console:
mutexes.cpp:8:1: error: 'MutexType' does not name a type
8 | MutexType mtx;
| ^~~~~~~~~
|
The linked documentation specifically means "any mutex type":
//Let's create any mutex type:
MutexType mutex;
It follows it up with a concrete, more elaborate Anonymous mutex example and Named mutex example.
Which types are eligible is documented at scoped_lock:
scoped_lock is meant to carry out the tasks for locking, unlocking, try-locking and timed-locking (recursive or not) for the Mutex. The Mutex need not supply all of this functionality. If the client of scoped_lock does not use functionality which the Mutex does not supply, no harm is done
In practice the minimal useful interface required will be that of the standard library BasicLockable concept although many mutex implementations also model Lockable and Mutex concepts.
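For reference, a minimal sketch with a concrete mutex type substituted for the placeholder; here boost::interprocess::named_mutex, where "my_app_mutex" is an arbitrary name chosen for the example:
#include <boost/interprocess/sync/named_mutex.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>
using namespace boost::interprocess;
int main()
{
    named_mutex mtx(open_or_create, "my_app_mutex");   // system-wide, named
    {
        scoped_lock<named_mutex> lock(mtx);
        // ... critical section shared across processes ...
    }
    named_mutex::remove("my_app_mutex");               // optional cleanup
    return 0;
}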
|
73,345,683
| 73,345,811
|
What are the optimal data structures for implementing a hidden layer neural network with backpropagation in C++?
|
Apologies if this seems like a duplicate post, but I am wondering what the optimal data structures for implementing and storing a simple hidden layer neural network with weights and biases and backpropagation in C++ are.
Off the top of my head I was thinking about the following:
Linked list
Pointer array
These two seem mostly equivalent to me for this purpose.
I also often see people using 3D arrays/vectors to store the weights and biases, but this seems wasteful to me, since you're either limited to a neural network that has the same number of nodes at each layer, or you're storing a lot of zero-entries in your 3D array for node connections that don't exist.
|
One option I see is doing it like this: Have one linear array for the nodes and one for all the edges. A sketch:
struct Node {
std::size_t edgeBegin;
std::size_t edgeEnd;
};
struct Edge {
std::size_t to;
float weight;
};
struct Layer {
std::size_t layerBegin;
std::size_t layerEnd;
};
struct Network {
std::vector<Node> nodes;
std::vector<Edge> edges;
std::array<Layer,3> layers;
};
after populating this structure it might look like this:
nodes: [n0, n1, n2, n3, n4, n5, n6, n7]
layers: [(0, 2), (2, 5), (5, 8)]
-> input layer has two nodes, hidden has three, output layer has three
where each node points to a section in edges, holding the edges of that particular node.
By doing it like this, you have a high chance to be cache-local and you only have to request dynamic memory twice if you set up the initialisation of the network correctly.
This assumes that the network will not change (i.e. no new nodes or edges are created while it is in use).
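A sketch of how such a layout would be traversed, assuming the structs above are in scope and that in and out are activation buffers indexed by global node index (the function name and buffers are illustrative, not part of the answer's design):
#include <vector>
// One layer is a linear pass over contiguous node and edge ranges,
// which is what makes this layout cache-friendly.
void forward_layer(const Network& net, std::size_t layerIdx,
                   const std::vector<float>& in, std::vector<float>& out)
{
    const Layer& layer = net.layers[layerIdx];
    for (std::size_t n = layer.layerBegin; n < layer.layerEnd; ++n) {
        const Node& node = net.nodes[n];
        for (std::size_t e = node.edgeBegin; e < node.edgeEnd; ++e) {
            const Edge& edge = net.edges[e];
            out[edge.to] += in[n] * edge.weight;
        }
    }
}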
|
73,345,926
| 73,347,079
|
How to create a HICON from base64?
|
I am converting a picture from a base64 string into an HICON that could work when registering a new class:
WNDCLASSEX wc{};
wc.hIcon = < here >;
I got the base64_decode() function here: base64.cpp
#include <windows.h> // GDI includes.
#include <objidl.h>
#include <gdiplus.h>
using namespace Gdiplus;
#pragma comment (lib,"Gdiplus.lib")
Gdiplus::GdiplusStartupInput gdiplusStartupInput;
ULONG_PTR gdiplusToken;
Gdiplus::GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL);
std::string base64 = "......";
std::string decodedImage = base64_decode(base64);
DWORD imageSize = decodedImage.length();
HGLOBAL hMem = ::GlobalAlloc(GMEM_MOVEABLE, imageSize);
LPVOID pImage = ::GlobalLock(hMem);
memcpy(pImage, decodedImage.c_str(), imageSize);
IStream* pStream = NULL;
::CreateStreamOnHGlobal(hMem, FALSE, &pStream);
Gdiplus::Image image(pStream);
int wd = image.GetWidth();
int hgt = image.GetHeight();
auto format = image.GetPixelFormat();
Bitmap* bmp = new Bitmap(wd, hgt, format);
auto gg = std::unique_ptr<Graphics>(Graphics::FromImage(bmp));
gg->Clear(Color::Transparent);
gg->DrawImage(image, 0, 0, wd, hgt);
...
I couldn't find a way to get the HICON from the Image, so I was trying to convert the Image to Bitmap.
I'm getting these two errors on the line gg->DrawImage(image, 0, 0, wd, hgt);
C2664 'Gdiplus::Status Gdiplus::Graphics::DrawImage(Gdiplus::Image *,Gdiplus::REAL,Gdiplus::REAL,Gdiplus::REAL,Gdiplus::REAL)': cannot convert argument 1 from 'Gdiplus::Image' to 'Gdiplus::Image *'
E0304 no instance of overloaded function "Gdiplus::Graphics::DrawImage" matches the argument list
|
cannot convert argument 1 from 'Gdiplus::Image' to 'Gdiplus::Image *'
DrawImage() expects a pointer to an Image object, but you are passing it the actual object instead.
Change this statement:
gg->DrawImage(image, 0, 0, wd, hgt);
To this instead:
gg->DrawImage(&image, 0, 0, wd, hgt); // <-- note the added '&' ...
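As for the original goal of filling wc.hIcon: Gdiplus::Bitmap has a GetHICON() member, so once the bitmap has been drawn you could obtain the icon from it. A sketch continuing the code above (error handling and cleanup kept minimal):
HICON hIcon = NULL;
if (bmp->GetHICON(&hIcon) == Gdiplus::Ok)
{
    wc.hIcon = hIcon;   // hand the icon to the window class
}
// Call DestroyIcon(hIcon) when the icon is no longer needed.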
|
73,346,040
| 73,346,651
|
Is there any way to know how much space a function takes (or possibly can take) in CPU cache?
|
I'm just starting to learn about the CPU cache in depth, and out of curiosity I want to learn how to estimate a function's instruction size in the CPU cache.
So far, from searching SO and Google, I have learned that it's not very easy to monitor the L1 cache. But surprisingly I couldn't find any posts answering my question.
If it's not possible, it would at least be good to know when one should worry about filling the L1/L2 caches and when not.
Thanks.
|
Can you measure it?
Yes. Take a look at the output of a disassembler or measure the size increase of the library.
Should you worry about it?
Absolutely not. The executable code is usually tiny. If you're going through it once, even if we're talking GBs it's going to be fast. The usual way to make things slow is loops and recursion, and such functions tend to be focused and small. On most systems the L1 instruction cache (typically tens of KB) plus the several MBs of L2/L3 cache should cover anything interesting code-wise.
The usual source of cache dependent speedups is memory access patterns. If your tiny loop skips all over the memory, it will be a lot slower than a gigantic function that accesses things more or less linearly or is very predictable.
Another source of bad performance tends to be branch prediction. Incorrectly predicting the outcome of a branch causes a stall in the CPU. Get enough of those and the performance will suffer.
Both of those are usually the last drops of performance to be squeezed out of a system. Make sure the code works correctly first, then go and try to find performance improvements, usually starting with algorithm and data structure optimizations around the hottest bits of code (the most executed).
|
73,346,429
| 73,347,923
|
How to get a pointer to the bytes of a uint32_t
|
I'm trying to create a complete uint32_t using vector of uint8_t bytes. It should be filled iteratively. It can happen in any of the following ways:
1 byte and 3 bytes.
2 bytes and 2 bytes.
4 bytes.
3 bytes and 1 byte.
uint32_t L = 0;
uint32_t* LPtr = &L;
std::vector<uint8_t> data1 = {0x1f, 0x23};
memcpy(LPtr, data1.data(), 2);
EXPECT_EQ(0x231f, L);
The above works fine (first two bytes). But the following does not (with two sets of two bytes).
uint32_t L = 0;
uint32_t* LPtr = &L;
std::vector<uint8_t> data1 = {0x1f, 0x23};
std::vector<uint8_t> data2 = {0x3a, 0xee};
memcpy(LPtr, data1.data(), 2);
memcpy(LPtr, data2.data(), 2);
EXPECT_EQ(0x231f, L);
EXPECT_EQ(0x231fee3a, L);
The issue, I feel, is that LPtr does not point to the next byte that should be filled. I tried LPtr+2, but that does not point to an individual byte of the uint32_t.
This should be done using memcpy and output should go to uint32_t. Any idea to get this sorted?
Endianness is not an issue as of now. It can be corrected once the uint32_t is completed (when eventually 4 bytes get copied).
Any help is appreciated!
|
The problem is that you're using a pointer to uint32_t, so incrementing it advances by 4 bytes, not by 1 byte. Here is a version which populates all bytes of L, though it still leaves the endianness issue open:
uint32_t gimmeInteger(std::vector<uint8_t> data1, std::vector<uint8_t> data2)
{
assert((data1.size() == 2));
assert((data2.size() == 2));
uint32_t L = 0;
uint8_t* LPtr = reinterpret_cast<uint8_t*>(&L);
memcpy(LPtr, data1.data(), 2);
memcpy(LPtr+2, data2.data(), 2);
return L;
}
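A usage sketch with the question's data (on a little-endian machine the assembled value is 0xee3a231f, not the 0x231fee3a the test expects, which is the endianness issue mentioned above):
std::vector<uint8_t> d1 = {0x1f, 0x23};
std::vector<uint8_t> d2 = {0x3a, 0xee};
uint32_t L = gimmeInteger(d1, d2);
// bytes in memory: 1f 23 3a ee  ->  L == 0xee3a231f on little-endian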
|
73,346,507
| 73,346,547
|
CMake not finding include file I specified
|
I am trying to use CMake to debug a JUCE distortion project I'm working on but I can't get the CMakeLists.txt file find the header file JuceHeader.h so it can build and debug the project.
So here's what the file structure looks like:
distort (CMakeLists located here)
|
|_______Source
| |
| |________PluginEditor.cpp and .h, PluginProcessor.cpp and .h
|
|_______JuceLibraryCode
|
|________JuceHeader.h (and more)
and the txt file in its entirety:
cmake_minimum_required(VERSION 3.0.0)
project(Distort VERSION 0.1.0)
include(CTest)
enable_testing()
set(HEADER_FILES /home/wolf/vst/distort/JuceLibraryCode)
set(SOURCES Source/PluginProcessor.cpp Source/PluginEditor.cpp ${HEADER_FILES}/JuceHeader.h)
add_executable(Distort ${SOURCES})
target_include_directories(Distort PRIVATE home/wolf/vst/distort/JuceLibraryCode/)
set(CPACK_PROJECT_NAME ${PROJECT_NAME})
set(CPACK_PROJECT_VERSION ${PROJECT_VERSION})
include(CPack)
And it gives an error saying that it cannot find JuceHeader.h
And finally, the error output:
[build] /home/wolf/vst/distort/Source/PluginProcessor.h:11:10: fatal error: 'JuceHeader.h' file not found
[build] #include <JuceHeader.h>
[build] ^~~~~~~~~~~~~~
[build] In file included from /home/wolf/vst/distort/Source/PluginEditor.cpp:9:
[build] /home/wolf/vst/distort/Source/PluginProcessor.h:11:10: fatal error: 'JuceHeader.h' file not found
[build] #include <JuceHeader.h>
Any help would be greatly appreciated! :)
|
I suppose home/wolf/vst/distort/JuceLibraryCode/ was meant to be an absolute path, but a relative one was given (the leading / is missing). Hard-coding an absolute path is not the right solution anyway.
target_include_directories(Distort PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/JuceLibraryCode)
Or simply like below:
target_include_directories(Distort PRIVATE JuceLibraryCode)
|
73,346,834
| 73,347,405
|
Running C++ code asynchronously in a C# program
|
I wrote some backend code in C++ and I wrote a frontend for it in C#. What I want it to do is run the backend code in the background so I can do other things like update a progress bar, but when I click the "Start" button, the program hangs until it's finished running the backend code.
C# code:
[DllImport("backend.dll", CallingConvention = CallingConvention.Cdecl)]
public static extern int executeBackend();
private async void startButton_Click(object sender, EventArgs e)
{
startButton.Enabled = false;
await StartProgram();
}
private async Task StartProgram()
{
int status = 0;
status = executeBackend(); //This is causing the UI to deadlock
startButton.Enabled = true;
}
backend.dll C++ code:
extern "C" {
__declspec(dllexport) int executeBackend()
{
int statusCode = 0;
//Do stuff in the background
return statusCode;
}
}
If I comment out the call to run executeBackend and replace it with await Task.Delay(5000);, the UI doesn't deadlock. How would I fix this?
|
You can wrap the call to executeBackend in a Task to prevent the UI from locking up.
var status = await Task.Run(() => executeBackend());
I also think you're confused about what the async keyword actually does. I think it might be prudent for you to read up on how Asynchronous Programming works in dotnet.
|
73,347,657
| 73,348,005
|
A shared pointer to a section of memory belonging to another shared pointer
|
I know shared pointers are meant to share the same memory. But what if my shared pointer points to an element which is not the first one in the memory owned by another shared pointer?
Consider a raw pointer example:
int* array = new int[10];
int* segment = &array[5];
Can I do the same thing with array and segment being shared pointers? Will they share the reference count in this case?
|
std::shared_ptr has an aliasing constructor for exactly this kind of situation:
template< class Y >
shared_ptr( const shared_ptr<Y>& r, element_type* ptr ) noexcept;
The aliasing constructor: constructs a shared_ptr which shares ownership information with the initial value of r, but holds an unrelated and unmanaged pointer ptr. If this shared_ptr is the last of the group to go out of scope, it will call the stored deleter for the object originally managed by r. However, calling get() on this shared_ptr will always return a copy of ptr. It is the responsibility of the programmer to make sure that this ptr remains valid as long as this shared_ptr exists, such as in the typical use cases where ptr is a member of the object managed by r or is an alias (e.g., downcast) of r.get()
For example:
auto array = std::make_shared<int[]>(10);
auto segment = std::shared_ptr<int>(array, &array[5]);
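A usage sketch showing that both pointers share one control block, so the array outlives the original handle:
auto array = std::make_shared<int[]>(10);                 // C++20 array form
auto segment = std::shared_ptr<int>(array, &array[5]);    // aliasing constructor
array.reset();       // the int[10] is NOT freed: segment still owns it
*segment = 7;        // still valid
// the array is released only when the last owner (segment) goes away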
|
73,347,714
| 73,347,954
|
Errors when passing pointer to structs c++
|
I am a beginner in C++, so I'm sure the error is from my misunderstanding of pointers.
I was making a Poker game, where card is a struct with a suit and a value; each player's hand is filled with cards, and so is the middle. I passed the arrays to a function by pointer, but I think something is wrong with how I set up the pointers to the arrays.
Here is a simplified version of my code:
#include <iostream>
enum suit {hearts, diamonds, spades, clubs};
enum value {two, three, four, five, six, seven, eight, nine, ten, jack, queen, king, ace};
struct card
{
value v;
suit s;
};
static const int num_players = 4;
void get_winner(const card ** hands[num_players][2], const card * mid_cards[5])
{
for (int i = 0; i < 5; i++) {
std::cout << "Middle card " << i + 1 << ": " << * mid_cards[i][0] << ", " << * mid_cards[i][1] << "\n";
}
for (int i = 0; i < num_players; i++) {
std::cout << "Player " << i + 1 << "hand: " << * hands[i][0] << ", " << * hands[i][1] << "\n";
}
}
int main()
{
card hands[num_players][2];
card mid_cards[5];
// fill hands and mid_cards with card structs
hands[0][0] = card {jack, hearts};
hands[0][1] = card {nine, spades};
hands[1][0] = card {ace, clubs};
hands[1][1] = card {ace, hearts};
hands[2][0] = card {three, diamonds};
hands[2][1] = card {seven, spades};
hands[3][0] = card {eight, diamonds};
hands[3][1] = card {nine, clubs};
mid_cards[0] = card {five, clubs};
mid_cards[1] = card {king, hearts};
mid_cards[2] = card {queen, spades};
mid_cards[3] = card {two, clubs};
mid_cards[4] = card {ace, hearts};
get_winner(&hands, &mid_cards);
}
Here are the error messages:
main.cpp:17:51: error: indirection requires pointer operand
('const card' invalid)
...<< "Middle card " << i + 1 << ": " << * mid_cards[i][0] << ", " << * mid...
^ ~~~~~~~~~~~~~~~
main.cpp:17:80: error: indirection requires pointer operand
('const card' invalid)
...i + 1 << ": " << * mid_cards[i][0] << ", " << * mid_cards[i][1] << "\n";
^ ~~~~~~~~~~~~~~~
main.cpp:42:2: error: no matching function for call to 'get_winner'
get_winner(&hands, &mid_cards);
^~~~~~~~~~
main.cpp:14:6: note: candidate function not viable: no known conversion from
'card (*)[4][2]' to 'const card **(*)[2]' for 1st argument
void get_winner(const card ** hands[num_players][2], const card * mid_cards[5])
^
3 errors generated.
|
In your function's parameters, you are using the wrong syntax to accept the arrays by pointer. And, you are declaring the wrong element type for the arrays (you claim card* pointers, but the arrays actually hold card objects instead). And, your function is treating the mid_cards parameter like it is a pointer to a 2-dimensional array when the actual array in main() is 1-dimensional instead.
Also, you do not have an operator<< defined for your card struct, but you are trying to print card values to cout.
Try this instead:
#include <iostream>
enum suit {hearts, diamonds, spades, clubs};
enum value {two, three, four, five, six, seven, eight, nine, ten, jack, queen, king, ace};
struct card
{
value v;
suit s;
};
static const int num_players = 4;
static const char* suits[] = {"Hearts", "Diamonds", "Spades", "Clubs"};
static const char* values[] = {"2", "3", "4", "5", "6", "7", "8", "9", "10", "Jack", "Queen", "King", "Ace"};
std::ostream& operator<<(std::ostream &out, const card &c)
{
out << values[c.v] << " of " << suits[c.s];
return out;
}
void get_winner(const card (*hands)[num_players][2], const card (*mid_cards)[5])
{
for (int i = 0; i < 5; i++) {
std::cout << "Middle card " << i + 1 << ": " << (*mid_cards)[i] << "\n";
}
for (int i = 0; i < num_players; i++) {
std::cout << "Player " << i + 1 << "hand: " << (*hands)[i][0] << ", " << (*hands)[i][1] << "\n";
}
}
int main()
{
card hands[num_players][2];
card mid_cards[5];
// fill hands and mid_cards with card structs
hands[0][0] = card {jack, hearts};
hands[0][1] = card {nine, spades};
hands[1][0] = card {ace, clubs};
hands[1][1] = card {ace, hearts};
hands[2][0] = card {three, diamonds};
hands[2][1] = card {seven, spades};
hands[3][0] = card {eight, diamonds};
hands[3][1] = card {nine, clubs};
mid_cards[0] = card {five, clubs};
mid_cards[1] = card {king, hearts};
mid_cards[2] = card {queen, spades};
mid_cards[3] = card {two, clubs};
mid_cards[4] = card {ace, hearts};
get_winner(&hands, &mid_cards);
}
Online Demo
Alternatively, pass the arrays by reference instead of pointer, eg:
#include <iostream>
enum suit {hearts, diamonds, spades, clubs};
enum value {two, three, four, five, six, seven, eight, nine, ten, jack, queen, king, ace};
struct card
{
value v;
suit s;
};
static const int num_players = 4;
static const char* suits[] = {"Hearts", "Diamonds", "Spades", "Clubs"};
static const char* values[] = {"2", "3", "4", "5", "6", "7", "8", "9", "10", "Jack", "Queen", "King", "Ace"};
std::ostream& operator<<(std::ostream &out, const card &c)
{
out << values[c.v] << " of " << suits[c.s];
return out;
}
void get_winner(const card (&hands)[num_players][2], const card (&mid_cards)[5])
{
for (int i = 0; i < 5; i++) {
std::cout << "Middle card " << i + 1 << ": " << mid_cards[i] << "\n";
}
for (int i = 0; i < num_players; i++) {
std::cout << "Player " << i + 1 << "hand: " << hands[i][0] << ", " << hands[i][1] << "\n";
}
}
int main()
{
card hands[num_players][2];
card mid_cards[5];
// fill hands and mid_cards with card structs
hands[0][0] = card {jack, hearts};
hands[0][1] = card {nine, spades};
hands[1][0] = card {ace, clubs};
hands[1][1] = card {ace, hearts};
hands[2][0] = card {three, diamonds};
hands[2][1] = card {seven, spades};
hands[3][0] = card {eight, diamonds};
hands[3][1] = card {nine, clubs};
mid_cards[0] = card {five, clubs};
mid_cards[1] = card {king, hearts};
mid_cards[2] = card {queen, spades};
mid_cards[3] = card {two, clubs};
mid_cards[4] = card {ace, hearts};
get_winner(hands, mid_cards);
}
Online Demo
|
73,347,788
| 73,347,837
|
Error when calling QFlags::testFlag() with bitwise OR of flags
|
I am trying to test if one or both of 2 flags are set, using a single testFlag command as follows:
myFlags.testFlag(QIODevice::ReadWrite | QIODevice::Append)
which generates this compiler error:
error: no viable conversion from 'QFlags<QIODevice::OpenMode::enum_type>' (aka 'QFlags<QIODevice::OpenModeFlag>') to 'QIODevice::OpenModeFlag'
Why does this not work? Shouldn't ORing two flags just create another flag? Both QIODevice::ReadWrite and QIODevice::Append are of type QIODevice::OpenModeFlag.
Also it is worth noting that QIODevice::OpenMode type is a typedef for QFlags<QIODevice::OpenModeFlag>, and QIODevice::OpenModeFlag is an enumerated type.
Sample code is shown below:
#include <QTextStream>
#include <QIODevice>
int main(int argc, char *argv[])
{
QTextStream cout(stdout);
QIODevice::OpenMode myFlags(QIODevice::ReadWrite);
if (myFlags.testFlag(QIODevice::ReadWrite | QIODevice::Append))
cout << "true";
else
cout << "false";
}
|
The flags should be tested by one.
bool QFlags::testFlag(Enum flag) const is declared to accept a single flag; to accept a combination of flags it would have to take an int or a QFlags value instead. The name also makes this obvious: it is "testFlag", not "testFlags".
if (myFlags.testFlag(QIODevice::ReadWrite ) && myFlags.testFlag(QIODevice::Append))
cout << "true";
else
cout << "false";
There is the alternative function bool QFlags::testFlags(QFlags<T> flags) const
if (myFlags.testFlags({QIODevice::ReadWrite, QIODevice::Append}))
cout << "true";
else
cout << "false";
|
73,348,154
| 73,348,180
|
Value initialization of template object using new
|
I have a class with a pointer to T
template <typename T>
class ScopedPtr
{
T* ptr_;
public:
ScopedPtr();
~ScopedPtr();
//Copy and move ctors are removed to be short
}
template <typename T>
ScopedPtr<T>::ScopedPtr() : ptr_(new T{})
{}
And I want to use it as follows:
struct Numeric_t
{
int integer_;
double real_;
Load_t(int integer = 0, double real = 0) : integer_(integer), real_(real)
{}
};
struct String_t
{
std::string str_;
String_t(std::string str) : str_(str);
{}
};
int main()
{
ScopedPtr<Numeric_t> num{1, 2.0};
ScopedPtr<String_t> str{"Hello, world"};
}
Is there some way to define a general ScopedPtr ctor so that Numeric_t and String_t can be initialized with their constructor arguments?
Like so:
ScopedPtr<T>::ScopedPtr(//some data, depending on type//) : ptr_(new T{//some data//})
{}
|
The usual perfect forwarding comes to mind:
template <typename T>
template <typename... Args>
ScopedPtr<T>::ScopedPtr(Args&&... args) : ptr_(new T{std::forward<Args>(args)...})
{}
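For completeness, the forwarding constructor also needs a matching declaration inside the class template; a minimal sketch keeping the original member name ptr_ (the explicit keyword and the destructor body are illustrative choices, not part of the original class):
template <typename T>
class ScopedPtr
{
    T* ptr_;
public:
    template <typename... Args>
    explicit ScopedPtr(Args&&... args);   // matches the definition above
    ~ScopedPtr() { delete ptr_; }
};
With that in place, ScopedPtr<Numeric_t> num{1, 2.0}; forwards to new Numeric_t{1, 2.0}, and ScopedPtr<String_t> str{"Hello, world"}; forwards to new String_t{"Hello, world"}.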
|
73,348,522
| 73,348,567
|
Which operation is not trivial here?
|
Compilers agree that the below X and Y are default-constructible, but not trivially so (demo).
#include <type_traits>
struct X { int x {}; };
struct Y { int y = 0; };
static_assert(std::is_default_constructible_v<X>);
static_assert(std::is_default_constructible_v<Y>);
static_assert(!std::is_trivially_default_constructible_v<X>);
static_assert(!std::is_trivially_default_constructible_v<Y>);
Why are they not trivial? According to cppreference.com (see is_trivially_constructible) a non-trivial operation must have been called during default-construction. Which one is that?
|
https://en.cppreference.com/w/cpp/language/default_constructor#Trivial_default_constructor says:
The default constructor for class T is trivial (i.e. performs no action) if all of the following is true:
...
T has no non-static members with default initializers. (since C++11)
...
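So it is the default member initializer ({} or = 0) that makes the constructor non-trivial; removing it restores triviality:
#include <type_traits>
struct Z { int z; };   // no default member initializer
static_assert(std::is_trivially_default_constructible_v<Z>);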
|
73,348,816
| 73,349,078
|
How to properly satisfy all linker dependencies in following pybind11 project?
|
I am porting a large library to pybind11 for a Python interface. However I am stuck: I either get a multiple-definition error at the linker stage, or a symbol-not-found error during Python import. The simplified project structure is given below.
CMake file
cmake_minimum_required(VERSION 3.5)
# set the project name
project(tmp VERSION 1.0)
find_package(EnvModules REQUIRED)
env_module(load python39)
find_package(PythonInterp REQUIRED)
include_directories(${PYTHON_INCLUDE_DIRS})
include_directories(pybind11/include)
add_subdirectory(pybind11)
pybind11_add_module(TMP py_modules.cpp
SubConfiguration.cpp)
# pybind11_add_module(TMP py_modules.cpp)
Source files:
py_modules.cpp
#include "pybind11/pybind11.h"
#include "calculateStress.h"
namespace py = pybind11;
PYBIND11_MODULE(TMP, m){
py::class_<Stencil>(m, "Stencil")
.def(py::init<double &>());
py::class_<SubConfiguration>(m, "SubConfiguration")
.def(py::init<Stencil &>());
}
calculateStress.h
#ifndef CALCULATESTRESS_H_
#define CALCULATESTRESS_H_
#include "SubConfiguration.h"
#endif /* CALCULATESTRESS_H_ */
SubConfiguration.h
#ifndef SRC_SUBCONFIGURATION_H_
#define SRC_SUBCONFIGURATION_H_
#include "Stencil.h"
class SubConfiguration
{
public:
double& parent;
SubConfiguration(Stencil& stencil);
};
#endif /* SRC_SUBCONFIGURATION_H_ */
SubConfiguration.cpp
#include "SubConfiguration.h"
SubConfiguration::SubConfiguration(Stencil& stencil) :
parent(stencil.parent)
{}
Stencil.h
#ifndef SRC_STENCIL_H_
#define SRC_STENCIL_H_
class Stencil {
public:
double parent;
Stencil(double&);
};
Stencil::Stencil(double& parent) : parent(parent) { }
#endif /* SRC_STENCIL_H_ */
I think all include guards are properly set.
Now, when I use pybind11_add_module(TMP py_modules.cpp), I get the TMP module, but while importing it I get the error undefined symbol: _ZN16SubConfigurationC1ER7Stencil
Whereas with pybind11_add_module(TMP py_modules.cpp SubConfiguration.cpp) I get the following error:
/usr/bin/ld: CMakeFiles/TMP.dir/SubConfiguration.cpp.o (symbol from plugin): in function `Stencil::Stencil(double&)':
(.text+0x0): multiple definition of `Stencil::Stencil(double&)'; CMakeFiles/TMP.dir/py_modules.cpp.o (symbol from plugin):(.text+0x0): first defined here
/usr/bin/ld: CMakeFiles/TMP.dir/SubConfiguration.cpp.o (symbol from plugin): in function `Stencil::Stencil(double&)':
(.text+0x0): multiple definition of `Stencil::Stencil(double&)'; CMakeFiles/TMP.dir/py_modules.cpp.o (symbol from plugin):(.text+0x0): first defined here
collect2: error: ld returned 1 exit status
make[2]: *** [CMakeFiles/TMP.dir/build.make:99: TMP.cpython-39-x86_64-linux-gnu.so] Error 1
make[1]: *** [CMakeFiles/Makefile2:96: CMakeFiles/TMP.dir/all] Error 2
make: *** [Makefile:84: all] Error 2
How do I properly set it up?
|
The problem is that Stencil.h gets included by both py_modules.cpp and SubConfiguration.cpp. The include guards will only protect against double includes from a single compilation unit (cpp file).
Remove the implementation of the Stencil constructor from Stencil.h and move it into a new file Stencil.cpp as
#include "Stencil.h"
Stencil::Stencil(double& parent) : parent(parent) { }
Then add the new file to your module
...
add_subdirectory(pybind11)
pybind11_add_module(TMP py_modules.cpp SubConfiguration.cpp Stencil.cpp )
UPDATE (to reflect comments)
If you have a function or class template, you do not necessarily need to include the implementation in the header. You can provide explicit specializations (or explicit instantiations) in a .cpp file for the types you care about, as in this example:
In the header doit.h
template< typename T >
T addsome( T t );
In the body doit.cpp
#include "doit.h"
// Provide explicit specializations for the types you care about
template<>
int addsome<int>( int t ) {
return t+1;
}
template<>
double addsome<double>( double t ) {
return t+2;
}
In main.cpp or where you use it
int main() {
int res = addsome<int>( 1 ); // This will link against the compiled body doit.cpp
}
|
73,349,477
| 73,349,689
|
Segmentation fault occurs when variable set to indexed parameter
|
I have a poker game where an array of the players' hands and an array of the cards in the middle are passed as arguments to a function. In the function get_winner, I can loop over and print the cards in the arrays (2nd and 3rd for loops), but if I set a variable to an element of an array and print it, I get a segmentation fault. Here is a simplified version of my code:
#include <iostream>
using std::cout, std::endl;
static const int num_players = 4;
static const char* SUITS[] = {"Hearts", "Diamonds", "Spades", "Clubs"};
static const char* VALUES[] = {"2", "3", "4", "5", "6", "7", "8", "9", "10", "Jack", "Queen", "King", "Ace"};
enum suit {hearts, diamonds, spades, clubs};
enum value {two, three, four, five, six, seven, eight, nine, ten, jack, queen, king, ace};
struct card
{
value v;
suit s;
};
std::ostream& operator<<(std::ostream &out, const card &c)
{
out << VALUES[c.v] << " of " << SUITS[c.s];
return out;
}
int get_winner(const card (*hand)[num_players][2], const card (*commun_cards)[5])
{
// this loop causes an error when printing the temp card variable
for (int i = 0; i < 5; i++) {
auto temp_card = commun_cards[i];
cout << *temp_card << " " << (*temp_card).s << " " << (*temp_card).v << endl;
}
// these loops work fine and print the players' hands and community cards
for (int i = 0; i < num_players; i++) {
cout << (*hand)[i][0] << ", " << (*hand)[i][1] << endl;
}
for (int i = 0; i < 5; i++) {
cout << (*commun_cards)[i] << endl;
}
return 0;
}
int main()
{
card hands[num_players][2];
card commun_cards[5];
// fill hands and commun_cards with card structs
hands[0][0] = card {jack, spades};
hands[0][1] = card {nine, spades};
hands[1][0] = card {ace, clubs};
hands[1][1] = card {ace, hearts};
hands[2][0] = card {three, diamonds};
hands[2][1] = card {seven, spades};
hands[3][0] = card {eight, diamonds};
hands[3][1] = card {nine, clubs};
commun_cards[0] = card {five, clubs};
commun_cards[1] = card {king, hearts};
commun_cards[2] = card {queen, spades};
commun_cards[3] = card {two, spades};
commun_cards[4] = card {ace, hearts};
for (int i = 0; i < num_players; i++) {
cout << "Player " << i << " cards: " << hands[i][0] << ", " << hands[i][1] << endl;
}
cout << "\nCommunity cards:\n";
for (int i = 0; i < 5; i++) {
cout << commun_cards[i] << endl;
}
get_winner(&hands, &commun_cards);
}
|
Your function's 1st loop is accessing the commun_cards array incorrectly. Its 3rd loop is accessing the array correctly. Why would you expect the 1st loop to need to access the elements any differently than the 3rd loop just because it wants to save each element to a variable?
Since you are passing in each array by pointer, you must dereference each pointer before you can then index into the elements of each array. Your 2nd and 3rd loops are doing that. Your 1st loop is not.
Your 1st loop is expecting commun_cards[i] to yield a pointer to each element, but that is simply not true, which is why *temp_card then crashes. If you really want a pointer to each element, you need to use this instead:
auto *temp_card = &(*commun_cards)[i];
// alternatively:
// auto *temp_card = (*commun_cards) + i;
cout << *temp_card << " " << temp_card->s << " " << temp_card->v << endl;
Otherwise, use a reference to each element instead of a pointer:
auto &temp_card = (*commun_cards)[i];
// alternatively:
// auto &temp_card = *((*commun_cards) + i);
cout << temp_card << " " << temp_card.s << " " << temp_card.v << endl;
If you change the function to accept the arrays by reference instead of by pointer (which I highly suggest you do), then the function would become this:
int get_winner(const card (&hand)[num_players][2], const card (&commun_cards)[5])
{
for (int i = 0; i < 5; i++) {
auto &temp_card = commun_cards[i];
cout << temp_card << " " << temp_card.s << " " << temp_card.v << endl;
}
for (int i = 0; i < num_players; i++) {
cout << hand[i][0] << ", " << hand[i][1] << endl;
}
for (int i = 0; i < 5; i++) {
cout << commun_cards[i] << endl;
}
return 0;
}
...
get_winner(hands, commun_cards);
|
73,349,982
| 73,350,008
|
Why do I need std::endl to reproduce input lines I got with getline()?
|
I am a newbie learning how to read from and write to a file in C++. I searched for how to read all the contents of a file and learned that I can use a while loop.
string fileName = "data.txt";
string line ;
ifstream myFile ;
myFile.open(fileName);
while(getline(myFile,line)){
cout << line << endl;
}
data.txt has three lines of content, and the output is as below.
Line 1
Line 2
Line 3
but if I remove "endl" and only use cout<<line; inside the while loop's braces, the output changes to:
Line 1Line 2Line 3
From my understanding the while loop was executed 3 times, so what is the logic behind this?
|
endl means "end line" and it does two things:
Move the output cursor to the next line.
Flush the output; in case you're writing to a file this means the file will be updated right away.
By removing endl you are writing all the input lines onto a single output line: std::getline extracts and discards the '\n' at the end of each line, so line itself contains no newline, and you never told cout to start a new one.
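If you don't need the flush, writing a plain '\n' back is enough, for example:
while (getline(myFile, line)) {
    cout << line << '\n';   // getline discarded the newline, so add one back
}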
|
73,350,449
| 73,350,700
|
How to get error.h in visual studio or equivalent?
|
I have large C++ project that I inherited and am trying to transfer it from Linux to Visual Studio on Windows. Managed to link required libraries, but one build error just baffles me.
On Linux, someone was including the header <error.h> everywhere, and I can't even find a documentation page to see what it is. At first I thought it was part of the standard library, but I am now beginning to see it's a Linux-specific header.
So, how do I include it in Visual Studio? I can't even find what it is and am tempted to just rearrange the code to use the stdexcept header, since the only thing this code does is abort the program and print out error messages using some error(...) function from error.h.
|
error.h is a header from the GNU C Library. It is not specific to Linux, it is specific to glibc.
It is documented right here (search on the page for error.h).
The functions declared in error.h should not be used in portable code, so getting rid of code that uses them is not a bad idea. Alternatively, it is not difficult to implement them.
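If you just want the existing calls to keep compiling on MSVC, a minimal stand-in for the common error(status, errnum, format, ...) usage could look like the sketch below (glibc's real version also prints the program name and supports a few global variables, which this ignores):
#include <cstdarg>
#include <cstdio>
#include <cstdlib>
#include <cstring>
// Prints a formatted message to stderr, appends strerror(errnum) if errnum != 0,
// and exits with 'status' if status != 0 - roughly what glibc's error() does.
static void error(int status, int errnum, const char* format, ...)
{
    std::fflush(stdout);
    va_list args;
    va_start(args, format);
    std::vfprintf(stderr, format, args);
    va_end(args);
    if (errnum != 0)
        std::fprintf(stderr, ": %s", std::strerror(errnum));
    std::fputc('\n', stderr);
    if (status != 0)
        std::exit(status);
}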
|
73,350,679
| 73,355,513
|
what atomicity does compare_exchange_weak provide?
|
quote from https://en.cppreference.com/w/cpp/atomic/atomic/compare_exchange
bool compare_exchange_weak( T& expected, T desired,
std::memory_order success,
std::memory_order failure ) noexcept;
Atomically compares the object representation (until C++20)value representation (since C++20) of *this with that of expected, and if those are bitwise-equal, replaces the former with desired (performs read-modify-write operation). Otherwise, loads the actual value stored in *this into expected (performs load operation).
I'm having a hard time understanding the word "Atomically" above. Since expected and desired are both plain objects of type T, read & write operations on them are not atomic. If the underlying atomic value and expected are not equal, is the operation of loading *this into expected also atomic? Or does "Atomically" only apply to the operation on the underlying atomic value?
|
Since expected and desired are both plain objects of type T, read & write operations on them are not atomic
That's true. The load of expected (and the store, if it happens) is not an atomic operation. Therefore, if this call to compare_exchange_weak is potentially concurrent with any other operation that accesses expected, the program has a race condition (unless both operations are only reads). The load of desired is also not an atomic operation, although this doesn't matter because it's a local variable in the compare_exchange_weak function (no other thread can see it).
However, the operation on *this is atomic. This means that it will not race with any other potentially concurrent atomic operation on *this; if there are two potentially concurrent atomic operations on *this, the behaviour will be as if one thread or the other "gets there first", without causing any undefined behaviour. Furthermore, it will be either an atomic read-modify-write operation (on success) or an atomic load operation (on failure). An atomic read-modify-write operation has the additional constraint that the entry that it creates in the atomic variable's modification order immediately follows the entry from which the read took its value (no other modifications can "intervene").
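A typical use keeps expected in a local variable, so the non-atomic accesses to it cannot race with anything; only the operation on the atomic object itself needs to be (and is) atomic:
#include <atomic>
void add_one(std::atomic<int>& a)
{
    int expected = a.load();
    // On failure, compare_exchange_weak reloads 'expected' with the current
    // value of 'a' (a plain store to a local variable, so no race is possible).
    while (!a.compare_exchange_weak(expected, expected + 1)) {
        // retry
    }
}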
|
73,350,721
| 73,350,835
|
How to make destructor wait until other thread's job complete?
|
I have one main thread that sends an async job to a task queue on another thread. This main thread can trigger a destroy action at any time, which could cause the program to crash in the async task. Here is a very much simplified piece of code:
class Bomb {
public:
int trigger;
mutex my_mutex;
};
void f1(Bomb *b) {
lock_guard<std::mutex> lock(b->my_mutex); //won't work! Maybe b have been destructed!
sleep(1);
cout<<"wake up.."<<b->trigger<<"..."<<endl;
}
int main()
{
Bomb *b = new Bomb();
b->trigger = 1;
thread t1(f1, b);
sleep(1);
//lock here won't work
delete b;//in actual case it is triggered by outside users
t1.join();
return 0;
}
The lock in f1 won't work since the destructor can be called first, and then trying to read the mutex will crash. Putting a lock in the destructor or before the delete also won't work, for the same reason.
So is there any better way in this situation? Do I have to put the mutex in the global scope and use it inside the destructor to solve the issue?
|
In code, my comment looks like this (the key point is that the std::future returned by std::async blocks in its destructor, so ~Bomb cannot finish before the async task does):
#include <future>
#include <mutex>
#include <iostream>
#include <chrono>
#include <thread>
// do not use : using namespace std;
class Bomb
{
public:
void f1()
{
m_future = std::async(std::launch::async,[this]
{
async_f1();
});
}
private:
void async_f1()
{
using namespace std::chrono_literals;
std::lock_guard<std::mutex> lock{ m_mtx };
std::cout << "wake up..\n";
std::this_thread::sleep_for(1s);
std::cout << "thread done.\n";
}
std::future<void> m_future;
std::mutex m_mtx;
};
int main()
{
{
std::cout << "Creating bomb\n";
Bomb b; // no need to use unecessary new
b.f1();
}
std::cout << "Bomb destructed\n";
return 0;
}
|
73,352,462
| 73,352,799
|
Can a template function taking a class object instantiate that object with its constructor's arguments?
|
Let's say I have a template function taking a class object:
template<class T>
void Foo(T obj);
and a class definition as follows:
class Bar
{
public:
Bar(int a, bool b): _a(a), _b(b) {}
private:
int _a;
bool _b;
};
Is there a way to make the following code compile?
Foo<Bar>(5,false);
Foo<Bar>({5,false}); // i know this works, just wondering if i can remove the brackets somehow.
|
Yes, this can be done with variadic templates and forwarding, and has many standard examples, like std::make_unique.
In your case it would be:
template<class T, class ...Args>
void Foo(Args &&...args)
{
T obj { std::forward<Args>(args)... };
// use obj
}
|
73,353,152
| 73,357,934
|
Why can std::move_iterator advertise itself as a forward (or stronger) iterator, when it dereferences to an rvalue reference?
|
According to cppreference, std::move_iterator sets its ::iterator_category to the category of its underlying iterator1.
But I reckon it can be an input/output iterator at best, since for forward iterators reference must be an lvalue reference, while move_iterator sets reference (and the return type of operator*) to an rvalue reference2.
Is this a blatant mistagging of the iterator with a wrong category?
Being able to do this for my own iterators is undoubtedly convenient. Is there any reason I shouldn't do this, if even the standard library does so?
1 But anything stronger than random_access_iterator_tag is truncated to random_access_iterator_tag, which is weird, since contiguous_iterator_tag is only supposed to be used for ::iterator_concept.
2 Or leaves it untouched if it's not a reference, but then the underlying iterator shouldn't advertise itself as a forward iterator either.
|
On one hand, the cppreference article on forward iterator requirements was wrong (already fixed by someone). reference must be any reference (& or &&), not specifically lvalue reference (&). Meaning move_iterator does conform.
But on the other hand, auto-determining ::iterator_category uses different wording, which only permits lvalue references:
concept cpp17-forward-iterator = ... && is_lvalue_reference_v<iter_reference_t<I>>
This looks like a standard defect. I've written to lwgchair@gmail.com and am waiting for a response.
Sources for forward iterators being able to use any kind of reference:
[forward.iterators]/1.3:
if X is a mutable iterator, reference is a reference to T; if X is a constant iterator, reference is a reference to const T
See also LWG1211 (from 2009), which raised the same issue, and was resolved by N3066 (in 2010), which changed the wording from "lvalue reference" to "any reference". (Thanks @康桓瑋 for the links).
|
73,353,164
| 73,405,716
|
How to create a menubar in SFML application?
|
I'm trying to write an SFML 2.5.1 program, but I have hit an issue: I can't find anywhere on the internet how to create a working program menubar, which sits at the top of the window (or at the top of the screen on macOS), something like this:
All I have found on the internet is a titlebar with only the first two menubar options, File and Edit, and they don't work for some reason: they don't react to clicking, and I can't figure out why.
Help me please
|
You can hide the original title bar and make your own using classes (button or titlebar_button, call it whatever you want), which is just a rectangle shape with an on_Click function; a rough sketch of such a button is shown at the end of this answer.
https://en.sfml-dev.org/forums/index.php?topic=24051.0 - Answer
https://www.sfml-dev.org/documentation/1.6/namespacesf_1_1Style.php - Style documentation (I couldn't find one for 2.5.1, but it isn't that outdated; in newer versions there is also Style::Default)
Example:
sf::Window my_Window(sf::VideoMode(640U,480U),"w",sf::Style::None);
However, you have to keep in mind that your window won't have any border and thus won't be resizable or closable until you implement this yourself.
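For illustration, a minimal sketch of such a button (the class and method names are just placeholders; only the sf:: calls are actual SFML 2.5 API), checked once per frame against the mouse position:
#include <SFML/Graphics.hpp>
class MenuButton {
public:
    MenuButton(sf::Vector2f pos, sf::Vector2f size) {
        shape.setPosition(pos);
        shape.setSize(size);
        shape.setFillColor(sf::Color(200, 200, 200));
    }
    // true when the left mouse button is pressed while the cursor is over the rectangle
    bool on_Click(const sf::RenderWindow& window) const {
        auto mouse = sf::Mouse::getPosition(window);
        return sf::Mouse::isButtonPressed(sf::Mouse::Left) &&
               shape.getGlobalBounds().contains((float)mouse.x, (float)mouse.y);
    }
    void draw(sf::RenderWindow& window) const { window.draw(shape); }
private:
    sf::RectangleShape shape;
};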
|
73,353,235
| 73,356,076
|
Qt , how to create QApplication without argc and argv
|
Hey, so I need to export a Qt application as a .dll, so I don't want any arguments like argc and argv, but QApplication needs them, so I tried this:
int main()
{
int c=1;
char** v = (char**)("ApplicationName");
QApplication app(c,v);
MainWindow window;
window.show();
return app.exec();
}
but I get a segfault from QtCore... Can someone help me bypass the segfault and create the QApplication without needing argc and argv?
This didn't solve the problem because it needs argv defined ...
QApplication app(argc, argv)
|
Try this:
int main()
{
    // QApplication takes argc by reference and expects argc/argv to stay valid
    // for its whole lifetime, so give it named variables (static to be safe)
    static int argc = 1;
    static char* args[] = { (char*)"AppName" };
    QApplication app(argc, args);
    MainWindow window;
    window.show();
    return app.exec();
}
|
73,353,308
| 73,354,501
|
Asynchronous Destruction and RAII in C++
|
According to RAII when I destroy the object, its resources are deallocated. But what if the destruction of the object requires asynchronous operations?
Without using RAII I can call close method of my class that will call necessary async operations and will keep shared_ptr to my object (using shared_from_this) in order to gracefully process callbacks from the async operations. After calling close I can remove my pointer to the object since I do not need it anymore - but I know that the object will not be removed until the async operations are executed.
But how can I achieve this using RAII? One of the possible solutions can be using a wrapper that when destructed will call close method of by object. But.. will it mean that my classes are RAII classes?
class Resource: public std::enable_shared_from_this<Resource> {
int id;
public:
void close() {
    some_async_function([t = shared_from_this()] {
        std::cout << "Now I am closed " << t->id << std::endl;
    });
}
};
My solution:
class ResourceWrapper {
    std::shared_ptr<Resource> res;
public:
    ~ResourceWrapper() {
        res->close();
    }
};
|
An object o that is to be destroyed asynchronously with respect to the thread, T, in which it was created cannot itself be managed via RAII, because destruction of stack-allocated objects is inherently synchronous. If o is managed via the RAII model then thread T will execute its destructor when the innermost block containing its declaration terminates, and T is occupied until the destructor finishes.
If you do not want to occupy T with releasing o's resources (yet ensure that they are in fact released), then you must delegate the resource release to another thread. You can do that either
directly, by creating o dynamically with new and dispatching the corresponding free asynchronously (e.g. via std::async), or
indirectly with RAII, by having o's destructor dispatch the resource cleanup for separate asynchronous release (again, via std::async or similar).
The latter is indirect because it requires a separate object or objects to represent the resource(s) being cleaned up until the cleanup is complete -- especially so if there are callbacks involved or similar. You face the same issue with these objects that you do with o itself. There is nothing inherently wrong with this indirection, by the way. RAII still provides the benefits it usually does.
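As a rough sketch of the indirect option (this uses a detached std::thread rather than std::async, because the future returned by std::async blocks in its destructor until the task finishes; Resource is the class from the question, and the blocking release itself is just a placeholder comment):
#include <memory>
#include <thread>
class AsyncReleaser {
    std::shared_ptr<Resource> res;
public:
    explicit AsyncReleaser(std::shared_ptr<Resource> r) : res(std::move(r)) {}
    ~AsyncReleaser() {
        // hand the resource off to another thread; the shared_ptr captured by the
        // lambda keeps the Resource alive until the cleanup has actually finished
        std::thread([r = std::move(res)] {
            if (r) { /* call r->close() or whatever blocking cleanup is needed here */ }
        }).detach();
    }
};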
|
73,353,459
| 73,353,577
|
C++ standard conforming method to assign address of program memory to pointer
|
How do I assign an internal process memory address to a pointer object in C++ via a standard-conforming method?
For example, this is undefined behavior, because it's not defined by the C++ standard:
CInterpretator* pInterpretatorObj= reinterpret_cast<CInterpretator*>(0x1000FFFF);
or this, without reinterpret_cast, but with the same effect:
CInterpretator* pInterpretatorObj= static_cast<CInterpretator*>(static_cast<void*>(0x1000FFFF));
Maybe using an .asm file with a public function that returns the address of the object, and calling this "foreign" function from the C++ program code to get the address, is standard conforming, or maybe not?
But to me that is very ugly. Maybe there are better methods for this.
|
There is no standards-compliant way of doing this. Standard C++ does not have a notion of memory layout, nor of particular integers being meaningful when casted to pointers (other than those which came from casting pointers to integers).
The good news is, “undefined behavior” is undefined by the standard. Implementations are free to offer guarantees that certain types of otherwise-UB code will do something meaningful. So if you want guaranteed correctness, rather than “just happened to work”, you won’t get that from the Standard but you may get it from your compiler documentation.
For literally all C++ compilers I know of, using reinterpret_cast as you’ve done here will do what you expect it to do.
|
73,353,503
| 73,355,194
|
How to decode a picture converted to base64 using CryptStringToBinary?
|
My doubt is about how to use the value returned by the API to reconstruct the image.
Also, does the way I'm creating the Bitmap preserve the picture transparency?
Gdiplus::GdiplusStartupInput gdiplusStartupInput;
ULONG_PTR gdiplusToken;
Gdiplus::GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL);
LPCWSTR base64 = L"";
DWORD dwSkip;
DWORD dwFlags;
DWORD dwDataLen;
CryptStringToBinary(
base64,
_tcslen(base64),
CRYPT_STRING_BASE64,
NULL,
&dwDataLen,
&dwSkip,
&dwFlags);
DWORD imageSize = dwDataLen;
HGLOBAL hMem = ::GlobalAlloc(GMEM_MOVEABLE, imageSize);
LPVOID pImage = ::GlobalLock(hMem);
memcpy(pImage, ???? , imageSize);
IStream* pStream = NULL;
::CreateStreamOnHGlobal(hMem, FALSE, &pStream);
Gdiplus::Image image(pStream);
image.GetWidth();
int wd = image.GetWidth();
int hgt = image.GetHeight();
auto format = image.GetPixelFormat();
Bitmap* bmp = new Bitmap(wd, hgt, format);
auto gg = std::unique_ptr<Graphics>(Graphics::FromImage(bmp));
gg->Clear(Color::Transparent);
gg->DrawImage(&image, 0, 0, wd, hgt);
HICON hIcon;
bmp->GetHICON(&hIcon);
pStream->Release();
GlobalUnlock(hMem);
GlobalFree(hMem);
wc.hIcon = hIcon;
|
My doubt is about how to use the value returned by the API to reconstruct the image.
You are calling CryptStringToBinary() only 1 time, to calculate the size of the decoded bytes. You are even allocating memory to receive the decoded bytes. But, you are not actually decoding the base64 string to produce the bytes. You need to call CryptStringToBinary() a second time for that, eg:
LPCWSTR base64 = L"...";
DWORD dwStrLen = static_cast<DWORD>(wcslen(base64)); // or: 0
DWORD dwDataLen = 0;
if (!CryptStringToBinaryW(
base64,
dwStrLen,
CRYPT_STRING_BASE64,
NULL,
&dwDataLen,
NULL,
NULL))
{
// error handling...
}
HGLOBAL hMem = ::GlobalAlloc(GMEM_MOVEABLE, dwDataLen);
if (!hMem)
{
// error handling...
}
LPVOID pImage = ::GlobalLock(hMem);
if (!CryptStringToBinaryW(
base64,
dwStrLen,
CRYPT_STRING_BASE64,
(BYTE*) pImage,
&dwDataLen,
NULL,
NULL))
{
// error handling...
}
::GlobalUnlock(hMem);
// use hMem as needed...
::GlobalFree(hMem);
|
73,353,632
| 73,353,746
|
Is there a way to create a data type that can holds both integers and strings in C++?
|
Is there a way to create a data type that can holds both integers and strings in C++?
For example, create a data type with name sti that I can define both integers and strings variables with it:
sti a = 10; //this is integer
sti b = "Hello"; //and this is string
|
You can use std::variant to achieve this which is C++17 feature. Refer this link to know more.
Following is code sample. See it working here:
#include <iostream>
#include <variant>
#include <string>
int main()
{
using sti = std::variant<int, std::string>;
sti a = 10;
sti b = "Hello";
std::cout<<std::get<int>(a) << " | " <<
std::get<0>(a) <<std::endl; // Same as above
std::cout<<std::get<std::string>(b) << " | " <<
std::get<1>(b) <<std::endl;// Same as above
return 0;
}
|
73,353,669
| 73,353,716
|
C++ Template class member function that returns the same template class data type
|
I'm trying to make a proof-of-concept class template that makes a 2D vector. I'm trying to make a member function that returns a "flipped" version of the vector where x becomes y and vice versa. I want the function to return the Vector2 template class data type. This is my syntax:
Class:
template<class T>
class Vector2
{
private:
T m_x;
T m_y;
public:
Vector2();
Vector2(const T& x, const T& y);
T getX();
T getY();
void setX(const T& x);
void setY(const T& y);
template <class U> friend Vector2<U> getFlippedCopy();
};
The syntax for the flip function:
template <class T>
Vector2<T> Vector2<T>::getFlippedCopy()
{
Vector2<T> vectorCopy;
vectorCopy.setX(m_y);
vectorCopy.setY(m_x);
return vectorCopy;
}
However, I get an error:
classes.hpp:51:12: error: no declaration matches ‘Vector2<T> Vector2<T>::getFlippedCopy()’
51 | Vector2<T> Vector2<T>::getFlippedCopy()
| ^~~~~~~~~~
classes.hpp:51:12: note: no functions named ‘Vector2<T> Vector2<T>::getFlippedCopy()’
classes.hpp:4:7: note: ‘class Vector2<T>’ defined here
4 | class Vector2
| ^~~~~~~
Also, where is a good resource to properly learn templates and all their complexities? I find lots of YouTube videos don't go beyond the basics or skim over complicated stuff...
|
You are trying to define a member function named getFlippedCopy but you haven't declared such a function.
I suspect that you've made a mistake by instead declaring a free friend function with the same name. I suggest making it a member function instead, which should then be const qualified:
template<class T>
class Vector2 {
//...
// note: not a function template:
Vector2 getFlippedCopy() const;
};
template <class T>
Vector2<T> Vector2<T>::getFlippedCopy() const {
Vector2<T> vectorCopy;
vectorCopy.setX(m_y);
vectorCopy.setY(m_x);
return vectorCopy;
}
|
73,353,689
| 73,353,781
|
Segmentation fault (core dumped) not able to debug the code for binary search with duplicates problem?
|
The problem is to return the lowest index of the element in a sorted list with duplicates.
But my code is giving a segmentation error. I am not able to identify the error in the code.
int binary_search(const vector<int> &a, int left, int right, int x)
{
// write your code here
if (right - left == 0)
return right;
while (right >= left)
{
int mid = right - left / 2;
if (a[mid] == x)
return binary_search(a, left, mid, x);
else if (a[mid] > x)
right = mid - 1;
else
left = mid + 1;
}
return -1;
}
int main()
{
int n;
std::cin >> n;
vector<int> a(n);
for (size_t i = 0; i < a.size(); i++)
{
std::cin >> a[i];
}
int m;
std::cin >> m;
vector<int> b(m);
for (int i = 0; i < m; ++i)
{
std::cin >> b[i];
}
for (int i = 0; i < m; ++i)
{
// replace with the call to binary_search when implemented
std::cout << binary_search(a, 0, (int)a.size() - 1, b[i]) << ' ';
}
}
|
When you find the result a[mid] == x, store it & keep searching to the left portion for the lowest index.
int binary_search(const vector<int> &a, int left, int right, int x)
{
// write your code here
if (right - left == 0)
return right;
int idx = -1;
while (right >= left)
{
int mid = left + (right - left) / 2; // see the P.S. below about computing mid
// modified
if (a[mid] == x) {
idx = mid;
right = mid - 1;
}
else if (a[mid] > x)
right = mid - 1;
else
left = mid + 1;
}
return idx;
}
P.S.: You might want to check the way you're calculating the mid value!
Usually, mid = (left + right) / 2 or mid = left + (right - left) / 2 to avoid overflow.
|
73,354,202
| 73,354,295
|
Why does the pointer exist even after the unique_ptr to which the pointer is assigned goes out of scope?
|
I recently started learning about smart pointers and move semantics in C++. But I can't figure out why this code works. I have such code:
#include <iostream>
#include <memory>
using namespace std;
class Test
{
public:
Test()
{
cout << "Object created" << endl;
}
void testMethod()
{
cout << "Object existing" << endl;
}
~Test()
{
cout << "Object destroyed" << endl;
}
};
int main(int argc, char *argv[])
{
Test* testPtr = new Test{};
{
unique_ptr<Test> testSmartPtr(testPtr);
}
testPtr->testMethod();
return 0;
}
My output is:
Object created
Object destroyed
Object existing
Why does row testPtr->testMethod() work? Doesn't unique_ptr delete the pointer assigned to it on destruction if the pointer is an lvalue?
Edit: I learned from the comments that this method doesn't check if the pointer exists. If so, is there a way to check if the pointer is valid?
Edit: I learned that I shouldn't do anything with invalid pointers. Thank you for all your answers and comments.
Edit: Even this code works:
#include <iostream>
#include <memory>
using namespace std;
class Test
{
public:
Test(int num) :
number{ num }
{
cout << "Object created" << endl;
}
void testMethod(int valueToAdd)
{
number += valueToAdd;
cout << "Object current value: " << number << endl;
}
~Test()
{
cout << "Object destroyed" << endl;
}
private:
int number;
};
int main(int argc, char *argv[])
{
Test* testPtr = new Test(42);
{
unique_ptr<Test> testSmartPtr(testPtr);
}
testPtr->testMethod(3);
return 0;
}
I think it's because the compiler optimized it. Anyway, I really shouldn't do anything with invalid pointers.
|
You do not need a std::unique_ptr to write code with the same issue
int main(int argc, char *argv[])
{
Test* testPtr = new Test{};
delete testPtr;
testPtr->testMethod(); // UNDEFINED !!!
return 0;
}
The output is the same as yours here https://godbolt.org/z/8bocKGj1M, but it could be something else entirely. The code has undefined behavior. You shall not dereference an invalid pointer.
If you had actually used members of the object in testMethod(), some faulty output or a crash would be more likely, but also not guaranteed. Looking OK is the worst incarnation of undefined behavior.
Your code demonstrates nicely why you should ban raw new completely. At the very least you should call new only as a parameter to the smart pointer's constructor, or even better, use std::make_unique. It's basically just a wrapper around a constructor call via new, and its main purpose is to let you write code that is free of new:
int main(int argc, char *argv[])
{
auto testPtr = std::make_unique<Test>();
testPtr->testMethod();
return 0;
}
Even then you can access the raw pointer and do wrong stuff. Smart pointers help with ownership, but they are not fool-proof.
|
73,354,331
| 73,369,231
|
when using ImGui with Glut it does not show any objects
|
I'm coding a rendering engine in C++ with OpenGL and GLUT and trying to integrate ImGUI into my engine, but I have a problem. it either renders the gui and only the background (no objects), or it only renders the objects and background (no GUI). This code:
glutDisplayFunc(renderScene);
glutIdleFunc(renderScene);
glutReshapeFunc(sizeChange);
glutSpecialUpFunc(releaseKey);
glutSpecialFunc(pressKey);
IMGUI_CHECKVERSION();
ImGui::CreateContext();
ImGuiIO& io = ImGui::GetIO(); (void)io;
ImGui::StyleColorsLight();
ImGui_ImplGLUT_Init();
ImGui_ImplGLUT_InstallFuncs();
ImGui_ImplOpenGL2_Init();
creates this:
and this code:
IMGUI_CHECKVERSION();
ImGui::CreateContext();
ImGuiIO& io = ImGui::GetIO(); (void)io;
ImGui::StyleColorsLight();
ImGui_ImplGLUT_Init();
ImGui_ImplGLUT_InstallFuncs();
ImGui_ImplOpenGL2_Init();
glutDisplayFunc(renderScene);
glutIdleFunc(renderScene);
glutReshapeFunc(sizeChange);
glutSpecialUpFunc(releaseKey);
glutSpecialFunc(pressKey);
creates this:
|
The problem is that GLUT callback handlers in both of your examples are set both manually (glut...Func) and by ImGui via ImGui_ImplGLUT_InstallFuncs. The latter sets default ImGui handlers for many GLUT callbacks (see the source), in particular glutReshapeFunc is used to set current window resize callback to ImGui_ImplGLUT_ReshapeFunc (source), which sets internal ImGui display size according to given parameters.
When in the second example glutReshapeFunc(sizeChange) gets called, ImGui handler gets unset, so it doesn't get called during window resize (which, in particular, happens also before the first display according to the GLUT documentation), which leaves internal ImGui state not set properly. In the first example, the situation is reversed - your reshape handler sizeChange gets unset and replaced by ImGui one, so initialization and rendering of your cube doesn't go as intended because sizeChange never gets called.
To resolve this, you can omit ImGui_ImplGLUT_InstallFuncs and use your GLUT handlers while, apart from custom logic, calling default ImGui handlers from them if they are present and their functionality is needed (you can see what default handlers exist by inspecting the sources linked above). E.g. you can change sizeChange like this:
void sizeChange(int w, int h)
{
// Your logic
// ...
ImGui_ImplGLUT_ReshapeFunc(w, h);
}
If you want key presses and releases to be registered by ImGui as well as your own code, you may change releaseKey and pressKey in a similar way by adding, respectively, ImGui_ImplGLUT_SpecialUpFunc and ImGui_ImplGLUT_SpecialFunc calls to them.
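For example (a sketch, assuming your pressKey already uses GLUT's special-key callback signature):
void pressKey(int key, int x, int y)
{
    // Your logic
    // ...
    ImGui_ImplGLUT_SpecialFunc(key, x, y);
}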
|
73,354,972
| 73,355,028
|
How to cross link libraries in CMake
|
In a C++ CMake project I have an executable main and two libraries lib1 and lib2. A function in lib1 needs a function from lib2 and vice versa. Also, lib1 only contains .h files. The main executable will use both libraries. When I try to "make" the project, I get an error:
error: redefinition of ‘void lib1()’.
The file structure looks somewhat like this
/path/to/my/project
├── CMakeLists.txt # Project directory
├── main.cpp
├── Lib1
│ ├── ...files (.h only)...
│ ├── CMakeLists.txt # lib1 cmake
├── Lib2
│ ├── ...source files (.cpp & .h)...
│ ├── CMakeLists.txt # lib2 cmake
The CMakeLists.txt in the Project directory includes the following:
add_executable(${PROJECT_NAME} main.cpp)
add_subdirectory(Lib1)
add_subdirectory(Lib2)
target_link_libraries(${PROJECT_NAME}
lib2
lib1
)
The CMakeLists.txt in the Lib1 directory includes the following:
add_library(lib1 INTERFACE)
target_include_directories(lib1
INTERFACE ${CMAKE_CURRENT_SOURCE_DIR}
)
target_link_libraries(lib1 INTERFACE
lib2
)
The CMakeLists.txt in the Lib2 directory includes the following:
add_library(lib2 ${SOURCES} ${HEADERS}) # SOURCES and HEADERS set in lines above
target_include_directories(lib2
INTERFACE ${CMAKE_CURRENT_SOURCE_DIR}
)
target_link_libraries(lib2
lib1
)
If I had to guess, the issue is it is trying to import lib1 headers twice. Once from lib2 and once in my main executable. How do I link the libraries so that isn't an issue?
|
I don't think it's anything related to CMake. Although convoluted (I'd do it another way, but hey, it's your code), I think you are defining the body of a function in a lib1 header where it should reside in a .cpp file.
Make that function lib1 inline.
inline void lib1() {
...
}
or alternatively declare it in the header and implement it in a source (.cpp) file:
//lib1.h
void lib1();
Then
//lib1.cpp
#include "lib1.h"
void lib1() {
...
}
|
73,355,092
| 73,383,655
|
How to convert/use the run time variable in compile time expression?
|
I have the following situation (live code : https://gcc.godbolt.org/z/d8jG9bs9a):
#include <iostream>
#include <type_traits>
#define ENBALE true // to enable disable test solutions
enum struct Type : unsigned { base = 0, child1, child2, child3 /* so on*/ };
// CRTP Base
template<typename Child> struct Base {
void doSomething() { static_cast<Child*>(this)->doSomething_Impl(); }
private:
Base() = default;
friend Child;
};
struct Child1 : public Base<Child1> {
void doSomething_Impl() { std::cout << "Child1 implementation\n"; }
};
struct Child2 : public Base<Child2> {
void doSomething_Impl() { std::cout << "Child2 implementation\n"; }
};
struct Child3 : public Base<Child3> {
void doSomething_Impl() { std::cout << "Child3 implementation\n"; }
};
// ... so on
class SomeLogicClass
{
Type mClassId{ Type::base };
Child1 mChild1;
Child2 mChild2;
Child3 mChild3;
public:
Type getId() const { return mClassId; }
void setId(Type id) { mClassId = id; } // run time depended!
#if ENBALE // Solution 1 : simple case
/*what in C++11?*/ getInstance()
{
switch (mClassId)
{
case Type::child1: return mChild1;
case Type::child2: return mChild2;
case Type::child3: return mChild3;
default: // error case!
break;
}
}
#elif !ENBALE // Solution 2 : SFINAE
template<Type ID>
auto getInstance() -> typename std::enable_if<ID == Type::child1, Child1&>::type { return mChild1; }
template<Type ID>
auto getInstance() -> typename std::enable_if<ID == Type::child2, Child2&>::type { return mChild2; }
template<Type ID>
auto getInstance() -> typename std::enable_if<ID == Type::child3, Child3&>::type { return mChild3; }
#endif
};
void test(SomeLogicClass& ob, Type id)
{
ob.setId(id);
#if ENBALE // Solution 1
auto& childInstance = ob.getInstance();
#elif !ENBALE // Solution 2
auto& childInstance = ob.getInstance<ob.getId()>();
#endif
childInstance.doSomething(); // calls the corresponding implementations!
}
int main()
{
SomeLogicClass ob;
test(ob, Type::child1);
test(ob, Type::child2);
test(ob, Type::child3);
}
The problem is that the selection of the child class (whose doSomething_Impl() must be called) should take place based on the run-time variable mClassId of SomeLogicClass.
The only two possible solutions I can think of are a normal switch case and SFINAE on the member functions, as described in the above minimal example. As noted in the comments in the code, neither can work, for these reasons:
Solution 1: the member function must have a unique return type
Solution 2: SFINAE required a compile time expression to decide which overload to be chosen.
Update
std::variant (as mentioned by @lorro) would be the easiest solution here. However, it requires C++17 support.
However, I would like to know if there is some way around this that works under the compiler flag c++11.
Note: I am working with a code-base, where external libs such as boost, can not be used, and the CRTP class structure is mostly untouchable.
|
Since you are restricted to C++11 and are not allowed to use external libraries such as boost::variant, an alternative would be to reverse the logic: Do not attempt to return the child type but instead pass in the operation to perform on the child. Your example could become this (godbolt):
#include <iostream>
#include <type_traits>
enum struct Type : unsigned { base = 0, child1, child2, child3 /* so on*/ };
// CRTP Base
template<typename Child> struct Base {
void doSomething() { static_cast<Child*>(this)->doSomething_Impl(); }
private:
Base() = default;
friend Child;
};
struct Child1 : public Base<Child1> {
void doSomething_Impl() { std::cout << "Child1 implementation\n"; }
};
struct Child2 : public Base<Child2> {
void doSomething_Impl() { std::cout << "Child2 implementation\n"; }
};
struct Child3 : public Base<Child3> {
void doSomething_Impl() { std::cout << "Child3 implementation\n"; }
};
// ... so on
class SomeLogicClass
{
Type mClassId{ Type::base };
Child1 mChild1;
Child2 mChild2;
Child3 mChild3;
// ... child3 so on!
public:
Type getId() const { return mClassId; }
void setId(Type id) { mClassId = id; } // run time depended!
template <class Func>
void apply(Func func)
{
switch (mClassId){
case Type::child1: func(mChild1); break;
case Type::child2: func(mChild2); break;
case Type::child3: func(mChild3); break;
default: // error case!
break;
}
}
};
struct DoSomethingCaller
{
template <class T>
void operator()(T & childInstance){
childInstance.doSomething();
}
};
void test(SomeLogicClass& ob, Type id)
{
ob.setId(id);
ob.apply(DoSomethingCaller{});
// Starting with C++14, you can also simply write:
//ob.apply([](auto & childInstance){ childInstance.doSomething(); });
}
int main()
{
SomeLogicClass ob;
test(ob, Type::child1);
test(ob, Type::child2);
test(ob, Type::child3);
}
Notice how the new function apply() replaces your getInstance(). But instead of attempting to return the child type, it accepts some generic operation that it applies to the correct child. The functor passed in needs to cope (i.e. compile) with all possible child types. Since all of them have a doSomething() method, you can simply use a templated functor (DoSomethingCaller). Before C++14, unfortunately, it cannot simply be a polymorphic lambda but needs to be a proper struct (DoSomethingCaller) outside of the function.
If you care to do so, you can also restrict DoSomethingCaller to the CRTP base class Base<T>:
struct DoSomethingCaller
{
template <class T>
void operator()(Base<T> & childInstance){
childInstance.doSomething();
}
};
which might make it a bit more readable.
Depending on how strict the "no external libraries" restriction is, maybe only boost is not allowed but a single external header (that can be simply included in the code base as any other header file) would be possible? If yes, you might want to also have a look at variant-lite. It aims to be a C++98/C++11 compatible replacement for std::variant.
|
73,355,369
| 73,358,075
|
What does (import) in .wat mean?
|
The Problem
I have been fiddling around with wasm all day; now (import $import0 "env" "_Znaj" (param i32) (result i32)) popped up in my .wat, and it breaks my code.
The Error Message
The exact error I get is:
Uncaught (in promise) LinkError: import object field '_Znaj' is not a Function
JavaScript implementation
This is how I try to use it:
const importObject = {
env: {
__memory_base: 0,
__table_base: 0,
memory: new WebAssembly.Memory({initial: 1})
}
}
fetch('wasmPrimes.wasm').then((response) =>
response.arrayBuffer()
).then((bytes) =>
WebAssembly.instantiate(bytes, importObject)
).then((results) => {
primes = results.instance.exports._Z10wasmPrimesi(amt);
});
Pic & Pastebin
Here is a pic of my C++ code and the .wat (please follow the link, I don't have 10 rep yet).
Or heres a pastebin of the entire .wat code: https://pastebin.com/6wm7sHLG
C++ Source for the wasm Module
And here's the C++ for anyone who didn't want to open the pic:
int* wasmPrimes(int amt)
{
int num = 1;
int* primes = new int[amt];
if (amt > 0) primes[0] = 2;
amt --;
for (int i = 0; i < amt; i++)
{
bool prime = false;
while (!prime)
{
num++;
prime = true;
for (int j = 0; j < i+1; j++)
{
if (num%primes[j]==0)
{
prime = false;
break;
}
}
}
primes[i+1] = num;
}
return primes;
}
|
_Znaj is the mangled name of the array allocation function (operator new[](unsigned int)) that the new operator calls when creating new arrays; at least some compilers do it that way, and whatever compiler you used did the same. This _Znaj is then linked either dynamically or statically. In your case it is dynamic for whatever reason, and on most online WASM compilers it is supplied through the importObject, which you either overwrite here or which was simply not supplied by your environment. This is speculative, because I do not know what kind of environment you used.
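One way to avoid the import altogether (a rough sketch only, not specific to any toolchain, and only acceptable if never freeing the allocations is fine for your use case) is to define the array allocation functions inside the module itself, e.g. with a tiny bump allocator, so nothing has to come from env:
#include <cstddef>
// minimal bump allocator living inside the wasm module, so the linker no longer
// needs to import an allocator from the host environment
alignas(16) static unsigned char arena[1 << 16];
static std::size_t arena_used = 0;
void* operator new[](std::size_t size)
{
    void* p = arena + arena_used;
    arena_used += (size + 15u) & ~std::size_t(15);  // keep 16-byte alignment
    return p;
}
void operator delete[](void*) noexcept
{
    // memory is never reclaimed in this sketch
}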
|
73,355,546
| 73,356,425
|
Error passing Eigen matrix to a function in C++: "no instance of overloaded function matches the argument list"
|
I have a script that is working fine, but when I try to write a function with the same script I get the error "no instance of overloaded function "aapx" matches the argument list".
I know that an Eigen::Matrix should always be passed by reference to a function, so I did. I thought maybe the issue is that I am initializing an Eigen::Matrix<std::complex<double> and passing an Eigen::MatrixXcd, but that doesn't seem to fix the error either.
The function
void aapx(const Eigen::Ref<const Eigen::MatrixXcd>& uh, Eigen::Ref<const Eigen::MatrixXcd>& vh, Eigen::Ref<const Eigen::MatrixXcd>& ph){
//some calculations
}
The main code looks like:
static const int nx = 10;
static const int ny = 10;
Eigen::Matrix<std::complex<double>, (ny+1), (nx)> uh;
uh.setZero();
Eigen::Matrix<std::complex<double>, (ny+1), (nx)> vh;
vh.setZero();
Eigen::Matrix<std::complex<double>, (ny+1), nx> ph;
ph.setZero();
aapx(uh,vh,ph); //ERROR
I can post the full code if necessary. The full error:
no instance of overloaded function "aapx" matches the argument list
argument types are: (Eigen::Matrix<std::_Complex<double>, 11, 10, 0, 11, 10>, Eigen::Matrix<std::_Complex<double>, 11, 10, 0, 11, 10>, Eigen::Matrix<std::_Complex<double>, 11, 10, 0, 11, 10>)
|
Your second and third arguments, vh and ph, need to be
Eigen::Ref<const Eigen::MatrixXcd> vh
(without the reference) and not
Eigen::Ref<const Eigen::MatrixXcd>& vh
because a Ref already acts as a reference: you do not want a reference to the Ref, you want the Ref to the matrix (and a non-const lvalue reference to a Ref cannot bind to the temporary Ref created from your Matrix arguments).
Check out this answer: Correct usage of the Eigen::Ref<> class
|
73,355,569
| 73,355,641
|
Extract data from templated derived class using virtual base class function
|
I have a Device object that can have 1 or more State objects. I do not want to limit what sort of state the State objects can describe so I've templated the value of the State objects. Since I want each Device to keep a collection of these State objects, I've derived them from a GenericState class.
I'd like to be able to interact with pointers to this GenericState class to read and write the templated value; however, templated virtual functions are not supported. My solution was to declare a pure virtual 'visitor' function in the base class that takes in a function with a void* argument. The templated derived class implements the virtual function and calls the passed-in function with the value of the State. With this approach, I leave it up to the caller to choose what to do with the value (cast to a type, read, write, etc). Code snippets below.
My question is:
What are the issues with this sort of approach aside from it seeming 'hacky' and not the most readable?
Is there another, better approach to accomplish my end goal? I've looked into std::variant and std::any, but std::variant doesn't seem as generic and std::any seems inappropriate. I've also considered static/dynamic casting, but I'm not sure about their overhead.
My Device class that 'owns' a collection of states:
class Device {
std::list<std::unique_ptr<GenericState>> states;
...
}
The GenericState and State classes themselves:
class GenericState {
public:
std::string name;
virtual ~GenericState() = default;
virtual void visitValue(std::function<void (void*)> func) = 0;
protected:
GenericState(std::string name): name(name) {}
};
template<typename T>
class State: public GenericState {
protected:
T value;
public:
State(const std::string& name, const T& value): GenericState(name), value(value) {}
const std::string getName() {return name; }
const T getValue() { return value; }
// Calls the provided function with a reference to the value for read/write
void visitValue(std::function<void (void*)> func) override {
func(&value);
}
};
An example of how a GenericState pointer can be used to read/write the State's value:
State state = State(name, 5.0);
GenericState* genericState = &state;
// state.value = 5.0
double newVal = 0.2;
genericState->visitValue([&newVal](void* val){*(double *)val = newVal;});
// state.value = 0.2
double testVal = 0.0;
genericState->visitValue([&testVal](void* val){testVal = *(double *)val;});
// testVal = 0.2
Thanks.
|
My - maybe opinionated - observation is that people overuse virtual, maybe because of how C++ is usually taught. virtual is very useful if you have to provide 30-years forward compatibility and module load without restart in a telco system; it's less useful when concrete types are known, esp. when you recompile the entire project after each change.
Here the caveat is, you'll lose all the guarantees that type safety gives you. At each point where you call visitValue(), you'll have to specify the type of the value, because that void* hides it. The idea of type safety is, you should have type checks (think of them as 'unit tests' in the form of type specifications) and these should fail if you miss something. Now, the key observation is, you will likely write genericState->visitValue(); way more often than the number of States. So you'd likely rather list the states once, e.g. in a variant.
As soon as you have it in a variant, you can simply visit it and have the concrete type - from that point, you have type safety. All this you win for listing all the states once.
using GenericStateVar = std::variant<State, State2, State3>;
GenericStateVar genericState(State(name, 5.0));
double newVal = 0.2;
std::visit([&](auto&& state) { // you might limit the capture here
state.setValue(newVal); // consider making value public
}, genericState);
double testVal = 0.0;
std::visit([&](auto&& state) { // you might limit the capture here
testVal = state.getValue(); // consider making value public
}, genericState);
It's so simple with variant, nothing else is needed. Also, variant tends to be somewhat faster than virtual, as the former is aware of all possible types (technically, it's a switch vs. a function pointer).
|
73,355,693
| 73,355,870
|
How to pass raw pointer of unique_ptr to a function that takes in unique_ptr?
|
#include <iomanip>
#include <iostream>
#include <memory>
#include <string>
#include <type_traits>
#include <utility>
class Res {
std::string s;
public:
Res(std::string arg) : s{ std::move(arg) } {
std::cout << "Res::Res(" << s << ");\n";
}
~Res() {
std::cout << "Res::~Res();\n";
}
private:
friend std::ostream& operator<< (std::ostream& os, Res const& r) {
return os << "Res { s = " << r.s << "; }";
}
};
// ptr is used just to read the content of the ptr.
// No writing is done. fun2 calls another api
// lets say api_fun that requires unique_ptr
void fun2(std::unique_ptr<Res>& uniq_ptr){
// api_fun(uniq_ptr);
std::cout << uniq_ptr.get() << '\n';
}
void fun1(Res* ptr){
//std::unique_ptr<Res> tt(ptr);
//fun2(std::move(tt));// this deletes the mem twice.
}
int main()
{
std::unique_ptr<Res> up(new Res("Hello, world!"));
fun1(up.get());
// up will be used here too
}
I am working on a project that has a unique_ptr variable, let's say up. This unique_ptr
up is passed to a function fun1 as a raw pointer. Now inside fun1 I have to call function fun2, but it takes a unique_ptr.
As mentioned in the comment of fun2, this function only reads the content of the pointer; no modification is done.
How do I pass a unique_ptr to fun2 from a raw pointer?
P.S.: Is there a solution without modifying the API definition?
Edit: fun2 can take std::unique_ptr&
|
Instead of passing the address obtained with get(), you must release the ownership with release():
void api_fun(std::unique_ptr<Res> const&);
void fun2(std::unique_ptr<Res>& uniq_ptr){
api_fun(uniq_ptr);
std::cout << uniq_ptr.get() << '\n';
}
void fun1(Res* ptr){
std::unique_ptr<Res> tt(ptr);
fun2(tt);
tt.release();
}
int main()
{
std::unique_ptr<Res> up(new Res("Hello, world!"));
auto p = up.release();
fun1(p);
up.reset(p);
std::cout << "All good" << std::endl;
}
but these fun1 and fun2 are not fun at all for anyone who is going to work with it later ;)
Surprisingly it looks exception-safe.
|
73,355,758
| 73,355,804
|
Why does default constructor only work with class pointers?
|
I've been messing around with a default constructor example inspired by this answer. This one works fine:
class Foo {
public:
int x;
Foo() = default;
};
int main() {
for (int i = 0; i < 100; i++) {
Foo* b = new Foo();
std::cout << b->x << std::endl;
}
But when I try this out with a class instance on the stack it does not! For example, if I instantiate the class like Foo b I get an error saying uninitialised local variable 'b' used.
However, when I change the constructor to Foo() {}, instantiation with Foo b works fine, and I see random garbage from b.x as expected.
Why doesn't a default constructor work when I instantiate a class on the stack?
I compiled the code with MSVC, C++17:
|
It's because x is uninitialized. Reading uninitialized variables makes your program have undefined behavior which means it could stop "working" any day. It may also do something odd under the hood that you don't realize while you think everything is fine.
These all zero-initialize x:
Foo* b = new Foo();
Foo* b = new Foo{};
Foo b{};
while these don't:
Foo *b = new Foo;
Foo b;
MSVC may not catch and warn about all cases where you leave a variable uninitialized and read it later. It's not required to do so - which is why you may get the warning in one case but not the other.
|
73,355,940
| 73,644,905
|
Removing the desired access from the pre operation routine in kernel mode, leaves the process in eternal suspension
|
I am new to kernel and c++ development, but I am trying to develop a handler test in which the PROCESS_SUSPEND_RESUME flag of OperationInformation->Parameters->CreateHandleInformation.DesiredAccess can be removed in the pre-operation routine for a specific process (notepad.exe )
if (OperationInformation->Operation == OB_OPERATION_HANDLE_CREATE)
{
OperationInformation->Parameters->CreateHandleInformation.DesiredAccess &= ~0x0001;
OperationInformation->Parameters->CreateHandleInformation.DesiredAccess &= ~0x0800;
KdPrint(("[OperationInformation->Operation]: OB_OPERATION_HANDLE_CREATE\r\n"));
}
I can remove the flag 0x0001 which is PROCESS_TERMINATE and 0x0800 which is PROCESS_SUSPEND_RESUME but when the process is created it stays suspended forever.
My goal for the test is to run notepad.exe normally and prevent the process from being suspended or terminated.
I am using Visual Studio 2019 on the host computer.
The physical target PC and host are running Windows 10 1909
|
I found an easy solution. Simply run notepad.exe first, and only then instruct the kernel driver to remove the flag (0x0800). Notepad.exe then could not be suspended or resumed.
|
73,356,262
| 73,357,095
|
C++ Why are my Objects being destroyed, and sometimes twice in a row when using range based for loop
|
I'm a beginner programmer looking to understand why my objects are being deleted, sometimes even twice. I'm trying to avoid creating them on the heap for this project, as that is a more advanced topic I will try at a later time.
What's causing the book1, book2, etc. objects to be instantly deleted?
Also, it sometimes outputs the same book object being deleted twice in a row for some reason.
The same effect happens when using 'case 3: ' in the switch statement through the menu.
#include <iostream>
#include <vector>
using namespace std;
class Book
{
public:
// Constructor Prototype
Book(string author = "Auth", string title = "Title", string publisher = "Pub", float price = 1.99, int stock = 1);
// Destructor Prototype
~Book();
// Methods
string get_title() const
{
return title;
}
private:
string author;
string title;
string publisher;
float price;
int stock;
};
// Constructor definition
Book::Book(string author, string title, string publisher, float price, int stock)
: author{author}, title{title}, publisher{publisher}, price{price}, stock{stock}
{
cout << "Book: " << title << " was created." << endl;
}
// Destructor definition
Book::~Book()
{
cout << "Book: " << title << " has been destroyed." << endl;
}
// Functions
void showMenu();
void getChoice();
void userChoice(int choice);
void showBooks(vector<Book> bookList);
// Global Vector
vector<Book> bookList;
int main()
{
// Creating Book objects for testing purposes
Book book1("Marcel Proust", "In Search of Lost Time", "Pub", 14.99, 5);
Book book2("James Joyce", "Ulysses", "Pub", 25.99, 4);
Book book3("Miguel de Cervantes", "Don Quixote", "Pub", 35.99, 3);
Book book4("Gabriel Garcia Marquez", "One Hundred Years of Solitude", "Pub", 100.99, 2);
Book book5("F. Scott Fitzgerald", "The Great Gatsby", "Pub", 49.99, 1);
// Pushing book1-5 into the vector of Book objects
bookList.push_back(book1);
bookList.push_back(book2);
bookList.push_back(book3);
bookList.push_back(book4);
bookList.push_back(book5);
while(true)
{
showMenu();
getChoice();
}
return 0;
}
void showMenu()
{
cout << "\tMENU" << endl;
cout << "1. Purchase Book" << endl;
cout << "2. Search for Book" << endl;
cout << "3. Show Books available" << endl;
cout << "4. Add new Book" << endl;
cout << "5. Edit details of Book" << endl;
cout << "6. Exit" << endl;
}
void getChoice()
{
int choice;
cout << "Enter your choice (1-6): ";
cin >> choice;
userChoice(choice);
}
void userChoice(int choice)
{
switch(choice)
{
case 1:
cout << "Case 1 called." << endl;
break;
case 2:
cout << "Case 2 called." << endl;
break;
case 3:
cout << "Case 3 called." << endl;
showBooks(bookList);
break;
case 4:
cout << "Case 4 called." << endl;
break;
case 5:
cout << "Case 5 called." << endl;
break;
case 6:
cout << "Case 6 called." << endl;
exit(0);
}
}
void showBooks(vector<Book> bookList)
{
for (Book bk : bookList)
{
cout << "Book: " << bk.get_title() << endl;
}
}
Edit:
Why does it delete the same object 2-3 times in a row sometimes?
output (in this case it was "In Search of Lost Time"):
Book: In Search of Lost Time was created.
Book: Ulysses was created.
Book: Don Quixote was created.
Book: One Hundred Years of Solitude was created.
Book: The Great Gatsby was created.
Book: In Search of Lost Time has been destroyed.
Book: In Search of Lost Time has been destroyed.
Book: Ulysses has been destroyed.
Book: In Search of Lost Time has been destroyed.
|
As per the various suggestions in the comments, I have put the program together to suit your needs. It will work without any unwanted allocations.
Changes I made are:
1. Reserving the vector for 5 elements.
2. Emplacement (emplace_back instead of push_back).
3. Passing the vector as a reference to the function.
4. Taking the vector elements by reference while iterating.
#include <iostream>
#include <string>
#include <vector>
using namespace std;
class Book
{
public:
// Constructor Prototype
Book(string author = "Auth", string title = "Title", string publisher = "Pub", float price = 1.99f, int stock = 1);
// Destructor Prototype
~Book();
// Methods
string get_title() const
{
return title;
}
private:
string author;
string title;
string publisher;
float price;
int stock;
};
// Constructor definition
Book::Book(string author, string title, string publisher, float price, int stock)
: author( author ), title( title ), publisher( publisher ), price( price ), stock( stock )
{
cout << "Book: " << title << " was created." << endl;
}
// Destructor definition
Book::~Book()
{
cout << "Book: " << title << " has been destroyed." << endl;
}
// Functions
void showMenu();
void getChoice();
void userChoice(int choice);
void showBooks(vector<Book>& bookList);
// Global Vector
vector<Book> bookList;
int main()
{
bookList.reserve(5);
bookList.emplace_back("Marcel Proust", "In Search of Lost Time", "Pub", 14.99f, 5);
bookList.emplace_back("James Joyce", "Ulysses", "Pub", 25.99f, 4);
bookList.emplace_back("Miguel de Cervantes", "Don Quixote", "Pub", 35.99f, 3);
bookList.emplace_back("Gabriel Garcia Marquez", "One Hundred Years of Solitude", "Pub", 100.99f, 2);
bookList.emplace_back("F. Scott Fitzgerald", "The Great Gatsby", "Pub", 49.99f, 1);
while (true)
{
showMenu();
getChoice();
}
return 0;
}
void showMenu()
{
cout << "\tMENU" << endl;
cout << "1. Purchase Book" << endl;
cout << "2. Search for Book" << endl;
cout << "3. Show Books available" << endl;
cout << "4. Add new Book" << endl;
cout << "5. Edit details of Book" << endl;
cout << "6. Exit" << endl;
}
void getChoice()
{
int choice;
cout << "Enter your choice (1-6): ";
cin >> choice;
userChoice(choice);
}
void userChoice(int choice)
{
switch (choice)
{
case 1:
cout << "Case 1 called." << endl;
break;
case 2:
cout << "Case 2 called." << endl;
break;
case 3:
cout << "Case 3 called." << endl;
showBooks(bookList);
break;
case 4:
cout << "Case 4 called." << endl;
break;
case 5:
cout << "Case 5 called." << endl;
break;
case 6:
cout << "Case 6 called." << endl;
exit(0);
}
}
void showBooks(vector<Book>& bookList)
{
for (const auto& book : bookList){
cout << "Book: " << book.get_title() << endl;
}
}
|
73,356,618
| 73,431,607
|
How should I configure clangd to make it scan the library I download with CMake FetchContent?
|
I use CMake FetchContent to download nlohmann/json. But my clangd doesn't scan the library after downloading. So how should I configure my clangd?
my CMakeLists.txt:
cmake_minimum_required(VERSION 3.11)
project(ExampleProject LANGUAGES CXX)
include(FetchContent)
FetchContent_Declare(json URL https://github.com/nlohmann/json/releases/download/v3.11.2/json.tar.xz)
FetchContent_MakeAvailable(json)
add_executable(example main.cc)
target_link_libraries(example PRIVATE nlohmann_json::nlohmann_json)
and my code main.cc:
#include <iostream>
#include <nlohmann/json.hpp>
using json = nlohmann::json;
int main()
{
json object = { { "one", 1 }, { "two", 2 } };
std::cout << object << '\n';
return 0;
}
my clangd says:
main.cc|2 col 10-29 error| 'nlohmann/json.hpp' file not found
main.cc|4 col 14-22 error| Use of undeclared identifier 'nlohmann'
main.cc|8 col 5-9 error| Unknown type name 'json'
|
Now I know how to solve this problem.
When using CMake, set CMAKE_EXPORT_COMPILE_COMMANDS to 1 (for example by passing -DCMAKE_EXPORT_COMPILE_COMMANDS=1 when configuring, or by adding set(CMAKE_EXPORT_COMPILE_COMMANDS ON) to the top-level CMakeLists.txt) to make CMake generate the file compile_commands.json. Clangd automatically picks up this file and follows it to find the third-party libraries.
|
73,356,755
| 73,356,819
|
Is it necessary to avoid memory leak when returning pointer by using shared_ptr?
|
I have two functions for converting a char array from gb2312 to utf-8, like:
#include <windows.h>
#include "memory.h"
#include <wchar.h>
#include <iostream>
using namespace std;
//GB2312 to UTF-8
char* G2U(const char* gb2312)
{
int len = MultiByteToWideChar(CP_ACP, 0, gb2312, -1, NULL, 0);
wchar_t* wstr = new wchar_t[len + 1];
memset(wstr, 0, len + 1);
MultiByteToWideChar(CP_ACP, 0, gb2312, -1, wstr, len);
len = WideCharToMultiByte(CP_UTF8, 0, wstr, -1, NULL, 0, NULL, NULL);
char* str = new char[len + 1];
//what about using shared_ptr to release resource but how to modify following code?
//shared_ptr<char> str(new char[len + 1], default_delete<char[]>());
memset(str, 0, len + 1);
WideCharToMultiByte(CP_UTF8, 0, wstr, -1, str, len, NULL, NULL);
if (wstr) delete[] wstr;
return str;
}
I read this piece of code on a website and I was wondering if I should replace char* str = new char[len + 1]; with a shared_ptr. If so, how should I modify the following code, memset and WideCharToMultiByte? P.S.: is there some method usable from C, not C++?
|
Instead of managing a raw pointer, or even a shared_ptr to manage a char pointer, you can simply use std::vector.
It has a constructor that accepts a size and a value (thus you don't need memset). You can use std::vector::data to access the underlying data buffer.
Below you can see an example for str. A similar solution can be applied to wstr.
#include <vector>
std::vector<char> str(len + 1, 0); // allocate size and set all elements to 0
//----------------------------------------vvvvvvvvvv
WideCharToMultiByte(CP_UTF8, 0, wstr, -1, str.data(), len, NULL, NULL);
Note:
In general it could be more straightforward using std::string. But until C++17 std::string::data returns a const char* which is incompatible with the Win32 API pointer which is used to write data into.
You can still return std::string from your G2U function, by using:
return std::string(str.begin(), str.end());
If you can use C++17 you can use std::string instead of std::vector<char> and then you will be able to return str directly.
Update:
Regarding the "P.S." at the end of your question - as far as I know the c language has no mechanism for automatic heap memory management.
|
73,356,985
| 73,374,648
|
C++ Restricting variable type to multiple base classes
|
Consider the following classes:
class A {};
class B {};
class C {};
class D : public A, public B, public C {};
class E : public A, public C {};
class F : public A {};
I want to write a variable type that only accepts types which derive from both A and B (in this case only D) so that the following hold:
T var;
var = D(); // valid
var = E(); // invalid, does not derive from B
var = F(); // invalid, does not derive from B
I could do the following and use G as my type. G would act as a sort of 'type grouper'.
class G : public A, public B {};
class D : public G, public C {};
However if I want to apply this to any combination of base types or with any number of base types, I would need a large number of these type groupers. Is there a way to implement a multiple base type requirement without manually generating type groupers like I showed?
I do not have a specific use case that requires this, I am purely curious whether this is possible.
EDIT: Accidentally used F twice. Switched to G for the second time.
|
Here is a type that references (points to) an object that derives from both A and B. The simplest barebones version is
struct AB
{
A* a;
B* b;
template <class D> AB(D* d) : a(d), b(d) {};
template <class D> AB& operator=(D* d) { a=d; b=d; return *this;}
};
Now you can have:
D d;
E e;
AB ab1(&d); // OK
AB ab2(&e); // Not OK
This can be templatized and expanded:
template <typename ... As>
struct Combine
{
std::tuple<As * ...> as;
template <class D> Combine(D* d) : as(replicate<sizeof...(As)>(d)) {}
// etc etc --- writing `replicate` is left as an exercise
};
One could add data hiding, accessors, static asserts, concepts, smart pointers and whatnot on top of this, but the idea is still the same. If you derive from A and B, you have an A part and a B part. Let me reference both.
Providing a pair of pointers/references to A and to B is the only reasonable interface one could expect from such a type (without knowing specific details about A and B).
Ostensibly one can also have a type that copies, rather than references.
struct CopyingAB : A, B
{
template <class D> CopyingAB(const D& d) : A(d), B(d) {};
template <class D> CopyingAB& operator=(const D& d)
{
A::operator=(d);
B::operator=(d);
return *this;
}
};
But you should never want or need this, because object slicing is a bug, not a feature.
There are probably no real use cases for the referencing/pointing version either, but it is not inherently evil in and of itself, so let it be.
|
73,357,204
| 73,357,231
|
Why do we use this-> inside constructor of C++ and not this.(DOT)
|
Rectangle::Rectangle(Rectangle &r)
{
this.length=r.length;
this.breadth=r.breadth;
}
I used this. instead of this-> and it gives an error:
[Error] request for member 'breadth' in '(Rectangle*)this', which is of pointer type 'Rectangle*' (maybe you meant to use '->' ?)
So does this mean classes are sort of like pointers? Or am I lacking some concepts? Please help me understand.
|
According to docs:
The expression this is an rvalue (until C++11) / a prvalue (since C++11) expression whose value is the address of the implicit object parameter
So, this here is a pointer that points to the address where the current instance of class Rectangle is stored.
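A minimal illustration with the Rectangle members from your snippet (the two assignments below are equivalent ways of going through that pointer):
Rectangle::Rectangle(Rectangle &r)
{
    this->length = r.length;      // '->' dereferences the pointer and accesses the member
    (*this).breadth = r.breadth;  // the same thing written explicitly: dereference, then '.'
}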
|
73,357,455
| 73,497,119
|
How to correctly use the update() clause in OpenMP
|
I have a program that was originally executed sequentially, and now I'm trying to parallelize it via OpenMP offloading. The thing is that when I use the update clause, depending on the case, if I include the size of the array I want to move, it returns an incorrect result, but other times it works. For example, this pragma:
#pragma omp target update from(image[:bands])
Is not the same as:
#pragma omp target update from(image)
What I want to do is move the whole thing. Suppose the variable was originally declared in the host as follows:
double* image = (double*)malloc(bands*sizeof(double));
And that these update pragmas are being called inside a target data region where the variable image has been mapped like this:
#pragma omp target data map(to: image[:bands]) {
// the code
}
I want to move it to the host to do some work that cannot be done in the device. Note: The same thing may happen with the "to" update pragmas, not only the "from".
|
Well, I don't know why no one from OpenMP answered this question, as the answer is pretty simple (I say this because they don't have a forum anymore and this is supposed to be the best place to ask questions about OpenMP...). If you want to copy data that was dynamically allocated through pointers, you have to use the omp_target_memcpy() function.
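For illustration, a hedged sketch of what that can look like for a buffer such as image (this manages the device copy manually with omp_target_alloc/omp_target_memcpy instead of a target data region; adapt the device numbers and sizes to your setup):
#include <omp.h>
// image: host buffer of 'bands' doubles, as in the question
int host = omp_get_initial_device();
int dev  = omp_get_default_device();
double* d_image = (double*)omp_target_alloc(bands * sizeof(double), dev);
// host -> device; argument order: dst, src, length, dst_offset, src_offset, dst_device, src_device
omp_target_memcpy(d_image, image, bands * sizeof(double), 0, 0, dev, host);
// ... run the target region on d_image, e.g. with is_device_ptr(d_image) ...
// device -> host
omp_target_memcpy(image, d_image, bands * sizeof(double), 0, 0, host, dev);
omp_target_free(d_image, dev);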
|
73,357,492
| 73,357,576
|
How to initialize the LeastMaxValue template param in counting_semaphore?
|
I have a use case where I need to use counting_semaphore as a data member in a class. If it were a global variable, I could've omitted the template argument, and it would've been default initialized. But as mentioned here, in case of a member variable, the template argument needs to be specified, and in our case it has to be a compile-time constant.
So, I'm not sure what to initialize the value of LeastMaxValue to. Are there any heuristics you use for it, or is there a way I can still use the implementation-defined default value?
|
Ideally, you would look at how the data member will be used and determine an upper bound on what the semaphore needs to count. This upper bound is an appropriate LeastMaxValue.
It is not always possible to find such a bound, though. If you have no way of bounding the maximum the data member needs to handle, you could use the largest possible value, std::numeric_limits<std::ptrdiff_t>::max().
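A minimal sketch of what the member declaration could look like in that unbounded case (the class name is just a placeholder; requires C++20 <semaphore>):
#include <cstddef>
#include <limits>
#include <semaphore>
class Worker {
    // no tighter bound is known, so use the largest value the counter can represent
    std::counting_semaphore<std::numeric_limits<std::ptrdiff_t>::max()> slots{0};
};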
|
73,357,515
| 73,357,579
|
To check if there are any a and b that satisfy the given equation
|
Problem
You are given a positive integer X. Your task is to tell whether there exist two positive integers a and b (a > 0, b > 0) such that
2⋅a+2⋅b+a⋅b=X
If there exist positive integers a and b satisfying the above condition print YES, otherwise print NO.
Input Format
The first line of input will contain a single integer T, denoting the number of test cases.
Each test case consists of a single line containing a positive integer X.
Output Format
For each test case, output on a new line YES or NO.
You may print each character of the string in either uppercase or lowercase (for example, the strings yes, YES, Yes, and yeS will all be treated as identical).
#include <iostream>
using namespace std;
int main() {
int t,x,i,j;
cin>>t;
while(t--)
{
cin>>x;
for(i=1; i<x; i++)
{
for(j=1; j<x; j++)
{
if(2*i + 2*j + i*j == x)
{
cout<<"Yes"<<endl;
}
else
{
cout<<"No"<<endl;
}
}
}
}
return 0;
}
According to my code above, I will get No on every iteration of the if statement and Yes only for the one specific case that satisfies the equation, but I want it to print Yes or No only once. Please tell me how this can be done.
|
Take a flag variable and set it if the condition is satisfied. Print the result after the loop, not inside the loop.
#include <iostream>
using namespace std;
int checkSatisfiedNumber(int x); // forward declaration, definition below
int main() {
int t,x;
int flag;
cin>>t;
while(t--)
{
cin>>x;
flag = checkSatisfiedNumber(x);
if (flag == 1)
cout<<"Yes"<<endl;
else
cout<<"No"<<endl;
}
return 0;
}
int checkSatisfiedNumber(int x){
int i,j;
for(i=1; i<x; i++)
{
for(j=1; j<x; j++)
{
if(2*i + 2*j + i*j == x)
{
return 1;
}
}
}
return 0;
}
|
73,357,632
| 73,358,595
|
Limit chunks of combination in c++
|
I modified a code I've found on the internet to fit my needs. It calculates and prints all possible combinations of r elements in an array given size of N. Here's the code:
#include <iostream>
#include <vector>
void combinationUtil(std::vector<int> arr, std::vector<int> data, int start, int end, int index, int r);
void printCombination(std::vector<int> arr, int n, int r)
{
std::vector<int> data;
data.assign(r, 0);
combinationUtil(arr, data, 0, n-1, 0, r);
}
void combinationUtil(std::vector<int> arr, std::vector<int> data, int start, int end, int index, int r)
{
if (index == r)
{
for (int j = 0; j < r; j++)
std::cout << data.at(j) << " ";
std::cout << std::endl;
return;
}
for (int i = start; i <= end && end - i + 1 >= r - index; i++)
{
data.at(index) = arr.at(i);
combinationUtil(arr, data, i+1, end, index+1, r);
}
}
int main()
{
std::vector<int> arr = {1, 2, 3, 4, 5};
int r = 3;
int n = arr.size();
printCombination(arr, n, r);
}
The output of it is:
1 2 3
1 2 4
1 2 5
1 3 4
1 3 5
1 4 5
2 3 4
2 3 5
2 4 5
3 4 5
I can modify the start value to 1 so the output can start from value 2 like so:
2 3 4
2 3 5
2 4 5
3 4 5
How can I achieve a similar effect for the end. For example if I wanted it to end before calculating combinations starting with value 2. I want to see a result like:
1 2 3
1 2 4
1 2 5
1 3 4
1 3 5
1 4 5
I want to do this so I can utilize parallelization for a larger-scale function.
I hope I could relay the idea clear enough. Thanks in advance. (Code compiles with some casting warnings. I just left it like this for an easier read for the reader.)
|
I solved the issue by modifying another combination method I found on this site.
Here's the code for it:
#include <iostream>
#include <vector>
using namespace std;
vector<int> people;
vector<int> combination;
void pretty_print(const vector<int>& v) {
static int count = 0;
cout << "combination no " << (++count) << ": [ ";
for (int i = 0; i < v.size(); ++i) { cout << v[i] << " "; }
cout << "] " << endl;
}
void go(int offset, int k, int end, bool outermost = true) {
if (k == 0) {
pretty_print(combination);
return;
}
for (int i = offset; i <= people.size() - k; ++i) {
combination.push_back(people[i]);
go(i+1, k-1, end, false);
combination.pop_back();
if(outermost && i == end) return;
}
}
int main() {
int k = 3, end = 1;
people = {1, 2, 3, 4, 5};
go(0, k, end);
return 0;
}
offset controls the start and end controls the end.
The for loop at the outermost layer controls the first element selected for the combination, and it progresses as the function recurses.
The if statement inside the for loop checks whether the wanted limit has been reached and returns from the function prematurely as needed.
|
73,357,915
| 73,358,303
|
How to debug a dll using Visual Studio?
|
How can I debug a dll using visual studio?
I have the DLL source, pdb, etc.
I tried these options:
BOOL APIENTRY DllMain( HMODULE hModule,
DWORD ul_reason_for_call,
LPVOID lpReserved
)
{
void DebugBreak();
switch (ul_reason_for_call)
{
case DLL_PROCESS_ATTACH:
{
//...
}
break;
case DLL_PROCESS_DETACH:
{
//...
}
break;
}
return TRUE;
}
It launches the exe but doesn't inject the DLL; by default this exe doesn't load the DLL, I'm injecting it manually.
Is it possible for Visual Studio to attach to the DLL, so I can put breakpoints in it, view the call stack on a crash, etc.?
|
The quickest way to fix this is via the Modules Window in the Debugger:
Put a breakpoint after your LoadLibrary call.
Go to Debug->Windows->Modules in the menu bar to bring up the Modules window.
Search for your dll file in the list. In the Symbol Status column
it should read "Cannot find or open the PDB file".
Right click the dll and choose Load Symbols from the context menu.
Point it to the correct pdb file.
The Symbol Status should now change to "Symbols Loaded".
You should now be able to step into functions from the dll and put breakpoints.
|
73,358,448
| 73,369,784
|
Boost Log + Log rotation in another folder
|
Is there a possibility with Boost Log to write the history log files to a different folder than the current log file?
log
trace_2.log
history
trace_0.log
trace_1.log
I'm using an asynchronous sink and tried it via set_file_collector, but all logs are written to the /tmp/log folder, and only after closing the application is the file moved to /tmp/log/history:
sink->locked_backend()->set_file_name_pattern("/tmp/log/trace_%3N.log");
sink->locked_backend()->set_file_collector(boost::log::sinks::file::make_collector(
boost::log::keywords::target = "/tmp/log/history/"
));
When I try this without set_file_collector, the files are written to /tmp/log.
Thank you in advance!
|
I found the solution.
The configuration was not correct, so the setting enable_final_rotation = false did not work on my side.
Because of this, with each program exit the current log file was moved to the history folder, even if it had not reached the rotation size. I forgot this information in the post.
This config was required:
boost::shared_ptr< boost::log::sinks::text_file_backend > backend =
boost::make_shared< boost::log::sinks::text_file_backend >(
boost::log::keywords::enable_final_rotation = false,
boost::log::keywords::file_name = logPathAndFilename,
...
);
sink->locked_backend()->set_file_collector(boost::log::sinks::file::make_collector(
boost::log::keywords::target = logHistoryPath
));
sink->locked_backend()->scan_for_files();
core->add_sink(sink);
|
73,359,216
| 73,359,310
|
How to return nothing from an integer function in C++?
|
Consider the following code:
#include <iostream>
int test(int a){
if (a > 10){
return a;
}
else{
std::cout << "Error!";
return nothing;
}
}
int main(){
std::cout << test(9);
return 0;
}
What I want is for the integer function test(int a) to return a if a > 10, and otherwise print Error! and return nothing. But since this is an integer function, it must return an integer value. Is there a way to do this? (Also note that I don't want to use a void function.)
|
#include <stdexcept>
int test(int a){
if (a > 10){
return a;
}
else{
throw std::invalid_argument( "a is smaller or eq than 10" );
}
}
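A possible usage sketch of this approach, where the caller handles the "nothing" case by catching the exception:
#include <iostream>
int main() {
    try {
        std::cout << test(9) << '\n';   // test() throws for values <= 10
    } catch (const std::invalid_argument& e) {
        std::cout << "Error! (" << e.what() << ")\n";
    }
    return 0;
}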
|
73,359,293
| 73,359,861
|
brute force approach for union of two array
|
I wanted to apply brute force to find the union of two arrays. Theoretically it should work, but for some reason only the first array goes into the 3rd array (the 3rd array is for storing elements from array 1 and array 2), and the size of the 3rd array is getting increased from 8 to 12.
//find the union of two arrays
#include<iostream>
#include<vector>
using namespace std;
void uniarr(vector<int> &arr1, vector<int> &arr2)
{
int n=arr1.size();
int m=arr2.size();
vector<int> arr3(n+m);
cout<<" arr "<<arr3.size()<<endl;
int count=1;
for (int i = 0; i < n; i++)
{ count=1;
for (int j = 0; j < arr2.size(); j++)
{
if(arr1[i]==arr2[j])
{
if(count==1)
{
arr3.push_back(arr1[i]);
count++;
}
arr2.erase(arr2.begin()+j);
j=j-1;
}
}
if(count==1)
{
arr3.push_back(arr1[i]);
}
}
cout<<" arr "<<arr3.size()<<endl;
for (int i = 0; i < arr3.size(); i++)
{
cout<<" "<<arr3.at(i);
}
}
int main()
{
system("cls");
vector<int> arr1={3,1,4,6};
vector<int> arr2={1,2,5,4};
uniarr(arr1,arr2);
return 0;
}
|
You have 2 issues:
First one, initialization of arr3
std::vector<int> arr3(n+m); // create a vector of SIZE n+m (with value 0)
it should be
std::vector<int> arr3;
arr3.reserve(std::min(n, m)); // "optimization" to avoid future allocation
The second one is your last
if(count==1)
{
arr3.push_back(arr1[i]);
}
which adds to arr3 the elements from arr1 that are not found in arr2; you should remove it.
Demo
|
73,359,351
| 73,360,017
|
Why does my second 3D object not have four faces in Open GL
|
As the title says, I'm trying to model a simple giraffe out of array cubes in OpenGL with C++. I've got the concepts down, but ran into an issue: when I start on the neck, for some reason I lose 5 out of the 6 faces of my cube. The example I'm following doesn't result in this. I linked a small video below to show the visual result and I'm wondering what might be causing this. If there's an easier way to go about this as well, please do let me know.
Visual Result
Code Sample
#include <glut.h>
float angle[4];
GLfloat corners[8][3] = { {-0.5,0.5,-0.5},{0.5,0.5,-0.5},
{0.5,-0.5,-0.5},{-0.5,-0.5,-0.5},
{-0.5,0.5,0.5},{0.5,0.5,0.5},
{0.5,-0.5,0.5},{-0.5,-0.5,0.5} };
void drawFace(int a, int b, int c, int d) {
glBegin(GL_POLYGON);
glVertex3fv(corners[a]);
glVertex3fv(corners[b]);
glVertex3fv(corners[c]);
glVertex3fv(corners[d]);
glEnd();
}
void ArrayCube() {
glColor3f(1.0, 1.0, 1.0);
drawFace(0, 3, 2, 1);
glColor3f(1.0, 1.0, 1.0);
drawFace(3, 0, 4, 7);
glColor3f(1.0, 1.0, 1.0);
drawFace(2, 3, 7, 6);
glColor3f(1.0, 1.0, 1.0);
drawFace(1, 2, 6, 5);
glColor3f(1.0, 1.0, 1.0);
drawFace(4, 5, 6, 7);
glColor3f(1.0, 1.0, 1.0);
drawFace(5, 4, 0, 1);
}
void LowerNeck()
{
glPushMatrix();
glTranslatef(0.5, 0.25, -0.125);
glScalef(0.0, 0.5, 0.25);
ArrayCube();
glPopMatrix();
}
void MainBody()
{
glPushMatrix();
glScalef(1.25, 0.25, 0.5);
ArrayCube();
glPopMatrix();
}
void DrawGiraffe()
{
glRotatef(angle[0], 0.0, 1.0, 0.0);
MainBody();
LowerNeck();
}
void rotate() {
angle[0] += 1.0;
if (angle[0] > 360) angle[0] -= 360;
glutPostRedisplay();
}
void display() {
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.6, 0.6, 0.6, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
DrawGiraffe();
glutSwapBuffers();
}
void init() {
glClearColor(0.0, 0.0, 0.0, 0.0);
glColor3f(1.0, 1.0, 1.0);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 2.5);
}
int main(int argc, char** argv) {
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
glutInitWindowSize(500, 500);
glutInitWindowPosition(0, 0);
glutCreateWindow("Basic 3D");
glutDisplayFunc(display);
init();
glutIdleFunc(rotate);
glutMainLoop();
}
|
For the second object (the neck) you apply a scale transformation on x that scales the x component of all the following drawn vertices to 0.0:
glScalef(0.0, 0.5, 0.25);
That 0.0 should've probably been a 1.0.
That's the reason you only see one quad in the render video: That's the quad/face (actually two faces) which still have a dimension in Y and Z. The faces that have a dimension on x are squished to degenerate quads and not displayed at all.
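With that change applied, the LowerNeck function from the question would look like this:
void LowerNeck()
{
    glPushMatrix();
    glTranslatef(0.5, 0.25, -0.125);
    glScalef(1.0, 0.5, 0.25);   // non-zero X scale, so all six faces keep some extent
    ArrayCube();
    glPopMatrix();
}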
|
73,359,977
| 73,360,722
|
Why some C++ standard functions are missing literal exception specification or not marked as conditionally noexcept?
|
I've noticed that some standard functions having a wide contract such as functions in [iterator.range] conditionally do not throw exceptions, but they are not marked as conditionally noexcept.
EDIT: It is described in this paper:
Each library function having a wide contract, that the LWG agree cannot throw, should be marked as unconditionally noexcept.
If a library swap function, move-constructor, or move-assignment operator is conditionally-wide (i.e. can be proven to not throw by applying the noexcept operator) then it should be marked as conditionally noexcept. No other function should use a conditional noexcept specification.
But there are some functions that do not obey this guideline such as std::cbegin. And I did not find a reason in this paper.
Why?
Another case is that some functions should be non-throwing, but the standard doesn't say anything about it. Such as erase(q) (q denotes a valid dereferenceable constant iterator) of associative containers. This allows the implementation to throw any standard exception, according to [res.on.exception.handling#4]:
Functions defined in the C++ standard library that do not have a Throws: paragraph but do have a potentially-throwing exception specification may throw implementation-defined exceptions.170 Implementations should report errors by throwing exceptions of or derived from the standard exception classes ([bad.alloc], [support.exception], [std.exceptions]).
So if you want to swallow any implementation-defined exceptions they throw, you have to use a try-catch block.
std::set<int> s{1};
try
{
s.erase(s.cbegin());
}
catch (...) {}
It's ugly and inefficient, but necessary. So I also don't know of any benefit to this.
|
Here is a paper (quoted by some other standard library proposals) for reasons to not specify noexcept on some standard library functions: N3248: noexcept Prevents Library Validation
If a function that the standard says does not throw any exceptions does throw an exception, you have entered undefined behaviour territory. There is a bug in your code. Catching and swallowing it is certainly not the right thing to do.
For example in this code:
std::set<int> s;
try
{
s.erase(s.cbegin());
}
catch (...) {}
may make s.erase throw a C++ exception in debug mode since the preconditions are not met (s.cbegin() is not dereferenceable), making this run and seem to work unnoticed, but suddenly some other behaviour happens (like a crash or an infinite loop) in release mode.
If the standard mandated that this function was noexcept, the function could not throw an exception even in debug mode.
However, standard libraries are allowed to add noexcept specifiers even if not explicitly stated in the standard, which many do. This gives the freedom for a standard library implementor to do what is appropriate (e.g., noexcept(true) in <release_set>, but noexcept(false) in <debug_set>)
And, of course, if you know the preconditions are met (like if (!s.empty()) s.erase(s.cbegin());), you know an exception can't be thrown and don't need to write any exception handling.
See also: Why vector access operators are not specified as noexcept?
|
73,360,200
| 73,360,246
|
Why is C++ implicitly converting 0.0 to some extremly small 'random' value?
|
I'm trying to compare two class objects, which have both been initialized with 0.0, but for some reason C++ decides to convert the 0.0 to some extremely small value instead of keeping the 0.0, which makes the comparison fail, as the value it converts to is not always exactly the same.
Vector.cpp
#include "Vector.h"
// operator overloadings
bool Vector::operator==(const Vector &rhs) const
{
return x == rhs.x && y == rhs.y;
}
bool Vector::operator!=(const Vector &rhs) const
{
return x != rhs.x || y != rhs.y;
}
Vector.h
#pragma once
class Vector
{
private:
double x;
double y;
public:
// constructor
Vector(double x = 0.0, double y = 0.0){};
// operator overloading
bool operator==(const Vector &rhs) const;
bool operator!=(const Vector &rhs) const;
};
main.cpp
#include "Vector.h"
#include <cassert>
using namespace std;
int main()
{
Vector zero{};
// check default constructor
assert((Vector{0.0, 0.0} == zero));
What is going on here and how should it be rewritten?
I'm using the clang compiler if it makes any difference.
|
Your Vector class never initializes the x and y members.
Since the member variables are uninitialized, they will have an indeterminate value, which you should look at like it was random or garbage. Using indeterminate values of any kind in any way, leads to undefined behavior.
To initialize the member variables, use a constructor initializer list:
Vector(double x = 0.0, double y = 0.0)
: x{ x }, y{ y }
{
}
On another note, floating point arithmetic on a computer will lead to rounding errors. The more operations you perform, the larger the error will be. And sooner or later you will think that floating point math is broken.
All this rounding of course means that it's almost impossible to do exact comparisons of floating point values.
The common way is to use an epsilon, a small margin of error, for your comparisons.
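A minimal sketch of such a comparison (the helper name and tolerance value are just examples, not part of the original code):
#include <cmath>
// approximate equality within a small tolerance
bool almostEqual(double a, double b, double eps = 1e-9)
{
    return std::fabs(a - b) <= eps;
}
// Vector::operator== could then be written as
// return almostEqual(x, rhs.x) && almostEqual(y, rhs.y);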
|
73,360,380
| 73,360,869
|
Why does this innocent function cause a segfault?
|
I'm trying to code metaballs in C++/SFML, my program works just fine in a single thread. I tried to write an MRE to find the problem and here's what I got:
main.cpp
// main
#include <iostream>
#include "threader.h"
float func(float x, float y, float a, float b, float r) {
return 1.f / sqrt((x - a)*(x - a) + (y - b)*(x - b));
}
int main () {
sf::RenderWindow window(sf::VideoMode(2800, 1800), "");
window.setFramerateLimit(20);
sf::Event event{};
threader tf(window);
while (window.isOpen()) {
while (window.pollEvent(event)) {
switch (event.type) {
case sf::Event::Closed: {
window.close();
}
}
}
window.clear();
tf.parallel(func);
window.display();
}
}
threader.h
//threader_H
#pragma once
#include <array>
#include <thread>
#include <cmath>
#include <functional>
#include <SFML/Graphics.hpp>
class threader {
public:
int threadCount = 4;
std::array<std::thread, 4> threads;
sf::RenderWindow& window;
public:
explicit threader(sf::RenderWindow& w) : window(w) {};
void strip(int start, int end, const std::function<float(float, float, float, float, float)>& func);
void parallel(const std::function<float(float, float, float, float, float)>& func);
};
threader.cpp
// threader_CPP
#include "threader.h"
#include <iostream>
void threader::strip (int start, int end, const std::function<float (float, float, float, float, float)> &func) {
for (int X = start; X < end; X += 10) {
for (int Y = 0; Y < window.getSize().y; Y += 10) {
auto x = static_cast<float>(X);
auto y = static_cast<float>(Y);
x = x / 2800.f * 4 + 0 - 4 / 2.f;
y = -((1800.f - y) / 1800 * 4 + 0 - 4 / 2.f);
if (func(x, y, 1, 2, 3) > 1) {
sf::CircleShape circle(20);
circle.setPointCount(3);
circle.setFillColor(sf::Color::Cyan);
circle.setPosition(X, Y);
window.draw(circle);
}
}
}
}
void threader::parallel (const std::function<float (float, float, float, float, float)> &func) {
int start = 0;
int end = window.getSize().x / threadCount;
for (auto& t : threads) {
t = std::thread(&threader::strip, this, start, end, func);
start = end;
end += window.getSize().x / threadCount;
}
for (auto& t : threads) {
t.join();
}
}
Now for the explanation: threader is a class with two methods, strip, which calculates a function for a given strip of the window, and parallel, which creates threads to separately calculate my function for every strip. This code doesn't work (it segfaults).
But here's the catch: if I adjust the function func(...) in main to return 1.f / sqrt((x - a) + (y - b)), everything works just fine. What is happening? How can a simple calculation cause a segfault? Help please...
EDIT 1: Written in CLion, C++ 20.
EDIT 2: If anything here makes sense, please explain it to me.
|
Rather than try to draw to the window on multiple threads, I would instead use std algorithms to filter the points to draw circles, and draw them all on the main thread.
#include <algorithm>
#include <execution>
#include <utility>
#include <vector>
std::vector<std::pair<int, int>> getPoints(const sf::RenderWindow& window) {
std::vector<std::pair<int, int>> points;
for (int X = 0; X < window.getSize().x; X += 10) {
for (int Y = 0; Y < window.getSize().y; Y += 10) {
points.emplace_back(X, Y);
}
}
return points;
}
template<typename F>
auto filter(F f) {
// predicate for remove_if: true for points that should NOT get a circle drawn
return [f](const std::pair<int, int> & point) {
auto x = static_cast<float>(point.first);
auto y = static_cast<float>(point.second);
x = x / 2800.f * 4 + 0 - 4 / 2.f;
y = -((1800.f - y) / 1800 * 4 + 0 - 4 / 2.f);
return !(f(x, y, 1, 2, 3) > 1); // call the passed-in f; negate because remove_if drops matches
};
}
sf::CircleShape toCircle(int X, int Y) {
sf::CircleShape circle(20);
circle.setPointCount(3);
circle.setFillColor(sf::Color::Cyan);
circle.setPosition(X, Y);
return circle;
}
template <typename F>
void drawCircles(sf::RenderWindow& window, F f) {
auto points = getPoints(window);
auto end = std::remove_if(std::execution::par, points.begin(), points.end(), filter(f));
points.erase(end, points.end());
for (auto & [X, Y] : points) {
window.draw(toCircle(X, Y));
}
}
float func(float x, float y, float a, float b, float r) {
return 1.f / sqrt((x - a)*(x - a) + (y - b)*(x - b));
}
int main () {
sf::RenderWindow window(sf::VideoMode(2800, 1800), "");
window.setFramerateLimit(20);
sf::Event event{};
threader tf(window);
while (window.isOpen()) {
while (window.pollEvent(event)) {
switch (event.type) {
case sf::Event::Closed: {
window.close();
}
}
}
window.clear();
drawCircles(window, func);
window.display();
}
}
|
73,361,084
| 73,361,114
|
What is the best way to convert int to double in C++?
|
I know that there are many ways to convert int to double in C++, but what is the best way to do this?
|
Use static_cast<double>(expression).
In general, it is recommended to use C++ ways of casting (static_cast, dynamic_cast), instead of the old C-style casting, such as (double)int, (int)double.
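For example:
int n = 42;
double d = static_cast<double>(n);          // explicit and easy to search for
double avg = static_cast<double>(n) / 5;    // 8.4 -- avoids integer division (42 / 5 == 8)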
|
73,361,992
| 73,362,114
|
How to customize vcxproj generation by CMake
|
I want to use CMake to generate .vcxproj files. Then I open specific project/solution and work in VS as usual.
How do I customize default configurations, generated by VS generator? In particular, how do I specify in my cmake files (whatever it should be) that I need release runtime for "Debug" configuration? (i.e. /MD instead of /MDd).
As far as I understand, such an option is VS-specific (a VS/MSBuild flag) and thus is not naturally manipulated by CMake.
|
In particular, how do I specify in my cmake files (whatever it should be) that I need release runtime for "Debug" configuration? (i.e. /MD instead of /MDd).
This will work:
set(CMAKE_MSVC_RUNTIME_LIBRARY "MultiThreadedDLL"
CACHE STRING "MSVC runtime library selection")
Then all of the targets you build in all configurations will use -MD. See the documentation here: https://cmake.org/cmake/help/latest/variable/CMAKE_MSVC_RUNTIME_LIBRARY.html
The default value is MultiThreaded$<$<CONFIG:Debug>:Debug>DLL which uses -MDd for the Debug configuration and -MD for all others. This uses generator expressions, which are out of scope for this question.
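A minimal sketch of how this could look in a CMakeLists.txt (the project and source names are placeholders; the variable takes effect from CMake 3.15, which enables policy CMP0091):
cmake_minimum_required(VERSION 3.15)
project(MyApp CXX)

# Use the release DLL runtime (/MD) in every configuration, including Debug
set(CMAKE_MSVC_RUNTIME_LIBRARY "MultiThreadedDLL"
    CACHE STRING "MSVC runtime library selection")

add_executable(MyApp main.cpp)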
An alternative: you can lean in to CMake's defaults and use the RelWithDebInfo build config. If you need to override the optimization flags, you can always set CMAKE_CXX_FLAGS_RELWITHDEBINFO.
|
73,362,269
| 73,362,356
|
how do i link wininet library with cmake
|
This is c++ code to get IP address (main.cpp) (project -> Prueba2 ).
#include <iostream>
#include <windows.h>
#include <wininet.h>
std::string real_ip() {
HINTERNET net = InternetOpen("IP retriever",
INTERNET_OPEN_TYPE_PRECONFIG,
NULL,
NULL,
0);
HINTERNET conn = InternetOpenUrl(net,
"http://myexternalip.com/raw",
NULL,
0,
INTERNET_FLAG_RELOAD,
0);
char buffer[4096];
DWORD read;
InternetReadFile(conn, buffer, sizeof(buffer)/sizeof(buffer[0]), &read);
InternetCloseHandle(net);
return std::string(buffer, read);
}
int main() {
std::cout << real_ip() << std::endl;
return 0;
}
CMakeLists.txt file for compiling.
cmake_minimum_required(VERSION 3.22)
project(Prueba2)
set(CMAKE_CXX_STANDARD 20)
add_executable(Prueba2 main.cpp)
I have to link this library but I don't know how; this error appears. I know how to compile it with g++ by adding the library with -lwininet and it works correctly; I'm trying to do it with CMake now. Thank you for your help.
undefined reference to `__imp_InternetOpenA'
C:\Program Files\JetBrains\CLion 2022.1.3\bin\mingw\bin/ld.exe: C:/Users/JAVIER/CLionProjects/Prueba2/main.cpp:13: undefined reference to `__imp_InternetOpenUrlA'
C:\Program Files\JetBrains\CLion 2022.1.3\bin\mingw\bin/ld.exe: C:/Users/JAVIER/CLionProjects/Prueba2/main.cpp:23: undefined reference to `__imp_InternetReadFile'
C:\Program Files\JetBrains\CLion 2022.1.3\bin\mingw\bin/ld.exe: C:/Users/JAVIER/CLionProjects/Prueba2/main.cpp:24: undefined reference to `__imp_InternetCloseHandle'
|
You can use target_link_libraries:
...
add_executable(Prueba2 main.cpp)
target_link_libraries(Prueba2 wininet)
|
73,362,438
| 73,363,905
|
Problem with linking Boost 1.79 libs, builded with MinGW GCC, with CMake on Windows
|
I'm using Boost 1.79 and Windows 10. For building the Boost libs I use TDM MinGW. After trying to build my test program with CMake, I get the following error:
CMake Error at D:/CMake/share/cmake-3.24/Modules/FindPackageHandleStandardArgs.cmake:230 (message):
Could NOT find Boost (missing: log thread) (found suitable version
"1.79.0", minimum required is "1.79")
Call Stack (most recent call first):
D:/CMake/share/cmake-3.24/Modules/FindPackageHandleStandardArgs.cmake:594 (_FPHSA_FAILURE_MESSAGE)
D:/CMake/share/cmake-3.24/Modules/FindBoost.cmake:2376 (find_package_handle_standard_args)
CMakeLists.txt:16 (find_package)
My CMakeLists.txt:
cmake_minimum_required(VERSION 3.24)
set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
#set(CMAKE_C_COMPILER "D:/TDM-MinGW/bin/gcc.exe")
#set(CMAKE_CXX_COMPILER "D:/TDM-MinGW/bin/g++.exe")
set(Boost_DEBUG=ON)
set(Boost_USE_STATIC_LIBS ON)
set(Boost_USE_MULTITHREADED ON)
set(Boost_USE_STATIC_RUNTIME OFF)
project (testlib)
find_package(Boost 1.79 COMPONENTS log thread REQUIRED)
IF(Boost_FOUND)
INCLUDE_DIRECTORIES(SYSTEM ${Boost_INCLUDE_DIR})
LINK_DIRECTORIES(${Boost_LIBRARY_DIR})
MESSAGE("Boost information")
MESSAGE("Boost_INCLUDE_DIRS: ${Boost_INCLUDE_DIRS}")
MESSAGE("Boost_LIBRARY_DIRS: ${Boost_LIBRARY_DIRS}")
MESSAGE("Boost_Version: ${Boost_VERSION}")
MESSAGE("Boost Libraries: ${Boost_LIBRARIES}")
ENDIF()
include_directories(${Boost_INCLUDE_DIR})
link_directories(${Boost_LIBRARY_DIR})
add_executable(testlib src/main.cpp)
target_link_libraries(testlib PUBLIC ${Boost_LIBRARIES} ${CMAKE_THREAD_LIBS_INIT})
Boost libs were compiled by following command:
./b2 --build-type=complete -j 8 variant=debug address-model=64 link=static toolset=gcc install
|
Well, I solved my problem by setting Boost_DEBUG to ON. After analyzing the debug info it became clear that the problem was two empty variables: Boost_COMPILER and Boost_ARCHITECTURE. To solve the problem I just set these variables by looking at the full filename, for example:
We have the filename libboost_log-clang14-mt-x32-1_79.lib, and we need these parts: -clang14 and -x32. Look for these parts in your own filename and set them as CMake variables:
Boost_ARCHITECTURE = -x32
Boost_COMPILER = -clang14
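In the CMakeLists.txt that could look roughly like this (a sketch; the -clang14/-x32 suffixes are the ones from the example filename above, so substitute whatever your own library names show, e.g. a MinGW suffix):
set(Boost_COMPILER "-clang14")
set(Boost_ARCHITECTURE "-x32")
find_package(Boost 1.79 COMPONENTS log thread REQUIRED)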
|
73,362,585
| 73,362,622
|
C++ override specifier without virtual? Does override imply virtual?
|
Linked question is not the same - and does not even mention override
Edit: The new list of duplicates contains one legitimate duplicate, which I did not find from search.
I was not aware prior to asking this that the choice of whether or not to use virtual in derived class members was going to be a contentious issue for some.
I have just encountered some source code which looks like this:
class A
{
virtual void method();
};
class B : public A
{
void method() override;
};
I am unsure of how to interpret this, even after reading this.
Does override imply virtual here? void B::method() is not marked as a virtual function, but it is marked as override. So why does this work, and not result in a compilation error?
Is there any difference between the following? (In class B)
void method() override;
virtual void method() override;
|
The answer you're looking for is in https://en.cppreference.com/w/cpp/language/virtual
If some member function vf is declared as virtual in a class Base, and some class Derived, which is derived, directly or indirectly, from Base, has a declaration for member function with the same
name
parameter type list (but not the return type)
cv-qualifiers
ref-qualifiers
Then this function in the class Derived is also virtual (whether or not the keyword virtual is used in its declaration) and overrides Base::vf (whether or not the word override is used in its declaration).
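A small sketch illustrating the rule (the type names are just for demonstration):
#include <iostream>

struct Base { virtual void method() { std::cout << "Base\n"; } };
struct Derived : Base {
    void method() override { std::cout << "Derived\n"; }      // implicitly virtual
};
struct MoreDerived : Derived {
    void method() override { std::cout << "MoreDerived\n"; }  // still an override of a virtual function
};

int main() {
    MoreDerived d;
    Base& b = d;
    b.method();  // prints "MoreDerived": dispatch is dynamic without repeating `virtual`
}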
|
73,363,252
| 73,365,285
|
6-bit CRC datasheet confusion STMicroelectronics L9963E
|
I’m working on the SPI communication between a microcontroller and the L9963E. The datasheet of the L9963E shows little information about the CRC calculation, but mentions:
a CRC of 6 bits,
a polynomial of X6 + X4 + X3 + 1 = 0b1011001
a seed of 0b111000
The documentation also mentions in the SPI protocol details that the CRC is a value between 0x00 and 0x3F, and is "calculated on the [39-7] field of the frame", see Table 22.
I'm wondering: What is meant by "field [39-7]"? The total frame length is 40bits, out of this the CRC makes up 6 bits. I would expect a CRC calculation over the remaining 34 bits, but field 39-7 would mean either 33 bits (field 7 inclusive) or 32 bits (excluding field 7).
Since I have access to a L9963E evaluation board, which includes pre-loaded firmware, I have hooked up a logic analyser. I have found the following example frames to be sent to the L9963E from the eval-board, I am assuming that these are valid, error-free frames.
0xC21C3ECEFD
0xC270080001
0xE330081064
0xC0F08C1047
0x827880800E
0xC270BFFFF9
0xC2641954BE
Could someone clear up the datasheet for me, and maybe advise me on how to implement this CRC calculation?
|
Bit ranges in datasheets like this are always inclusive.
I suspect that this is just a typo, or the person who wrote it temporarily forgot that the bits are numbered from zero.
Looking at the other bit-field boundaries in Table 19 of the document you linked it wouldn't make sense to exclude the bottom bit of the data field from the CRC, so I suspect the datasheet should say bits 39 to 6 inclusive.
There is a tool called pycrc that can generate C code to calculate a CRC with an arbitrary polynomial.
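As a starting point, here is a generic bit-by-bit sketch with the stated polynomial and seed. It assumes a plain MSB-first CRC with no input/output reflection and no final XOR; whether that matches the L9963E exactly would need to be verified against the captured frames (or against pycrc's output).
#include <stdint.h>

/* CRC-6 sketch: polynomial x^6 + x^4 + x^3 + 1 (0x19 with the implicit x^6 term),
   seed 0b111000, processing `nbits` bits of `data` MSB first. */
uint8_t crc6(uint64_t data, int nbits)
{
    const uint8_t poly = 0x19;
    uint8_t crc = 0x38;                         /* seed 0b111000 */
    for (int i = nbits - 1; i >= 0; --i) {
        uint8_t in  = (uint8_t)((data >> i) & 1u);
        uint8_t msb = (uint8_t)((crc >> 5) & 1u);
        crc = (uint8_t)((crc << 1) & 0x3Fu);    /* keep 6 bits */
        if (in ^ msb)
            crc ^= poly;
    }
    return crc;
}

/* e.g. for the captured frame 0xC21C3ECEFD: crc6(0xC21C3ECEFDull >> 6, 34)
   should equal the low 6 bits of the frame (0x3D) if these parameters are right. */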
|
73,363,327
| 73,364,020
|
boost program-options uses deprecated feature
|
I have boost-program-options version 1.78 installed via vcpkg. When I compile with clang++ and -std=c++20 I get the following errors. This doesn't happen when I compile with g++. According to this, std::unary_function is deprecated as of C++11.
In file included from /home/david/C/vcpkg/installed/x64-linux/include/boost/program_options/variables_map.hpp:12:
In file included from /home/david/C/vcpkg/installed/x64-linux/include/boost/any.hpp:20:
In file included from /home/david/C/vcpkg/installed/x64-linux/include/boost/type_index.hpp:29:
In file included from /home/david/C/vcpkg/installed/x64-linux/include/boost/type_index/stl_type_index.hpp:47:
/home/david/C/vcpkg/installed/x64-linux/include/boost/container_hash/hash.hpp:132:33: warning: 'unary_function<const std::error_category *, unsigned long>' is deprecated [-Wdeprecated-declarations]
struct hash_base : std::unary_function<T, std::size_t> {};
^
/home/david/C/vcpkg/installed/x64-linux/include/boost/container_hash/hash.hpp:692:18: note: in instantiation of template class 'boost::hash_detail::hash_base<const std::error_category *>' requested here
: public boost::hash_detail::hash_base<T*>
^
/home/david/C/vcpkg/installed/x64-linux/include/boost/container_hash/hash.hpp:420:24: note: in instantiation of template class 'boost::hash<const std::error_category *>' requested here
boost::hash<T> hasher;
^
/home/david/C/vcpkg/installed/x64-linux/include/boost/container_hash/hash.hpp:551:9: note: in instantiation of function template specialization 'boost::hash_combine<const std::error_category *>' requested here
hash_combine(seed, &v.category());
^
/bin/../lib/gcc/x86_64-redhat-linux/12/../../../../include/c++/12/bits/stl_function.h:124:7: note: 'unary_function<const std::error_category *, unsigned long>' has been explicitly marked deprecated here
} _GLIBCXX11_DEPRECATED;
^
/bin/../lib/gcc/x86_64-redhat-linux/12/../../../../include/c++/12/x86_64-redhat-linux/bits/c++config.h:2340:32: note: expanded from macro '_GLIBCXX11_DEPRECATED'
# define _GLIBCXX11_DEPRECATED _GLIBCXX_DEPRECATED
^
/bin/../lib/gcc/x86_64-redhat-linux/12/../../../../include/c++/12/x86_64-redhat-linux/bits/c++config.h:2331:46: note: expanded from macro '_GLIBCXX_DEPRECATED'
# define _GLIBCXX_DEPRECATED __attribute__ ((__deprecated__))
Why is boost using deprecated parts of the standard? Is there anything wrong with ignoring these warnings and suppressing with -Wno-deprecated-declarations?
|
The use of std::unary_function has been replaced for compilers/standard libraries not supporting it anymore since Boost 1.64 for MSVC (commit) and since 1.73 for other compilers (commit).
But it continued using std::unary_function as default as long as it was not detected as removed.
Since Boost 1.80 the use of std::unary_function is disabled when using C++11 or later with libstdc++ (commit, issue) to get rid of deprecation warnings and a similar patch has been merged for libc++ (commit, issue). The latter seems to not be included in the latest release 1.80.
So either you wait/upgrade to a recent version of Boost, add the patch manually or configure the inclusion of the headers as system headers, so that warnings for them are generally suppressed. (I am not sure whether this is completely possible in your setup.) Alternatively disable the deprecation warning with the flag you are showing, but that seems heavy-handed.
Compiling in C++17 mode or later where the function has been completely removed might also help (haven't tested).
Simply defining the macro BOOST_NO_CXX98_FUNCTION_BASE before including any boost header might also help, but I can't tell whether it is intended for a user to do that or whether it might break other boost stuff. It may also silently break ABI for all I know.
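As a sketch, the macro route would look like this (whether this is an intended user-facing switch is uncertain, as said above; the define can equally go on the command line as -DBOOST_NO_CXX98_FUNCTION_BASE):
// must be defined before the first Boost header is included
#define BOOST_NO_CXX98_FUNCTION_BASE
#include <boost/program_options.hpp>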
|
73,364,145
| 73,364,218
|
Forbidden syntax for pointer/reference to bound member function
|
Suppose I have the following:
struct A {
int foo(int bar) const { return bar; }
};
and I want to specify a name that refers to a "bound" member function (i.e.):
A a;
auto opt1 = a.foo; // Forbidden, instead do something like...
auto opt2 = [&a] (int i) { return a.foo(i); }; // or ...
auto opt3 = std::bind(&A::foo, a, std::placeholders::_1);
it is then simple to invoke the bound member function:
assert(opt1(42) == 42); // If 'opt1' were allowed
assert(opt2(42) == 42);
assert(opt3(42) == 42);
In my view, opt1 would be the preferred solution to achieve the goal. However, specifying a bound function via opt1 is forbidden by the language.
My question is purely legal: What part of the C++(20) standard forbids a construct like opt1? My question is not why, but where.
|
[expr.ref]:
[for the expression E1.E2]....if E1.E2 refers to a non-static member function...The expression can be used only as the left-hand operand
of a member function call.
|