question_id,title_clean,body_clean,full_text,tags,score,view_count,answer_count,link 82256,How do I use sudo to redirect output to a location I don't have permission to write to?,"I've been given sudo access on one of our development RedHat linux boxes, and I seem to find myself quite often needing to redirect output to a location I don't normally have write access to. The trouble is, this contrived example doesn't work: sudo ls -hal /root/ > /root/test.out I just receive the response: -bash: /root/test.out: Permission denied How can I get this to work?","How do I use sudo to redirect output to a location I don't have permission to write to? I've been given sudo access on one of our development RedHat linux boxes, and I seem to find myself quite often needing to redirect output to a location I don't normally have write access to. The trouble is, this contrived example doesn't work: sudo ls -hal /root/ > /root/test.out I just receive the response: -bash: /root/test.out: Permission denied How can I get this to work?","linux, bash, permissions, sudo, io-redirection",1150,366401,15,https://stackoverflow.com/questions/82256/how-do-i-use-sudo-to-redirect-output-to-a-location-i-dont-have-permission-to-wr 257844,Quickly create a large file on a Linux system,"How can I quickly create a large file on a Linux ( Red Hat Linux ) system? dd will do the job, but reading from /dev/zero and writing to the drive can take a long time when you need a file several hundreds of GBs in size for testing... If you need to do that repeatedly, the time really adds up. I don't care about the contents of the file, I just want it to be created quickly. How can this be done? Using a sparse file won't work for this. I need the file to be allocated disk space.","Quickly create a large file on a Linux system How can I quickly create a large file on a Linux ( Red Hat Linux ) system? 
dd will do the job, but reading from /dev/zero and writing to the drive can take a long time when you need a file several hundreds of GBs in size for testing... If you need to do that repeatedly, the time really adds up. I don't care about the contents of the file, I just want it to be created quickly. How can this be done? Using a sparse file won't work for this. I need the file to be allocated disk space.","linux, file, filesystems",626,714203,17,https://stackoverflow.com/questions/257844/quickly-create-a-large-file-on-a-linux-system 104055,How can I list the contents of a package using YUM?,"I know how to use rpm to list the contents of a package ( rpm -qpil package.rpm ). However, this requires knowing the location of the .rpm file on the filesystem. A more elegant solution would be to use the package manager, which in my case is YUM . How can YUM be used to achieve this?","How can I list the contents of a package using YUM? I know how to use rpm to list the contents of a package ( rpm -qpil package.rpm ). However, this requires knowing the location of the .rpm file on the filesystem. A more elegant solution would be to use the package manager, which in my case is YUM . How can YUM be used to achieve this?","linux, fedora, rpm, yum, package-managers",363,432616,7,https://stackoverflow.com/questions/104055/how-can-i-list-the-contents-of-a-package-using-yum 19943766,Hadoop "Unable to load native-hadoop library for your platform" warning,"I'm currently configuring hadoop on a server running CentOs . When I run start-dfs.sh or stop-dfs.sh , I get the following error: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable I'm running Hadoop 2.2.0. Doing a search online brought up this link: [URL] However, the contents of /native/ directory on hadoop 2.x appear to be different so I am not sure what to do. 
I've also added these two environment variables in hadoop-env.sh : export HADOOP_OPTS=""$HADOOP_OPTS -Djava.library.path=/usr/local/hadoop/lib/"" export HADOOP_COMMON_LIB_NATIVE_DIR=""/usr/local/hadoop/lib/native/"" Any ideas?","Hadoop "Unable to load native-hadoop library for your platform" warning I'm currently configuring hadoop on a server running CentOs . When I run start-dfs.sh or stop-dfs.sh , I get the following error: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable I'm running Hadoop 2.2.0. Doing a search online brought up this link: [URL] However, the contents of /native/ directory on hadoop 2.x appear to be different so I am not sure what to do. I've also added these two environment variables in hadoop-env.sh : export HADOOP_OPTS=""$HADOOP_OPTS -Djava.library.path=/usr/local/hadoop/lib/"" export HADOOP_COMMON_LIB_NATIVE_DIR=""/usr/local/hadoop/lib/native/"" Any ideas?","java, linux, hadoop, hadoop2, java.library.path",323,652094,24,https://stackoverflow.com/questions/19943766/hadoop-unable-to-load-native-hadoop-library-for-your-platform-warning 37585758,How to redirect output of systemd service to a file,I am trying to redirect output of a systemd service to a file but it doesn't seem to work: [Unit] Description=customprocess After=network.target [Service] Type=forking ExecStart=/usr/local/bin/binary1 agent -config-dir /etc/sample.d/server StandardOutput=/var/log1.log StandardError=/var/log2.log Restart=always [Install] WantedBy=multi-user.target Please correct my approach.,How to redirect output of systemd service to a file I am trying to redirect output of a systemd service to a file but it doesn't seem to work: [Unit] Description=customprocess After=network.target [Service] Type=forking ExecStart=/usr/local/bin/binary1 agent -config-dir /etc/sample.d/server StandardOutput=/var/log1.log StandardError=/var/log2.log Restart=always [Install] WantedBy=multi-user.target Please 
correct my approach.,"linux, centos7, systemd, rhel, rhel7",312,425010,10,https://stackoverflow.com/questions/37585758/how-to-redirect-output-of-systemd-service-to-a-file 24641536,How to set JAVA_HOME in Linux for all users,"I am new to Linux system and there seem to be too many Java folders. java -version gives me: java version ""1.7.0_55"" OpenJDK Runtime Environment (rhel-2.4.7.1.el6_5-x86_64 u55-b13) OpenJDK 64-Bit Server VM (build 24.51-b03, mixed mode) When I am trying to build a Maven project , I am getting error: Error: JAVA_HOME is not defined correctly. We cannot execute /usr/java/jdk1.7.0_05/bin/java Could you please tell me which files I need to modify for root as well as not-root user and where exactly is java located?","How to set JAVA_HOME in Linux for all users I am new to Linux system and there seem to be too many Java folders. java -version gives me: java version ""1.7.0_55"" OpenJDK Runtime Environment (rhel-2.4.7.1.el6_5-x86_64 u55-b13) OpenJDK 64-Bit Server VM (build 24.51-b03, mixed mode) When I am trying to build a Maven project , I am getting error: Error: JAVA_HOME is not defined correctly. We cannot execute /usr/java/jdk1.7.0_05/bin/java Could you please tell me which files I need to modify for root as well as not-root user and where exactly is java located?","java, linux, java-home, path-variables",311,1623560,25,https://stackoverflow.com/questions/24641536/how-to-set-java-home-in-linux-for-all-users 22101778,How to preserve line breaks when storing command output to a variable?,"I’m using bash shell on Linux. I have this simple script … #!/bin/bash TEMP=sed -n '/'""Starting deployment of""'/,/'""Failed to start context""'/p' ""/usr/java/jboss/standalone/log/server.log"" | tac | awk '/'""Starting deployment of""'/ {print;exit} 1' | tac echo $TEMP However, when I run this script ./temp.sh all the output is printed without the carriage returns/new lines. Not sure if its the way I’m storing the output to $TEMP, or the echo command itself. 
How do I store the output of the command to a variable and preserve the line breaks/carriage returns?","How to preserve line breaks when storing command output to a variable? I’m using bash shell on Linux. I have this simple script … #!/bin/bash TEMP=sed -n '/'""Starting deployment of""'/,/'""Failed to start context""'/p' ""/usr/java/jboss/standalone/log/server.log"" | tac | awk '/'""Starting deployment of""'/ {print;exit} 1' | tac echo $TEMP However, when I run this script ./temp.sh all the output is printed without the carriage returns/new lines. Not sure if its the way I’m storing the output to $TEMP, or the echo command itself. How do I store the output of the command to a variable and preserve the line breaks/carriage returns?","linux, bash, shell, line-breaks",306,162004,2,https://stackoverflow.com/questions/22101778/how-to-preserve-line-breaks-when-storing-command-output-to-a-variable 16200501,How can I automatically redirect HTTP to HTTPS on Apache servers?,"I am trying to set up automatic redirection from HTTP to HTTPS: From manage.mydomain.com --- To ---> [URL] I have tried adding the following to my httpd.conf file, but it didn't work: RewriteEngine on ReWriteCond %{SERVER_PORT} !^443$ RewriteRule ^/(.*) [URL] [NC,R,L] How can I fix it? Environment: CentOS with Apache","How can I automatically redirect HTTP to HTTPS on Apache servers? I am trying to set up automatic redirection from HTTP to HTTPS: From manage.mydomain.com --- To ---> [URL] I have tried adding the following to my httpd.conf file, but it didn't work: RewriteEngine on ReWriteCond %{SERVER_PORT} !^443$ RewriteRule ^/(.*) [URL] [NC,R,L] How can I fix it? 
Environment: CentOS with Apache","linux, apache, .htaccess, webserver, httpd.conf",281,593657,13,https://stackoverflow.com/questions/16200501/how-can-i-automatically-redirect-http-to-https-on-apache-servers 15622328,How to grep a string in a directory and all its subdirectories?,How to grep a string or a text in a directory and all its subdirectories'files in LINUX ??,How to grep a string in a directory and all its subdirectories? How to grep a string or a text in a directory and all its subdirectories'files in LINUX ??,"linux, unix, grep, centos",273,696165,2,https://stackoverflow.com/questions/15622328/how-to-grep-a-string-in-a-directory-and-all-its-subdirectories 21820715,How to install latest version of git on CentOS 8.x/7.x/6.x,I used the usual: yum install git It did not install the latest version of git on my CentOS 6. How can I update to the latest version of git for CentOS 6? The solution can be applicable to newer versions of CentOS such as CentOS 7.,How to install latest version of git on CentOS 8.x/7.x/6.x I used the usual: yum install git It did not install the latest version of git on my CentOS 6. How can I update to the latest version of git for CentOS 6? The solution can be applicable to newer versions of CentOS such as CentOS 7.,"linux, git, installation, centos, yum",269,279683,15,https://stackoverflow.com/questions/21820715/how-to-install-latest-version-of-git-on-centos-8-x-7-x-6-x 2150882,How to automatically add user account AND password with a Bash script?,"I need to have the ability to create user accounts on my Linux (Fedora 10) and automatically assign a password via a bash script(or otherwise, if need be). It's easy to create the user via Bash e.g.: [whoever@server ]# /usr/sbin/useradd newuser Is it possible to assign a password in Bash, something functionally similar to this, but automatically: [whoever@server ]# passwd newuser Changing password for user testpass. 
New UNIX password: Retype new UNIX password: passwd: all authentication tokens updated successfully. [whoever@server ]#","How to automatically add user account AND password with a Bash script? I need to have the ability to create user accounts on my Linux (Fedora 10) and automatically assign a password via a bash script(or otherwise, if need be). It's easy to create the user via Bash e.g.: [whoever@server ]# /usr/sbin/useradd newuser Is it possible to assign a password in Bash, something functionally similar to this, but automatically: [whoever@server ]# passwd newuser Changing password for user testpass. New UNIX password: Retype new UNIX password: passwd: all authentication tokens updated successfully. [whoever@server ]#","linux, bash, passwd",247,505730,20,https://stackoverflow.com/questions/2150882/how-to-automatically-add-user-account-and-password-with-a-bash-script 15255070,How do you scroll up/down on the console of a Linux VM,"I recognize that Up / Down will give you the command history. But, how do you look at past output by scrolling up and down? I have used Shift + Page Up / Page Down , Alt + Shift + Up / Down and Page Up / Page Down but none of these seem to work. It is a Redhat Linux box.","How do you scroll up/down on the console of a Linux VM I recognize that Up / Down will give you the command history. But, how do you look at past output by scrolling up and down? I have used Shift + Page Up / Page Down , Alt + Shift + Up / Down and Page Up / Page Down but none of these seem to work. It is a Redhat Linux box.","linux, terminal, rhel",241,562880,15,https://stackoverflow.com/questions/15255070/how-do-you-scroll-up-down-on-the-console-of-a-linux-vm 7142735,How to specify more spaces for the delimiter using cut?,"Is there a way to specify multiple spaces as a field delimiter with the cut command (something like a "" ""+ regex)? For example, what field delimiter I should specify for the following string to reach value 3744 ? 
$ps axu | grep jboss jboss 2574 0.0 0.0 3744 1092 ? S Aug17 0:00 /bin/sh /usr/java/jboss/bin/run.sh -c example.com -b 0.0.0.0 cut -d' ' is not what I want, because it's only for a single space. awk is not what I am looking for either, so how to do this with cut ?","How to specify more spaces for the delimiter using cut? Is there a way to specify multiple spaces as a field delimiter with the cut command (something like a "" ""+ regex)? For example, what field delimiter I should specify for the following string to reach value 3744 ? $ps axu | grep jboss jboss 2574 0.0 0.0 3744 1092 ? S Aug17 0:00 /bin/sh /usr/java/jboss/bin/run.sh -c example.com -b 0.0.0.0 cut -d' ' is not what I want, because it's only for a single space. awk is not what I am looking for either, so how to do this with cut ?","linux, delimiter, cut",233,172685,13,https://stackoverflow.com/questions/7142735/how-to-specify-more-spaces-for-the-delimiter-using-cut 13046624,How can I permanently export a variable in Linux?,"I am running RHEL 6, and I have exported an environment variable like this: export DISPLAY=:0 That variable is lost when the terminal is closed. How do I permanently add this so that this variable value always exists with a particular user?","How can I permanently export a variable in Linux? I am running RHEL 6, and I have exported an environment variable like this: export DISPLAY=:0 That variable is lost when the terminal is closed. How do I permanently add this so that this variable value always exists with a particular user?","linux, environment-variables, redhat",232,420201,6,https://stackoverflow.com/questions/13046624/how-can-i-permanently-export-a-variable-in-linux 12952913,How do I install g++ for Fedora?,How do I install g++ for Fedora Linux? I have been searching the dnf command to install g++ but didn't find anything. How do I install it? I have already installed gcc,How do I install g++ for Fedora? How do I install g++ for Fedora Linux? 
I have been searching the dnf command to install g++ but didn't find anything. How do I install it? I have already installed gcc,"c++, linux, g++, fedora, dnf",218,227423,11,https://stackoverflow.com/questions/12952913/how-do-i-install-g-for-fedora 1157209,Is there an alternative sleep function in C to milliseconds?,"I have some source code that was compiled on Windows. I am converting it to run on Red Hat Linux. The source code has included the header file and the programmer has used the Sleep() function to wait for a period of milliseconds. This won't work on the Linux. However, I can use the sleep(seconds) function, but that uses integer in seconds. I don't want to convert milliseconds to seconds. Is there a alternative sleep function that I can use with gcc compiling on Linux?","Is there an alternative sleep function in C to milliseconds? I have some source code that was compiled on Windows. I am converting it to run on Red Hat Linux. The source code has included the header file and the programmer has used the Sleep() function to wait for a period of milliseconds. This won't work on the Linux. However, I can use the sleep(seconds) function, but that uses integer in seconds. I don't want to convert milliseconds to seconds. Is there a alternative sleep function that I can use with gcc compiling on Linux?","c, linux, sleep",217,504403,6,https://stackoverflow.com/questions/1157209/is-there-an-alternative-sleep-function-in-c-to-milliseconds 14460656,Android Debug Bridge (adb) device - no permissions,"I have a problem connecting HTC Wildfire A3333 in debugging mode with my Fedora Linux 17. Adb says: ./adb devices List of devices attached ???????????? 
no permissions my udev rules (first rule for Samsung which works just fine and second for HTC which is not): SUBSYSTEM==""usb"",SYSFS{idVendor}==""04e8"",SYMLINK+=""android_adb"",MODE=""0666"",GROUP=""plugdev"" SUBSYSTEM==""usb"",SYSFS{idVendor}==""0bb4"",SYMLINK+=""android_adb"",MODE=""0666"",GROUP=""plugdev"" For Samsung devices everything's okay: ./adb devices List of devices attached 00198a9422618e device I have been trying all of the answers given in a simmilar thread wthout any luck: Using HTC wildfire for android development","Android Debug Bridge (adb) device - no permissions I have a problem connecting HTC Wildfire A3333 in debugging mode with my Fedora Linux 17. Adb says: ./adb devices List of devices attached ???????????? no permissions my udev rules (first rule for Samsung which works just fine and second for HTC which is not): SUBSYSTEM==""usb"",SYSFS{idVendor}==""04e8"",SYMLINK+=""android_adb"",MODE=""0666"",GROUP=""plugdev"" SUBSYSTEM==""usb"",SYSFS{idVendor}==""0bb4"",SYMLINK+=""android_adb"",MODE=""0666"",GROUP=""plugdev"" For Samsung devices everything's okay: ./adb devices List of devices attached 00198a9422618e device I have been trying all of the answers given in a simmilar thread wthout any luck: Using HTC wildfire for android development","android, linux, debugging, adb",196,199480,20,https://stackoverflow.com/questions/14460656/android-debug-bridge-adb-device-no-permissions 9541460,"httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName","I tried to restart my Apache server on CentOS 5.0 and got this message: httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName Here is the /etc/hosts file: 127.0.0.1 server4-245 server4-245.com localhost.localdomain localhost ::1 localhost6.localdomain6 localhost6 Here is the /etc/sysconfig/network file: NETWORKING=yes NETWORKING_IPV6=no HOSTNAME=server4-245 I also have this in the Apache httpd.conf 
file: ServerName localhost However, I still get the first error message when I restart Apache.","httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName I tried to restart my Apache server on CentOS 5.0 and got this message: httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName Here is the /etc/hosts file: 127.0.0.1 server4-245 server4-245.com localhost.localdomain localhost ::1 localhost6.localdomain6 localhost6 Here is the /etc/sysconfig/network file: NETWORKING=yes NETWORKING_IPV6=no HOSTNAME=server4-245 I also have this in the Apache httpd.conf file: ServerName localhost However, I still get the first error message when I restart Apache.","linux, apache, centos",189,540858,12,https://stackoverflow.com/questions/9541460/httpd-could-not-reliably-determine-the-servers-fully-qualified-domain-name-us 1766380,Determining the path that a yum package installed to,"I've installed ffmpeg using yum under Redhat, and I'm having difficulty figuring out where (what path) it installed the package to. Is there an easy way of determining this without resorting to finding it myself manually?","Determining the path that a yum package installed to I've installed ffmpeg using yum under Redhat, and I'm having difficulty figuring out where (what path) it installed the package to. 
Is there an easy way of determining this without resorting to finding it myself manually?","linux, redhat, rpm, yum",189,164991,3,https://stackoverflow.com/questions/1766380/determining-the-path-that-a-yum-package-installed-to 8328250,CentOS 64 bit bad ELF interpreter,"I have just installed CentOS 6 64bit version, I'm trying to install a 32-bit application on a 64-bit machine and got this error: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory How do I resolve this?","CentOS 64 bit bad ELF interpreter I have just installed CentOS 6 64bit version, I'm trying to install a 32-bit application on a 64-bit machine and got this error: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory How do I resolve this?","linux, centos, 32bit-64bit, elf, centos6",186,389863,9,https://stackoverflow.com/questions/8328250/centos-64-bit-bad-elf-interpreter 13876875,How to make rpm auto install dependencies,I have built two RPM packages proj1-1.0-1.x86_64.rpm libtest1-1.0-1.x86_64.rpm proj1 depends on the file libtest1.so being present and it is reflected correctly in the RPM packages as seen here: user@my-pc:~$ rpm -qp --requires proj1-1.0-1.x86_64.rpm libtest1.so()(64bit) user@my-pc:~$ rpm -qp --provides libtest1-1.0-1.x86_64.rpm libtest1.so()(64bit) The installation of proj1 fails due to a missing dependency. user@my-pc:~$ rpm -ivh proj1-1.0-1.x86_64.rpm error: Failed dependencies: libtest1.so()(64bit) is needed by proj1-1.0-1.x86_64.rpm How do I ensure that libtest1-1.0-1.x86_64.rpm is installed automatically during the installation of proj1-1.0-1.x86_64.rpm ? I did try the --aid option with rpm -i as described here but it didn't work for me. Is there any other way? 
Thanks for any help.,How to make rpm auto install dependencies I have built two RPM packages proj1-1.0-1.x86_64.rpm libtest1-1.0-1.x86_64.rpm proj1 depends on the file libtest1.so being present and it is reflected correctly in the RPM packages as seen here: user@my-pc:~$ rpm -qp --requires proj1-1.0-1.x86_64.rpm libtest1.so()(64bit) user@my-pc:~$ rpm -qp --provides libtest1-1.0-1.x86_64.rpm libtest1.so()(64bit) The installation of proj1 fails due to a missing dependency. user@my-pc:~$ rpm -ivh proj1-1.0-1.x86_64.rpm error: Failed dependencies: libtest1.so()(64bit) is needed by proj1-1.0-1.x86_64.rpm How do I ensure that libtest1-1.0-1.x86_64.rpm is installed automatically during the installation of proj1-1.0-1.x86_64.rpm ? I did try the --aid option with rpm -i as described here but it didn't work for me. Is there any other way? Thanks for any help.,"linux, installation, package, rpm, yum",174,656457,12,https://stackoverflow.com/questions/13876875/how-to-make-rpm-auto-install-dependencies 17850787,Where is the php.ini file on a Linux/CentOS PC?,"I can't find PHP.ini location on my server. I've checked all Stack Overflow answers but I can't find my php.ini location. I have Linux, Cent OS, zPanel. Last version of PHP. My computer: Linux Mint 15 KDE.","Where is the php.ini file on a Linux/CentOS PC? I can't find PHP.ini location on my server. I've checked all Stack Overflow answers but I can't find my php.ini location. I have Linux, Cent OS, zPanel. Last version of PHP. My computer: Linux Mint 15 KDE.","php, linux, centos",164,547728,5,https://stackoverflow.com/questions/17850787/where-is-the-php-ini-file-on-a-linux-centos-pc 27733511,How to set Linux environment variables with Ansible,Hi I am trying to find out how to set environment variable with Ansible. something that a simple shell command like this: EXPORT LC_ALL=C tried as shell command and got an error tried using the environment module and nothing happend. 
what am I missing,How to set Linux environment variables with Ansible Hi I am trying to find out how to set environment variable with Ansible. something that a simple shell command like this: EXPORT LC_ALL=C tried as shell command and got an error tried using the environment module and nothing happend. what am I missing,"linux, ansible",164,338170,7,https://stackoverflow.com/questions/27733511/how-to-set-linux-environment-variables-with-ansible 540603,How can I find the version of the Fedora I use?,sudo find /etc | xargs grep -i fedora > searchFedora gives: /etc/netplug.d/netplug: # At least on Fedora Core 1 ... But see the Fedora version in the /etc/netplug.d/netplug file. Is it serious?,How can I find the version of the Fedora I use? sudo find /etc | xargs grep -i fedora > searchFedora gives: /etc/netplug.d/netplug: # At least on Fedora Core 1 ... But see the Fedora version in the /etc/netplug.d/netplug file. Is it serious?,"linux, fedora",161,349808,14,https://stackoverflow.com/questions/540603/how-can-i-find-the-version-of-the-fedora-i-use 20348007,How can I find out the total physical memory (RAM) of my Linux box suitable to be parsed by a shell script?,"I'm creating a shell script to find out the total physical memory in some RHEL Linux boxes. First of all I want to stress that I'm interested in the total physical memory recognized by the kernel, not just the available memory . Therefore, please, avoid answers suggesting to read /proc/meminfo or to use the free , top or sar commands -- In all these cases, their "" total memory "" values mean "" available memory "" ones. The first thought was to read the boot kernel messages: Memory: 61861540k/63438844k available (2577k kernel code, 1042516k reserved, 1305k data, 212k init) But in some Linux boxes, due to the use of EMC2's PowerPath software and its flooding boot messages in the kernel startup, that useful boot kernel message is not available, not even in the /var/log/dmesg file. 
The second option was the dmidecode command (I'm warned against the possible mismatch of kernel recognized RAM and real RAM due to the limitations of some older kernels and architectures). The option --memory simplifies the script but I realized that older releases of that command have no --memory option. My last chance was the getconf command. It reports the memory page size, but not the total number of physical pages -- the _PHYS_PAGES system variable seems to be the available physical pages, not the total physical pages. # getconf -a | grep PAGES PAGESIZE 4096 _AVPHYS_PAGES 1049978 _PHYS_PAGES 15466409 My question: Is there another way to get the total amount of physical memory, suitable to be parsed by a shell script?","How can I find out the total physical memory (RAM) of my Linux box suitable to be parsed by a shell script? I'm creating a shell script to find out the total physical memory in some RHEL Linux boxes. First of all I want to stress that I'm interested in the total physical memory recognized by the kernel, not just the available memory . Therefore, please, avoid answers suggesting to read /proc/meminfo or to use the free , top or sar commands -- In all these cases, their "" total memory "" values mean "" available memory "" ones. The first thought was to read the boot kernel messages: Memory: 61861540k/63438844k available (2577k kernel code, 1042516k reserved, 1305k data, 212k init) But in some Linux boxes, due to the use of EMC2's PowerPath software and its flooding boot messages in the kernel startup, that useful boot kernel message is not available, not even in the /var/log/dmesg file. The second option was the dmidecode command (I'm warned against the possible mismatch of kernel recognized RAM and real RAM due to the limitations of some older kernels and architectures). The option --memory simplifies the script but I realized that older releases of that command have no --memory option. My last chance was the getconf command. 
It reports the memory page size, but not the total number of physical pages -- the _PHYS_PAGES system variable seems to be the available physical pages, not the total physical pages. # getconf -a | grep PAGES PAGESIZE 4096 _AVPHYS_PAGES 1049978 _PHYS_PAGES 15466409 My question: Is there another way to get the total amount of physical memory, suitable to be parsed by a shell script?","linux, ram, memory-size",147,349220,15,https://stackoverflow.com/questions/20348007/how-can-i-find-out-the-total-physical-memory-ram-of-my-linux-box-suitable-to-b 394984,Best practice to run Linux service as a different user,"Services default to starting as root at boot time on my RHEL box. If I recall correctly, the same is true for other Linux distros which use the init scripts in /etc/init.d . What do you think is the best way to instead have the processes run as a (static) user of my choosing? The only method I'd arrived at was to use something like: su my_user -c 'daemon my_cmd &>/dev/null &' But this seems a bit untidy... Is there some bit of magic tucked away that provides an easy mechanism to automatically start services as other, non-root users? EDIT: I should have said that the processes I'm starting in this instance are either Python scripts or Java programs. I'd rather not write a native wrapper around them, so unfortunately I'm unable to call setuid() as Black suggests.","Best practice to run Linux service as a different user Services default to starting as root at boot time on my RHEL box. If I recall correctly, the same is true for other Linux distros which use the init scripts in /etc/init.d . What do you think is the best way to instead have the processes run as a (static) user of my choosing? The only method I'd arrived at was to use something like: su my_user -c 'daemon my_cmd &>/dev/null &' But this seems a bit untidy... Is there some bit of magic tucked away that provides an easy mechanism to automatically start services as other, non-root users? 
EDIT: I should have said that the processes I'm starting in this instance are either Python scripts or Java programs. I'd rather not write a native wrapper around them, so unfortunately I'm unable to call setuid() as Black suggests.","linux, system-administration, rhel, init.d",142,266305,8,https://stackoverflow.com/questions/394984/best-practice-to-run-linux-service-as-a-different-user 19256127,Two versions of python on linux. how to make 2.7 the default,"I've got two versions of python on my linuxbox: $python Python 2.6.6 (r266:84292, Jul 10 2013, 22:48:45) [GCC 4.4.7 20120313 (Red Hat 4.4.7-3)] on linux2 Type ""help"", ""copyright"", ""credits"" or ""license"" for more information. >>> $ /usr/local/bin/python2.7 Python 2.7.3 (default, Oct 8 2013, 15:53:09) [GCC 4.4.7 20120313 (Red Hat 4.4.7-3)] on linux2 Type ""help"", ""copyright"", ""credits"" or ""license"" for more information. >>> $ which python /usr/bin/python $ ls -al /usr/bin/python -rwxr-xr-x. 2 root root 4864 Jul 10 22:49 /usr/bin/python How can I make 2.7 be the default version so when I type python it puts me in 2.7?","Two versions of python on linux. how to make 2.7 the default I've got two versions of python on my linuxbox: $python Python 2.6.6 (r266:84292, Jul 10 2013, 22:48:45) [GCC 4.4.7 20120313 (Red Hat 4.4.7-3)] on linux2 Type ""help"", ""copyright"", ""credits"" or ""license"" for more information. >>> $ /usr/local/bin/python2.7 Python 2.7.3 (default, Oct 8 2013, 15:53:09) [GCC 4.4.7 20120313 (Red Hat 4.4.7-3)] on linux2 Type ""help"", ""copyright"", ""credits"" or ""license"" for more information. >>> $ which python /usr/bin/python $ ls -al /usr/bin/python -rwxr-xr-x. 
2 root root 4864 Jul 10 22:49 /usr/bin/python How can I make 2.7 be the default version so when I type python it puts me in 2.7?","python, linux, centos",136,478643,8,https://stackoverflow.com/questions/19256127/two-versions-of-python-on-linux-how-to-make-2-7-the-default 43235179,How to execute ssh-keygen without prompt,"I want to automate generate a pair of ssh key using shell script on Centos7, and I have tried yes ""y"" | ssh-keygen -t rsa echo ""\n\n\n"" | ssh-keygen... echo | ssh-keygen.. all of these command doesn't work, just input one 'enter' and the shell script stopped on ""Enter passphrase (empty for no passphrase)"", I just want to know how to simulate mutiple 'enter' in shell continuously. Many thanks if anyone can help !","How to execute ssh-keygen without prompt I want to automate generate a pair of ssh key using shell script on Centos7, and I have tried yes ""y"" | ssh-keygen -t rsa echo ""\n\n\n"" | ssh-keygen... echo | ssh-keygen.. all of these command doesn't work, just input one 'enter' and the shell script stopped on ""Enter passphrase (empty for no passphrase)"", I just want to know how to simulate mutiple 'enter' in shell continuously. Many thanks if anyone can help !","linux, bash, shell, ssh",135,125582,9,https://stackoverflow.com/questions/43235179/how-to-execute-ssh-keygen-without-prompt 1367373,Python subprocess.Popen "OSError: [Errno 12] Cannot allocate memory","Note: This question was originally asked here but the bounty time expired even though an acceptable answer was not actually found. I am re-asking this question including all details provided in the original question. A python script is running a set of class functions every 60 seconds using the sched module: # sc is a sched.scheduler instance sc.enter(60, 1, self.doChecks, (sc, False)) The script is running as a daemonised process using the code here . 
A number of class methods that are called as part of doChecks use the subprocess module to call system functions in order to get system statistics: ps = subprocess.Popen(['ps', 'aux'], stdout=subprocess.PIPE).communicate()[0] This runs fine for a period of time before the entire script crashes with the following error: File ""/home/admin/sd-agent/checks.py"", line 436, in getProcesses File ""/usr/lib/python2.4/subprocess.py"", line 533, in __init__ File ""/usr/lib/python2.4/subprocess.py"", line 835, in _get_handles OSError: [Errno 12] Cannot allocate memory The output of free -m on the server once the script has crashed is: $ free -m total used free shared buffers cached Mem: 894 345 549 0 0 0 -/+ buffers/cache: 345 549 Swap: 0 0 0 The server is running CentOS 5.3. I am unable to reproduce on my own CentOS boxes nor with any other user reporting the same problem. I have tried a number of things to debug this as suggested in the original question: Logging the output of free -m before and after the Popen call. There is no significant change in memory usage i.e. memory is not gradually being used up as the script runs. I added close_fds=True to the Popen call but this made no difference - the script still crashed with the same error. Suggested here and here . I checked the rlimits which showed (-1, -1) on both RLIMIT_DATA and RLIMIT_AS as suggested here . An article suggested that having no swap space might be the cause but swap is actually available on demand (according to the web host) and this was also suggested as a bogus cause here . The processes are being closed because that is the behaviour of using .communicate() as backed up by the Python source code and comments here . The entire checks can be found on GitHub here with the getProcesses function defined from line 442. This is called by doChecks() starting at line 520. 
The script was run with strace with the following output before the crash: recv(4, ""Total Accesses: 516662\nTotal kBy""..., 234, 0) = 234 gettimeofday({1250893252, 887805}, NULL) = 0 write(3, ""2009-08-21 17:20:52,887 - checks""..., 91) = 91 gettimeofday({1250893252, 888362}, NULL) = 0 write(3, ""2009-08-21 17:20:52,888 - checks""..., 74) = 74 gettimeofday({1250893252, 888897}, NULL) = 0 write(3, ""2009-08-21 17:20:52,888 - checks""..., 67) = 67 gettimeofday({1250893252, 889184}, NULL) = 0 write(3, ""2009-08-21 17:20:52,889 - checks""..., 81) = 81 close(4) = 0 gettimeofday({1250893252, 889591}, NULL) = 0 write(3, ""2009-08-21 17:20:52,889 - checks""..., 63) = 63 pipe([4, 5]) = 0 pipe([6, 7]) = 0 fcntl64(7, F_GETFD) = 0 fcntl64(7, F_SETFD, FD_CLOEXEC) = 0 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xb7f12708) = -1 ENOMEM (Cannot allocate memory) write(2, ""Traceback (most recent call last""..., 35) = 35 open(""/usr/bin/sd-agent/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory) open(""/usr/bin/sd-agent/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory) open(""/usr/lib/python24.zip/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/plat-linux2/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory) open(""/usr/lib/python2.4/lib-tk/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/lib-dynload/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/site-packages/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) write(2, "" File \""/usr/bin/sd-agent/agent.""..., 52) = 52 open(""/home/admin/sd-agent/daemon.py"", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory) open(""/usr/bin/sd-agent/daemon.py"", 
O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory) open(""/usr/lib/python24.zip/daemon.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/daemon.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/plat-linux2/daemon.py"", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory) open(""/usr/lib/python2.4/lib-tk/daemon.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/lib-dynload/daemon.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/site-packages/daemon.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) write(2, "" File \""/home/admin/sd-agent/dae""..., 60) = 60 open(""/usr/bin/sd-agent/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory) open(""/usr/bin/sd-agent/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory) open(""/usr/lib/python24.zip/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/plat-linux2/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory) open(""/usr/lib/python2.4/lib-tk/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/lib-dynload/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/site-packages/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) write(2, "" File \""/usr/bin/sd-agent/agent.""..., 54) = 54 open(""/usr/lib/python2.4/sched.py"", O_RDONLY|O_LARGEFILE) = 8 write(2, "" File \""/usr/lib/python2.4/sched""..., 55) = 55 fstat64(8, {st_mode=S_IFREG|0644, st_size=4054, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7d28000 read(8, ""\""\""\""A generally useful event sche""..., 4096) = 
4054 write(2, "" "", 4) = 4 write(2, ""void = action(*argument)\n"", 25) = 25 close(8) = 0 munmap(0xb7d28000, 4096) = 0 open(""/usr/bin/sd-agent/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory) open(""/usr/bin/sd-agent/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory) open(""/usr/lib/python24.zip/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/plat-linux2/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory) open(""/usr/lib/python2.4/lib-tk/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/lib-dynload/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/site-packages/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) write(2, "" File \""/usr/bin/sd-agent/checks""..., 60) = 60 open(""/usr/bin/sd-agent/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory) open(""/usr/bin/sd-agent/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory) open(""/usr/lib/python24.zip/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/plat-linux2/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory) open(""/usr/lib/python2.4/lib-tk/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/lib-dynload/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/site-packages/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) write(2, "" File \""/usr/bin/sd-agent/checks""..., 64) = 64 open(""/usr/lib/python2.4/subprocess.py"", O_RDONLY|O_LARGEFILE) = 8 
write(2, "" File \""/usr/lib/python2.4/subpr""..., 65) = 65 fstat64(8, {st_mode=S_IFREG|0644, st_size=39931, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7d28000 read(8, ""# subprocess - Subprocesses with""..., 4096) = 4096 read(8, ""lso, the newlines attribute of t""..., 4096) = 4096 read(8, ""code < 0:\n print >>sys.st""..., 4096) = 4096 read(8, ""alse does not exist on 2.2.0\ntry""..., 4096) = 4096 read(8, "" p2cread\n # c2pread <-""..., 4096) = 4096 write(2, "" "", 4) = 4 write(2, ""errread, errwrite)\n"", 19) = 19 close(8) = 0 munmap(0xb7d28000, 4096) = 0 open(""/usr/lib/python2.4/subprocess.py"", O_RDONLY|O_LARGEFILE) = 8 write(2, "" File \""/usr/lib/python2.4/subpr""..., 71) = 71 fstat64(8, {st_mode=S_IFREG|0644, st_size=39931, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7d28000 read(8, ""# subprocess - Subprocesses with""..., 4096) = 4096 read(8, ""lso, the newlines attribute of t""..., 4096) = 4096 read(8, ""code < 0:\n print >>sys.st""..., 4096) = 4096 read(8, ""alse does not exist on 2.2.0\ntry""..., 4096) = 4096 read(8, "" p2cread\n # c2pread <-""..., 4096) = 4096 read(8, ""table(self, handle):\n ""..., 4096) = 4096 read(8, ""rrno using _sys_errlist (or siml""..., 4096) = 4096 read(8, "" p2cwrite = None, None\n ""..., 4096) = 4096 write(2, "" "", 4) = 4 write(2, ""self.pid = os.fork()\n"", 21) = 21 close(8) = 0 munmap(0xb7d28000, 4096) = 0 write(2, ""OSError"", 7) = 7 write(2, "": "", 2) = 2 write(2, ""[Errno 12] Cannot allocate memor""..., 33) = 33 write(2, ""\n"", 1) = 1 unlink(""/var/run/sd-agent.pid"") = 0 close(3) = 0 munmap(0xb7e0d000, 4096) = 0 rt_sigaction(SIGINT, {SIG_DFL, [], SA_RESTORER, 0x589978}, {0xb89a60, [], SA_RESTORER, 0x589978}, 8) = 0 brk(0xa022000) = 0xa022000 exit_group(1) = ?","Python subprocess.Popen "OSError: [Errno 12] Cannot allocate memory" Note: This question was originally asked here but the bounty time expired even though an acceptable 
answer was not actually found. I am re-asking this question including all details provided in the original question. A python script is running a set of class functions every 60 seconds using the sched module: # sc is a sched.scheduler instance sc.enter(60, 1, self.doChecks, (sc, False)) The script is running as a daemonised process using the code here . A number of class methods that are called as part of doChecks use the subprocess module to call system functions in order to get system statistics: ps = subprocess.Popen(['ps', 'aux'], stdout=subprocess.PIPE).communicate()[0] This runs fine for a period of time before the entire script crashes with the following error: File ""/home/admin/sd-agent/checks.py"", line 436, in getProcesses File ""/usr/lib/python2.4/subprocess.py"", line 533, in __init__ File ""/usr/lib/python2.4/subprocess.py"", line 835, in _get_handles OSError: [Errno 12] Cannot allocate memory The output of free -m on the server once the script has crashed is: $ free -m total used free shared buffers cached Mem: 894 345 549 0 0 0 -/+ buffers/cache: 345 549 Swap: 0 0 0 The server is running CentOS 5.3. I am unable to reproduce on my own CentOS boxes nor with any other user reporting the same problem. I have tried a number of things to debug this as suggested in the original question: Logging the output of free -m before and after the Popen call. There is no significant change in memory usage i.e. memory is not gradually being used up as the script runs. I added close_fds=True to the Popen call but this made no difference - the script still crashed with the same error. Suggested here and here . I checked the rlimits which showed (-1, -1) on both RLIMIT_DATA and RLIMIT_AS as suggested here . An article suggested that having no swap space might be the cause but swap is actually available on demand (according to the web host) and this was also suggested as a bogus cause here . 
The processes are being closed because that is the behaviour of using .communicate() as backed up by the Python source code and comments here . The entire checks can be found at on GitHub here with the getProcesses function defined from line 442. This is called by doChecks() starting at line 520. The script was run with strace with the following output before the crash: recv(4, ""Total Accesses: 516662\nTotal kBy""..., 234, 0) = 234 gettimeofday({1250893252, 887805}, NULL) = 0 write(3, ""2009-08-21 17:20:52,887 - checks""..., 91) = 91 gettimeofday({1250893252, 888362}, NULL) = 0 write(3, ""2009-08-21 17:20:52,888 - checks""..., 74) = 74 gettimeofday({1250893252, 888897}, NULL) = 0 write(3, ""2009-08-21 17:20:52,888 - checks""..., 67) = 67 gettimeofday({1250893252, 889184}, NULL) = 0 write(3, ""2009-08-21 17:20:52,889 - checks""..., 81) = 81 close(4) = 0 gettimeofday({1250893252, 889591}, NULL) = 0 write(3, ""2009-08-21 17:20:52,889 - checks""..., 63) = 63 pipe([4, 5]) = 0 pipe([6, 7]) = 0 fcntl64(7, F_GETFD) = 0 fcntl64(7, F_SETFD, FD_CLOEXEC) = 0 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xb7f12708) = -1 ENOMEM (Cannot allocate memory) write(2, ""Traceback (most recent call last""..., 35) = 35 open(""/usr/bin/sd-agent/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory) open(""/usr/bin/sd-agent/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory) open(""/usr/lib/python24.zip/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/plat-linux2/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory) open(""/usr/lib/python2.4/lib-tk/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/lib-dynload/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) 
open(""/usr/lib/python2.4/site-packages/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) write(2, "" File \""/usr/bin/sd-agent/agent.""..., 52) = 52 open(""/home/admin/sd-agent/daemon.py"", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory) open(""/usr/bin/sd-agent/daemon.py"", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory) open(""/usr/lib/python24.zip/daemon.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/daemon.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/plat-linux2/daemon.py"", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory) open(""/usr/lib/python2.4/lib-tk/daemon.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/lib-dynload/daemon.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/site-packages/daemon.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) write(2, "" File \""/home/admin/sd-agent/dae""..., 60) = 60 open(""/usr/bin/sd-agent/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory) open(""/usr/bin/sd-agent/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory) open(""/usr/lib/python24.zip/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/plat-linux2/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory) open(""/usr/lib/python2.4/lib-tk/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/lib-dynload/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/site-packages/agent.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) write(2, "" File \""/usr/bin/sd-agent/agent.""..., 54) = 54 
open(""/usr/lib/python2.4/sched.py"", O_RDONLY|O_LARGEFILE) = 8 write(2, "" File \""/usr/lib/python2.4/sched""..., 55) = 55 fstat64(8, {st_mode=S_IFREG|0644, st_size=4054, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7d28000 read(8, ""\""\""\""A generally useful event sche""..., 4096) = 4054 write(2, "" "", 4) = 4 write(2, ""void = action(*argument)\n"", 25) = 25 close(8) = 0 munmap(0xb7d28000, 4096) = 0 open(""/usr/bin/sd-agent/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory) open(""/usr/bin/sd-agent/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory) open(""/usr/lib/python24.zip/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/plat-linux2/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory) open(""/usr/lib/python2.4/lib-tk/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/lib-dynload/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/site-packages/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) write(2, "" File \""/usr/bin/sd-agent/checks""..., 60) = 60 open(""/usr/bin/sd-agent/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory) open(""/usr/bin/sd-agent/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory) open(""/usr/lib/python24.zip/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/plat-linux2/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOMEM (Cannot allocate memory) open(""/usr/lib/python2.4/lib-tk/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) 
open(""/usr/lib/python2.4/lib-dynload/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) open(""/usr/lib/python2.4/site-packages/checks.py"", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory) write(2, "" File \""/usr/bin/sd-agent/checks""..., 64) = 64 open(""/usr/lib/python2.4/subprocess.py"", O_RDONLY|O_LARGEFILE) = 8 write(2, "" File \""/usr/lib/python2.4/subpr""..., 65) = 65 fstat64(8, {st_mode=S_IFREG|0644, st_size=39931, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7d28000 read(8, ""# subprocess - Subprocesses with""..., 4096) = 4096 read(8, ""lso, the newlines attribute of t""..., 4096) = 4096 read(8, ""code < 0:\n print >>sys.st""..., 4096) = 4096 read(8, ""alse does not exist on 2.2.0\ntry""..., 4096) = 4096 read(8, "" p2cread\n # c2pread <-""..., 4096) = 4096 write(2, "" "", 4) = 4 write(2, ""errread, errwrite)\n"", 19) = 19 close(8) = 0 munmap(0xb7d28000, 4096) = 0 open(""/usr/lib/python2.4/subprocess.py"", O_RDONLY|O_LARGEFILE) = 8 write(2, "" File \""/usr/lib/python2.4/subpr""..., 71) = 71 fstat64(8, {st_mode=S_IFREG|0644, st_size=39931, ...}) = 0 mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7d28000 read(8, ""# subprocess - Subprocesses with""..., 4096) = 4096 read(8, ""lso, the newlines attribute of t""..., 4096) = 4096 read(8, ""code < 0:\n print >>sys.st""..., 4096) = 4096 read(8, ""alse does not exist on 2.2.0\ntry""..., 4096) = 4096 read(8, "" p2cread\n # c2pread <-""..., 4096) = 4096 read(8, ""table(self, handle):\n ""..., 4096) = 4096 read(8, ""rrno using _sys_errlist (or siml""..., 4096) = 4096 read(8, "" p2cwrite = None, None\n ""..., 4096) = 4096 write(2, "" "", 4) = 4 write(2, ""self.pid = os.fork()\n"", 21) = 21 close(8) = 0 munmap(0xb7d28000, 4096) = 0 write(2, ""OSError"", 7) = 7 write(2, "": "", 2) = 2 write(2, ""[Errno 12] Cannot allocate memor""..., 33) = 33 write(2, ""\n"", 1) = 1 unlink(""/var/run/sd-agent.pid"") = 0 
close(3) = 0 munmap(0xb7e0d000, 4096) = 0 rt_sigaction(SIGINT, {SIG_DFL, [], SA_RESTORER, 0x589978}, {0xb89a60, [], SA_RESTORER, 0x589978}, 8) = 0 brk(0xa022000) = 0xa022000 exit_group(1) = ?","python, linux, memory",135,181093,8,https://stackoverflow.com/questions/1367373/python-subprocess-popen-oserror-errno-12-cannot-allocate-memory 29396928,error retrieving current directory: getcwd: cannot access parent directories,"I have a simple script: #!/bin/bash for server in $(~/.ansible/ansible_hosts) do ssh $server ""hostname; readlink /opt/mydir/mylink;"" done It works fine - the program returns the correct hostname and link - except that I get the following error on some but not all of the servers: shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory All the directories exist. One of the most common suggestions has been to add a cd, a cd -, or a cd /. All that happens when that step is added is an additional: chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory I tried kickstarting the nfs daemon on the off chance that there was some confusion about my homedir and substituted /etc/init.d in case the problem was with /opt. No difference. This would simply be an annoyance except that when I try to use an ansible playbook instead of a simple ssh command it fails for that server. Any insights would be appreciated.","error retrieving current directory: getcwd: cannot access parent directories I have a simple script: #!/bin/bash for server in $(~/.ansible/ansible_hosts) do ssh $server ""hostname; readlink /opt/mydir/mylink;"" done It works fine - the program returns the correct hostname and link - except that I get the following error on some but not all of the servers: shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory All the directories exist. 
One of the most common suggestions has been to add a cd, a cd -, or a cd /. All that happens when that step is added is an additional: chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory I tried kickstarting the nfs daemon on the off chance that there was some confusion about my homedir and substituted /etc/init.d in case the problem was with /opt. No difference. This would simply be an annoyance except that when I try to use an ansible playbook instead of a simple ssh command it fails for that server. Any insights would be appreciated.","linux, bash, shell, ssh, ansible",127,269238,9,https://stackoverflow.com/questions/29396928/error-retrieving-current-directory-getcwd-cannot-access-parent-directories 9422461,Check if directory mounted with bash,"I am using mount -o bind /some/directory/here /foo/bar I want to check /foo/bar though with a bash script, and see if it's been mounted. If not, then call the above mount command, else do something else. How can I do this? CentOS is the operating system.","Check if directory mounted with bash I am using mount -o bind /some/directory/here /foo/bar I want to check /foo/bar though with a bash script, and see if it's been mounted. If not, then call the above mount command, else do something else. How can I do this? CentOS is the operating system.","linux, bash, centos, mount",122,199629,8,https://stackoverflow.com/questions/9422461/check-if-directory-mounted-with-bash 2960339,Unable to install pyodbc on Linux,"I am running Linux (2.6.18-164.15.1.el5.centos.plus) and trying to install pyodbc. 
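On the getcwd error above: it usually means the shell's working directory was deleted and recreated behind it (a stale NFS remount does exactly this), so the shell still holds the old, now-unlinked inode. Re-entering the directory by pathname attaches the session to the new inode. A minimal reproduction and fix (a sketch):

```shell
# Reproduce: be "in" a directory whose inode gets replaced
d=$(mktemp -d)
cd "$d"
rmdir "$d"       # our cwd's inode disappears; getcwd now fails
mkdir "$d"       # same pathname, brand-new inode
# Fix: cd by pathname to pick up the new inode
cd "$PWD"
pwd -P           # resolves again
```

A plain `cd .` or `cd -` does not help because both re-resolve relative to the dead inode; the pathname-based `cd "$PWD"` (or logging out and back in) is what repairs it.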
I am doing pip install pyodbc and get a very long list of errors, which end in error: command 'gcc' failed with exit status 1 I looked in /root/.pip/pip.log and saw the following: InstallationError: Command /usr/local/bin/python -c ""import setuptools; file ='/home/build/pyodbc/setup.py'; execfile('/home/build/pyodbc/setup.py')"" install --single-version-externally-managed --record /tmp/pip-7MS9Vu-record/install-record.txt failed with error code 1 Has anybody had a similar issue installing pyodbc?","Unable to install pyodbc on Linux I am running Linux (2.6.18-164.15.1.el5.centos.plus) and trying to install pyodbc. I am doing pip install pyodbc and get a very long list of errors, which end in error: command 'gcc' failed with exit status 1 I looked in /root/.pip/pip.log and saw the following: InstallationError: Command /usr/local/bin/python -c ""import setuptools; file ='/home/build/pyodbc/setup.py'; execfile('/home/build/pyodbc/setup.py')"" install --single-version-externally-managed --record /tmp/pip-7MS9Vu-record/install-record.txt failed with error code 1 Has anybody had a similar issue installing pyodbc?","python, linux, centos, pyodbc",119,180320,20,https://stackoverflow.com/questions/2960339/unable-to-install-pyodbc-on-linux 36918387,How to free up space on docker devmapper and CentOS7?,I am learning docker and I am using v1.11.0 I am trying to install hadoop but devmapper is complaining about free disk space? devmapper: Thin Pool has 82984 free data blocks which is less than minimum required 163840 free data blocks. 
Create more free space in thin pool or use dm.min_free_space option to change behavior I have deleted all my images but the problem persists: [root@localhost hadoop_docker]# docker images REPOSITORY TAG IMAGE ID CREATED SIZE debian latest 47af6ca8a14a 3 weeks ago 125 MB [root@localhost hadoop_docker]# and this is my disk configuration: [root@localhost ~]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 8G 0 disk ├─sda1 8:1 0 500M 0 part /boot └─sda2 8:2 0 7.5G 0 part ├─centos-root 253:0 0 6.7G 0 lvm / └─centos-swap 253:1 0 820M 0 lvm [SWAP] sr0 11:0 1 1024M 0 rom loop0 7:0 0 100G 0 loop └─docker-253:0-844682-pool 253:2 0 100G 0 dm loop1 7:1 0 2G 0 loop └─docker-253:0-844682-pool 253:2 0 100G 0 dm How can I free up the disk space?,How to free up space on docker devmapper and CentOS7? I am learning docker and I am using v1.11.0 I am trying to install hadoop but devmapper is complaining about free disk space? devmapper: Thin Pool has 82984 free data blocks which is less than minimum required 163840 free data blocks. 
Create more free space in thin pool or use dm.min_free_space option to change behavior I have deleted all my images but the problem persists: [root@localhost hadoop_docker]# docker images REPOSITORY TAG IMAGE ID CREATED SIZE debian latest 47af6ca8a14a 3 weeks ago 125 MB [root@localhost hadoop_docker]# and this is my disk configuration: [root@localhost ~]# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 8G 0 disk ├─sda1 8:1 0 500M 0 part /boot └─sda2 8:2 0 7.5G 0 part ├─centos-root 253:0 0 6.7G 0 lvm / └─centos-swap 253:1 0 820M 0 lvm [SWAP] sr0 11:0 1 1024M 0 rom loop0 7:0 0 100G 0 loop └─docker-253:0-844682-pool 253:2 0 100G 0 dm loop1 7:1 0 2G 0 loop └─docker-253:0-844682-pool 253:2 0 100G 0 dm How can I free up the disk space?,"linux, docker",118,100568,2,https://stackoverflow.com/questions/36918387/how-to-free-up-space-on-docker-devmapper-and-centos7 6265595,How can I perform a git pull without re-entering my SSH password?,"Is it possible to configure git/ssh so I don't have to enter my passphrase every time I want to perform a git pull ? Note that the repo is a private one on github. Or, alternatively, what would be the best practice to automate code deployment from a private Github repo? Additional details: EC2 instance running a public AMI based on Fedora.","How can I perform a git pull without re-entering my SSH password? Is it possible to configure git/ssh so I don't have to enter my passphrase every time I want to perform a git pull ? Note that the repo is a private one on github. Or, alternatively, what would be the best practice to automate code deployment from a private Github repo? 
Additional details: EC2 instance running a public AMI based on Fedora.","linux, git, ssh, github",118,136706,7,https://stackoverflow.com/questions/6265595/how-can-i-perform-a-git-pull-without-re-entering-my-ssh-password 886237,How can I randomize the lines in a file using standard tools on Red Hat Linux?,"How can I randomize the lines in a file using standard tools on Red Hat Linux? I don't have the shuf command, so I am looking for something like a perl or awk one-liner that accomplishes the same task.","How can I randomize the lines in a file using standard tools on Red Hat Linux? How can I randomize the lines in a file using standard tools on Red Hat Linux? I don't have the shuf command, so I am looking for something like a perl or awk one-liner that accomplishes the same task.","linux, file, random, redhat, shuffle",116,86440,11,https://stackoverflow.com/questions/886237/how-can-i-randomize-the-lines-in-a-file-using-standard-tools-on-red-hat-linux 16809134,How to get a list of programs running with nohup,"I am accessing a server running CentOS (linux distribution) with an SSH connection. Since I can't always stay logged in, I use ""nohup [command] &"" to run my programs. I couldn't find how to get a list of all the programs I started using nohup. ""jobs"" only works out before I log out. After that, if I log back again, the jobs command shows me nothing, but I can see in my log files that my programs are still running. Is there a way to get a list of all the programs that I started using ""nohup"" ?","How to get a list of programs running with nohup I am accessing a server running CentOS (linux distribution) with an SSH connection. Since I can't always stay logged in, I use ""nohup [command] &"" to run my programs. I couldn't find how to get a list of all the programs I started using nohup. ""jobs"" only works out before I log out. 
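For the shuffle-without-shuf question above, a decorate-sort-undecorate pipeline using only standard awk, sort, and cut works (a sketch; `srand()` seeds from the clock, and `input.txt` is a placeholder filename):

```shell
# Shuffle lines: prefix each line with a random key, sort on it, strip it
awk 'BEGIN { srand() } { printf "%.6f\t%s\n", rand(), $0 }' input.txt \
  | sort -k1,1 -n | cut -f2-
```

The output is the same set of lines in a random order; the tab-separated random key never reaches the final output because `cut -f2-` drops the first field.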
After that, if I log back again, the jobs command shows me nothing, but I can see in my log files that my programs are still running. Is there a way to get a list of all the programs that I started using ""nohup""?","linux, shell, centos, nohup",115,242261,7,https://stackoverflow.com/questions/16809134/how-to-get-a-list-of-programs-running-with-nohup 36496911,Run an Ansible task only when the variable contains a specific string,"I have multiple tasks that depend on the value of variable1. I want to check if the value is in {{ variable1 }} but I get an error: - name: do something when the value in variable1 command: when: ""'value' in {{ variable1 }}"" I'm using ansible 2.0.2","Run an Ansible task only when the variable contains a specific string I have multiple tasks that depend on the value of variable1. I want to check if the value is in {{ variable1 }} but I get an error: - name: do something when the value in variable1 command: when: ""'value' in {{ variable1 }}"" I'm using ansible 2.0.2","linux, ansible, conditional-statements, ansible-2.x",112,379851,10,https://stackoverflow.com/questions/36496911/run-an-ansible-task-only-when-the-variable-contains-a-specific-string 30464980,How to check all versions of Python installed on OS X and CentOS,I just started setting up a CentOS server today and noticed that the default version of Python on CentOS is set to 2.6.6. 
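On the nohup question above: nohup keeps no registry of what it launched, so once the login shell's job table is gone the usual approach is to search the process table by command name or by user (a sketch; `my_script` is a placeholder for the program's name):

```shell
# Match by command name; the [m] bracket trick keeps the grep process
# itself from matching its own command line
ps -ef | grep '[m]y_script'
# Or list the current user's processes with PID, start time, and command
ps -u "$(id -un)" -o pid,stime,cmd
```

Processes started under nohup also typically show `?` in the TTY column once the launching terminal has closed, which can be another filter.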
I want to use Python 2.7 instead. I googled around and found that 2.6.6 is used by system tools such as YUM so I should not tamper with it. Then I opened up a terminal on my Mac and found that I had Python 2.6.8 and 2.7.5 and 3.3.3 installed. Sorry for the long story. In short I just want to know how to look up all the versions of Python installed on CentOS so I don't accidentally install it twice.,"python, linux, macos, centos, version",111,428400,15,https://stackoverflow.com/questions/30464980/how-to-check-all-versions-of-python-installed-on-os-x-and-centos 5952467,How to specify an editor to open crontab file? "export EDITOR=vi" does not work,"I'm using Red Hat Enterprise Linux 5, and I want to set the vim editor to edit the crontab file. If I run echo $EDITOR , I get vim. But when I run crontab -e , I get a different editor.","How to specify an editor to open crontab file? "export EDITOR=vi" does not work I'm using Red Hat Enterprise Linux 5, and I want to set the vim editor to edit the crontab file. If I run echo $EDITOR , I get vim. But when I run crontab -e , I get a different editor.","linux, vim",109,148988,8,https://stackoverflow.com/questions/5952467/how-to-specify-a-editor-to-open-crontab-file-export-editor-vi-does-not-work 1423346,How do I extract a single chunk of bytes from within a file?,"On a Linux desktop (RHEL4) I want to extract a range of bytes (typically less than 1000) from within a large file (>1 Gig). I know the offset into the file and the size of the chunk. I can write code to do this but is there a command line solution? 
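To enumerate the Python interpreters on a machine (the earlier OS X/CentOS question), two quick checks cover most installs: list the standard install prefixes, and ask the shell which `python*` commands it can see (a sketch; exact paths vary by system):

```shell
# Interpreters dropped into the usual prefixes
ls -1 /usr/bin/python* /usr/local/bin/python* 2>/dev/null
# Every python* command visible on $PATH (compgen is a bash builtin)
compgen -c python | sort -u
```

Running each candidate with `--version` then disambiguates symlinks that point at the same binary.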
Ideally, something like: magicprogram --offset 102567 --size 253 < input.binary > output.binary","linux, file, split",108,80661,7,https://stackoverflow.com/questions/1423346/how-do-i-extract-a-single-chunk-of-bytes-from-within-a-file 18068358,Can't su to user jenkins after installing Jenkins,"I've installed jenkins and I'm trying to get into a shell as Jenkins to add an ssh key. I can't seem to su into the jenkins user: [root@pacmandev /]# sudo su jenkins [root@pacmandev /]# whoami root [root@pacmandev /]# echo $USER root [root@pacmandev /]# The jenkins user exists in my /etc/passwd file. Running su jenkins asks for a password, but rejects my normal password. sudo su jenkins doesn't seem to do anything; same for sudo su - jenkins . I'm on CentOS.","Can't su to user jenkins after installing Jenkins I've installed jenkins and I'm trying to get into a shell as Jenkins to add an ssh key. I can't seem to su into the jenkins user: [root@pacmandev /]# sudo su jenkins [root@pacmandev /]# whoami root [root@pacmandev /]# echo $USER root [root@pacmandev /]# The jenkins user exists in my /etc/passwd file. Running su jenkins asks for a password, but rejects my normal password. sudo su jenkins doesn't seem to do anything; same for sudo su - jenkins . I'm on CentOS.","linux, unix, jenkins",107,131725,6,https://stackoverflow.com/questions/18068358/cant-su-to-user-jenkins-after-installing-jenkins 18880024,Start ssh-agent on login,"I have a site as a remote Git repository pulling from Bitbucket using an SSH alias. I can manually start the ssh-agent on my server, but I have to do this every time I log in via SSH. I manually start the ssh-agent : eval ssh-agent $SHELL Then I add the agent: ssh-add ~/.ssh/bitbucket_id Then it shows up when I do: ssh-add -l And I'm good to go. Is there a way to automate this process, so I don't have to do it every time I log in?
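A hedged sketch for the byte-extraction question: plain `dd` can play the role of the hypothetical `magicprogram`. `bs=1` keeps the offset/size arithmetic exact (one block = one byte), at the cost of speed; the demo below uses a tiny stand-in file so the result is easy to check.

```shell
# Stand-in for the >1 GB file:
printf 'abcdefghijklmnop' > input.binary

# Extract 4 bytes starting at offset 3 (bytes d, e, f, g):
dd if=input.binary of=output.binary bs=1 skip=3 count=4 2>/dev/null
cat output.binary   # -> defg

# The question's real case would be:
#   dd if=big.bin of=chunk.bin bs=1 skip=102567 count=253
# GNU dd can keep a large block size while still counting in bytes:
#   dd if=big.bin of=chunk.bin bs=64k skip=102567 count=253 iflag=skip_bytes,count_bytes
```

`iflag=skip_bytes,count_bytes` needs GNU coreutils; on older systems stick with `bs=1`, or combine a large-`bs` `skip` with a small-`bs` second pass.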
The server is running Red Hat 6.2 (Santiago).","Start ssh-agent on login I have a site as a remote Git repository pulling from Bitbucket using an SSH alias. I can manually start the ssh-agent on my server, but I have to do this every time I log in via SSH. I manually start the ssh-agent : eval ssh-agent $SHELL Then I add the agent: ssh-add ~/.ssh/bitbucket_id Then it shows up when I do: ssh-add -l And I'm good to go. Is there a way to automate this process, so I don't have to do it every time I log in? The server is running Red Hat 6.2 (Santiago).","git, ssh, bitbucket, redhat, ssh-agent",445,506259,12,https://stackoverflow.com/questions/18880024/start-ssh-agent-on-login 11213520,Yum crashed with Keyboard Interrupt error,"I installed the newer version of python (3.2.3) than the one available in Fedora16 (python2.7) And now yum stops working. It shows the following error. [root@localhost yum-3.4.3]# yum File ""/usr/bin/yum"", line 30 except KeyboardInterrupt, e: ^ SyntaxError: invalid syntax Please advice as how to resolve the error. It would be helpful as I am not able to update or install any package.","Yum crashed with Keyboard Interrupt error I installed the newer version of python (3.2.3) than the one available in Fedora16 (python2.7) And now yum stops working. It shows the following error. [root@localhost yum-3.4.3]# yum File ""/usr/bin/yum"", line 30 except KeyboardInterrupt, e: ^ SyntaxError: invalid syntax Please advice as how to resolve the error. It would be helpful as I am not able to update or install any package.","python-3.x, redhat, yum, fedora16",99,134313,7,https://stackoverflow.com/questions/11213520/yum-crashed-with-keyboard-interrupt-error 25751030,How to get only the process ID for a specified process name on Linux?,"How to get only the process ID for a specified process name on Linux? ps -ef|grep java test 31372 31265 0 13:41 pts/1 00:00:00 grep java Based on the process id I will write some logic. 
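A hedged sketch for the ssh-agent question: a fragment for `~/.bash_profile` that reuses one agent across logins by saving its environment to a file. The env-file path is this example's choice, and `~/.ssh/bitbucket_id` is the key name from the question; this is login-shell configuration, not a standalone script.

```shell
# ~/.bash_profile fragment (sketch): reuse a saved agent if it still answers.
agent_env=~/.ssh/agent.env
[ -f "$agent_env" ] && . "$agent_env" >/dev/null

ssh-add -l >/dev/null 2>&1
if [ $? -ge 2 ]; then                 # exit >= 2 means no agent is reachable
    ssh-agent -s > "$agent_env"       # start one and remember its env vars
    . "$agent_env" >/dev/null
    ssh-add ~/.ssh/bitbucket_id
fi
```

`ssh-add -l` exits 0 when keys are loaded, 1 when the agent is running but empty, and 2 when no agent can be contacted, which is why the guard only starts a new agent on exit status 2 or higher.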
So how do I get only the process id for a specific process name. Sample program: PIDS= ps -ef|grep java if [ -z ""$PIDS"" ]; then echo ""nothing"" else mail test@domain.example fi","How to get only the process ID for a specified process name on Linux? How to get only the process ID for a specified process name on Linux? ps -ef|grep java test 31372 31265 0 13:41 pts/1 00:00:00 grep java Based on the process id I will write some logic. So how do I get only the process id for a specific process name. Sample program: PIDS= ps -ef|grep java if [ -z ""$PIDS"" ]; then echo ""nothing"" else mail test@domain.example fi","regex, linux, shell, redhat",97,142584,5,https://stackoverflow.com/questions/25751030/how-to-get-only-the-process-id-for-a-specified-process-name-on-linux 46008624,How to fix: fatal error: openssl/opensslv.h: No such file or directory in RedHat 7,"I have RedHat Enterprise Linux Server 7, and I downloaded the linux kernel version 4.12.10 which I am trying to compile but when I execute the following command: make modules I get the following error: scripts/sign-file.c:25:30: fatal error: openssl/opensslv.h: No such file or directory Does anyone have an idea to fix this please ?","How to fix: fatal error: openssl/opensslv.h: No such file or directory in RedHat 7 I have RedHat Enterprise Linux Server 7, and I downloaded the linux kernel version 4.12.10 which I am trying to compile but when I execute the following command: make modules I get the following error: scripts/sign-file.c:25:30: fatal error: openssl/opensslv.h: No such file or directory Does anyone have an idea to fix this please ?","module, linux-kernel, openssl, redhat",89,230256,4,https://stackoverflow.com/questions/46008624/how-to-fix-fatal-error-openssl-opensslv-h-no-such-file-or-directory-in-redhat 3241086,How to schedule to run first Sunday of every month,I am using Bash on RedHat. I need to schedule a cron job to run at at 9:00 AM on first Sunday of every month. 
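A hedged sketch for the "only the PID" question: `pgrep` prints just PIDs and avoids the classic pitfall visible in the question's own output, where `ps -ef | grep java` matches the grep process itself. The mail address is the question's placeholder, kept as a comment.

```shell
# -x matches the process name exactly; prints one PID per line, or nothing.
PIDS=$(pgrep -x java)

if [ -z "$PIDS" ]; then
    echo "nothing"
else
    echo "java running as PID(s): $PIDS"
    # mail test@domain.example
fi
```

If the Java process only shows up via its arguments (e.g. `java -jar app.jar`), use `pgrep -f java` to match the full command line instead.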
How can I do this?,How to schedule to run first Sunday of every month I am using Bash on RedHat. I need to schedule a cron job to run at at 9:00 AM on first Sunday of every month. How can I do this?,"bash, shell, cron, redhat",81,145076,11,https://stackoverflow.com/questions/3241086/how-to-schedule-to-run-first-sunday-of-every-month 119390,Specify the from user when sending email using the mail command,Does anyone know how to change the from user when sending email using the mail command? I have looked through the man page and can not see how to do this. We are running Redhat Linux 5.,Specify the from user when sending email using the mail command Does anyone know how to change the from user when sending email using the mail command? I have looked through the man page and can not see how to do this. We are running Redhat Linux 5.,"linux, email, redhat",75,236135,15,https://stackoverflow.com/questions/119390/specify-the-from-user-when-sending-email-using-the-mail-command 17618737,Linux free shows high memory usage but top does not,"On RedHat Linux 6.2 I'm running free -m and it shows nearly all 8GB used total used free shared buffers cached Mem: 7989 7734 254 0 28 7128 -/+ buffers/cache: 578 7411 Swap: 4150 0 4150 But at the same time in top -M I cannot see any processes using all this memory: top - 16:03:34 up 4:10, 2 users, load average: 0.08, 0.04, 0.01 Tasks: 169 total, 1 running, 163 sleeping, 5 stopped, 0 zombie Cpu(s): 0.7%us, 0.3%sy, 0.0%ni, 98.6%id, 0.4%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 7989.539M total, 7721.570M used, 267.969M free, 28.633M buffers Swap: 4150.992M total, 0.000k used, 4150.992M free, 7115.312M cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1863 sroot 20 0 398m 24m 9.8m S 0.3 0.3 3:12.87 App1 1 sroot 20 0 2864 1392 1180 S 0.0 0.0 0:00.91 init 2 sroot 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd 3 sroot RT 0 0 0 0 S 0.0 0.0 0:00.07 migration/0 4 sroot 20 0 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0 5 sroot RT 0 0 0 0 S 0.0 0.0 0:00.00 
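A hedged sketch for the first-Sunday cron question: cron cannot express "first Sunday of the month" directly, so the usual trick is to fire every Sunday at 09:00 and let a guard on the day of month (1 through 7) decide whether to proceed. `first_sunday` and `/path/to/job.sh` are names invented for this example.

```shell
# crontab entry (illustrative; note that % must be escaped as \% in crontab):
#   0 9 * * 0 [ "$(date +\%d)" -le 7 ] && /path/to/job.sh

# The same guard as a testable function:
first_sunday() {
    day=${1:-$(date +%d)}      # allow an explicit day for testing
    day=${day#0}               # strip a leading zero: "07" -> "7"
    [ "$day" -le 7 ]
}

first_sunday && echo "within the first 7 days" || echo "past the first week"
```

The guard alone only checks the day of month; combined with cron's `* * 0` (Sunday) field it runs exactly on the first Sunday.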
migration/0 6 sroot RT 0 0 0 0 S 0.0 0.0 0:00.00 watchdog/0 7 sroot RT 0 0 0 0 S 0.0 0.0 0:00.08 migration/1 8 sroot RT 0 0 0 0 S 0.0 0.0 0:00.00 migration/1 I also tried this ps mem script but it onlt shows about 400MB memory being used.","Linux free shows high memory usage but top does not On RedHat Linux 6.2 I'm running free -m and it shows nearly all 8GB used total used free shared buffers cached Mem: 7989 7734 254 0 28 7128 -/+ buffers/cache: 578 7411 Swap: 4150 0 4150 But at the same time in top -M I cannot see any processes using all this memory: top - 16:03:34 up 4:10, 2 users, load average: 0.08, 0.04, 0.01 Tasks: 169 total, 1 running, 163 sleeping, 5 stopped, 0 zombie Cpu(s): 0.7%us, 0.3%sy, 0.0%ni, 98.6%id, 0.4%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 7989.539M total, 7721.570M used, 267.969M free, 28.633M buffers Swap: 4150.992M total, 0.000k used, 4150.992M free, 7115.312M cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1863 sroot 20 0 398m 24m 9.8m S 0.3 0.3 3:12.87 App1 1 sroot 20 0 2864 1392 1180 S 0.0 0.0 0:00.91 init 2 sroot 20 0 0 0 0 S 0.0 0.0 0:00.00 kthreadd 3 sroot RT 0 0 0 0 S 0.0 0.0 0:00.07 migration/0 4 sroot 20 0 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0 5 sroot RT 0 0 0 0 S 0.0 0.0 0:00.00 migration/0 6 sroot RT 0 0 0 0 S 0.0 0.0 0:00.00 watchdog/0 7 sroot RT 0 0 0 0 S 0.0 0.0 0:00.08 migration/1 8 sroot RT 0 0 0 0 S 0.0 0.0 0:00.00 migration/1 I also tried this ps mem script but it onlt shows about 400MB memory being used.","linux, memory-management, process, redhat, free-command",68,84111,3,https://stackoverflow.com/questions/17618737/linux-free-shows-high-memory-usage-but-top-does-not 70926799,CentOS through a VM - no URLs in mirrorlist,"I am trying to run a CentOS 8 server through VirtualBox (6.1.30) ( Vagrant ), which worked just fine yesterday for me, but today I tried running a sudo yum update . 
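A hedged sketch for the free-vs-top question: most of that "used" memory is the kernel's page cache (the 7128 MB "cached" column in the question), which is reclaimed on demand and never appears under any process in top. The application figure is the "-/+ buffers/cache" line (578 MB here), which matches the ~400 MB the ps_mem script reported. The same numbers can be pulled straight from /proc/meminfo:

```shell
# Split total memory into application use, reclaimable buffers/cache,
# and truly free (values in /proc/meminfo are in kB):
awk '
    /^MemTotal:/ { total = $2 }
    /^MemFree:/  { free  = $2 }
    /^Buffers:/  { buf   = $2 }
    /^Cached:/   { cache = $2 }
    END {
        printf "apps: %d MB, buffers+cache: %d MB, truly free: %d MB\n",
               (total - free - buf - cache) / 1024,
               (buf + cache) / 1024,
               free / 1024
    }
' /proc/meminfo
```

Modern `free` prints an "available" column that does this arithmetic (and more) for you; on RHEL 6 the "-/+ buffers/cache" line is the equivalent.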
I keep getting this error for some reason: [vagrant@192.168.38.4] ~ >> sudo yum update CentOS Linux 8 - AppStream 71 B/s | 38 B 00:00 Error: Failed to download metadata for repo 'appstream': Cannot prepare internal mirrorlist: No URLs in mirrorlist I already tried to change the namespaces on /etc/resolve.conf , remove the DNF folders and everything. On other computers, this works just fine, so I think the problem is with my host machine. I also tried to reset the network settings (I am on a Windows 10 host), without success either. It's not a DNS problem; it works just fine. After I reinstalled Windows, I still have the same error in my VM. File dnf.log : 2022-01-31T15:28:03+0000 INFO --- logging initialized --- 2022-01-31T15:28:03+0000 DDEBUG timer: config: 2 ms 2022-01-31T15:28:03+0000 DEBUG Loaded plugins: builddep, changelog, config-manager, copr, debug, debuginfo-install, download, generate_completion_cache, groups-manager, needs-restarting, playground, repoclosure, repodiff, repograph, repomanage, reposync 2022-01-31T15:28:03+0000 DEBUG YUM version: 4.4.2 2022-01-31T15:28:03+0000 DDEBUG Command: yum update 2022-01-31T15:28:03+0000 DDEBUG Installroot: / 2022-01-31T15:28:03+0000 DDEBUG Releasever: 8 2022-01-31T15:28:03+0000 DEBUG cachedir: /var/cache/dnf 2022-01-31T15:28:03+0000 DDEBUG Base command: update 2022-01-31T15:28:03+0000 DDEBUG Extra commands: ['update'] 2022-01-31T15:28:03+0000 DEBUG User-Agent: constructed: 'libdnf (CentOS Linux 8; generic; Linux.x86_64)' 2022-01-31T15:28:05+0000 DDEBUG Cleaning up. 
2022-01-31T15:28:05+0000 SUBDEBUG Traceback (most recent call last): File ""/usr/lib/python3.6/site-packages/dnf/repo.py"", line 574, in load ret = self._repo.load() File ""/usr/lib64/python3.6/site-packages/libdnf/repo.py"", line 397, in load return _repo.Repo_load(self) libdnf._error.Error: Failed to download metadata for repo 'appstream': Cannot prepare internal mirrorlist: No URLs in mirrorlist During handling of the above exception, another exception occurred: Traceback (most recent call last): File ""/usr/lib/python3.6/site-packages/dnf/cli/main.py"", line 67, in main return _main(base, args, cli_class, option_parser_class) File ""/usr/lib/python3.6/site-packages/dnf/cli/main.py"", line 106, in _main return cli_run(cli, base) File ""/usr/lib/python3.6/site-packages/dnf/cli/main.py"", line 122, in cli_run cli.run() File ""/usr/lib/python3.6/site-packages/dnf/cli/cli.py"", line 1050, in run self._process_demands() File ""/usr/lib/python3.6/site-packages/dnf/cli/cli.py"", line 740, in _process_demands load_available_repos=self.demands.available_repos) File ""/usr/lib/python3.6/site-packages/dnf/base.py"", line 394, in fill_sack self._add_repo_to_sack(r) File ""/usr/lib/python3.6/site-packages/dnf/base.py"", line 137, in _add_repo_to_sack repo.load() File ""/usr/lib/python3.6/site-packages/dnf/repo.py"", line 581, in load raise dnf.exceptions.RepoError(str(e)) dnf.exceptions.RepoError: Failed to download metadata for repo 'appstream': Cannot prepare internal mirrorlist: No URLs in mirrorlist 2022-01-31T15:28:05+0000 CRITICAL Error: Failed to download metadata for repo 'appstream': Cannot prepare internal mirrorlist: No URLs in mirrorlist","CentOS through a VM - no URLs in mirrorlist I am trying to run a CentOS 8 server through VirtualBox (6.1.30) ( Vagrant ), which worked just fine yesterday for me, but today I tried running a sudo yum update . 
I keep getting this error for some reason: [vagrant@192.168.38.4] ~ >> sudo yum update CentOS Linux 8 - AppStream 71 B/s | 38 B 00:00 Error: Failed to download metadata for repo 'appstream': Cannot prepare internal mirrorlist: No URLs in mirrorlist I already tried to change the namespaces on /etc/resolve.conf , remove the DNF folders and everything. On other computers, this works just fine, so I think the problem is with my host machine. I also tried to reset the network settings (I am on a Windows 10 host), without success either. It's not a DNS problem; it works just fine. After I reinstalled Windows, I still have the same error in my VM. File dnf.log : 2022-01-31T15:28:03+0000 INFO --- logging initialized --- 2022-01-31T15:28:03+0000 DDEBUG timer: config: 2 ms 2022-01-31T15:28:03+0000 DEBUG Loaded plugins: builddep, changelog, config-manager, copr, debug, debuginfo-install, download, generate_completion_cache, groups-manager, needs-restarting, playground, repoclosure, repodiff, repograph, repomanage, reposync 2022-01-31T15:28:03+0000 DEBUG YUM version: 4.4.2 2022-01-31T15:28:03+0000 DDEBUG Command: yum update 2022-01-31T15:28:03+0000 DDEBUG Installroot: / 2022-01-31T15:28:03+0000 DDEBUG Releasever: 8 2022-01-31T15:28:03+0000 DEBUG cachedir: /var/cache/dnf 2022-01-31T15:28:03+0000 DDEBUG Base command: update 2022-01-31T15:28:03+0000 DDEBUG Extra commands: ['update'] 2022-01-31T15:28:03+0000 DEBUG User-Agent: constructed: 'libdnf (CentOS Linux 8; generic; Linux.x86_64)' 2022-01-31T15:28:05+0000 DDEBUG Cleaning up. 
2022-01-31T15:28:05+0000 SUBDEBUG Traceback (most recent call last): File ""/usr/lib/python3.6/site-packages/dnf/repo.py"", line 574, in load ret = self._repo.load() File ""/usr/lib64/python3.6/site-packages/libdnf/repo.py"", line 397, in load return _repo.Repo_load(self) libdnf._error.Error: Failed to download metadata for repo 'appstream': Cannot prepare internal mirrorlist: No URLs in mirrorlist During handling of the above exception, another exception occurred: Traceback (most recent call last): File ""/usr/lib/python3.6/site-packages/dnf/cli/main.py"", line 67, in main return _main(base, args, cli_class, option_parser_class) File ""/usr/lib/python3.6/site-packages/dnf/cli/main.py"", line 106, in _main return cli_run(cli, base) File ""/usr/lib/python3.6/site-packages/dnf/cli/main.py"", line 122, in cli_run cli.run() File ""/usr/lib/python3.6/site-packages/dnf/cli/cli.py"", line 1050, in run self._process_demands() File ""/usr/lib/python3.6/site-packages/dnf/cli/cli.py"", line 740, in _process_demands load_available_repos=self.demands.available_repos) File ""/usr/lib/python3.6/site-packages/dnf/base.py"", line 394, in fill_sack self._add_repo_to_sack(r) File ""/usr/lib/python3.6/site-packages/dnf/base.py"", line 137, in _add_repo_to_sack repo.load() File ""/usr/lib/python3.6/site-packages/dnf/repo.py"", line 581, in load raise dnf.exceptions.RepoError(str(e)) dnf.exceptions.RepoError: Failed to download metadata for repo 'appstream': Cannot prepare internal mirrorlist: No URLs in mirrorlist 2022-01-31T15:28:05+0000 CRITICAL Error: Failed to download metadata for repo 'appstream': Cannot prepare internal mirrorlist: No URLs in mirrorlist","linux, centos, vagrant, virtualbox, redhat",67,90242,2,https://stackoverflow.com/questions/70926799/centos-through-a-vm-no-urls-in-mirrorlist 57796839,docker compose: Error while loading shared libraries: libz.so.1: failed to map segment from shared object: Operation not permitted,"After installing docker and docker-compose on: 
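A hedged sketch for the "No URLs in mirrorlist" question: CentOS Linux 8 reached end-of-life on 2022-01-31 (the very date in that dnf.log), after which mirrorlist.centos.org stopped answering for it, which is why the error appears on every machine regardless of host networking. The common fix is to point the repo files at the vault archive. Demonstrated here on a copy of a repo file; on the VM, run the same `sed` (with sudo) against `/etc/yum.repos.d/CentOS-Linux-*.repo` and then `yum clean all`.

```shell
# Build a sample repo file shaped like the stock CentOS 8 ones:
workdir=$(mktemp -d)
cat > "$workdir/CentOS-Linux-AppStream.repo" <<'EOF'
[appstream]
name=CentOS Linux 8 - AppStream
mirrorlist=http://mirrorlist.centos.org/?release=$releasever&repo=AppStream
#baseurl=http://mirror.centos.org/$contentdir/$releasever/AppStream/$basearch/os/
EOF

# Disable the dead mirrorlist and switch baseurl to the vault archive:
sed -i -e 's|^mirrorlist=|#mirrorlist=|' \
       -e 's|^#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|' \
       "$workdir"/CentOS-Linux-*.repo

cat "$workdir/CentOS-Linux-AppStream.repo"
```

Longer term, migrating the box to CentOS Stream 8/9 or a RHEL rebuild avoids pinning to a frozen archive.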
NAME=""Red Hat Enterprise Linux Server"" VERSION=""7.6 (Maipo)"" When executing: sudo docker-compose -version It returns: Error while loading shared libraries: libz.so.1: failed to map segment from shared object: Operation not permitted It should return: docker-compose version 1.25.0-rc2, build 661ac20e Installation from docker-compose is this","docker compose: Error while loading shared libraries: libz.so.1: failed to map segment from shared object: Operation not permitted After installing docker and docker-compose on: NAME=""Red Hat Enterprise Linux Server"" VERSION=""7.6 (Maipo)"" When executing: sudo docker-compose -version It returns: Error while loading shared libraries: libz.so.1: failed to map segment from shared object: Operation not permitted It should return: docker-compose version 1.25.0-rc2, build 661ac20e Installation from docker-compose is this","linux, docker, docker-compose, pyinstaller, redhat",65,109217,3,https://stackoverflow.com/questions/57796839/docker-compose-error-while-loading-shared-libraries-libz-so-1-failed-to-map-s 36651680,Click will abort further execution because Python 3 was configured to use ASCII as encoding for the environment,"I downloaded Quokka Python/Flask CMS to a CentOS7 server. Everything works fine with command sudo python3 manage.py runserver --host 0.0.0.0 --port 80 Then I create a file /etc/init.d/quokkacms. The file contains following code start() { echo -n ""Starting quokkacms: "" python3 /var/www/quokka/manage.py runserver --host 0.0.0.0 --port 80 touch /var/lock/subsys/quokkacms return 0 } stop() { echo -n ""Shutting down quokkacms: "" rm -f /var/lock/subsys/quokkacms return 0 } case ""$1"" in start) start ;; stop) stop ;; status) ;; restart) stop start ;; *) echo ""Usage: quokkacms {start|stop|status|restart}"" exit 1 ;; esac exit $? 
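A hedged sketch for the libz.so.1 question: the docker-compose binary is a PyInstaller one-file bundle (note the question's pyinstaller tag) that unpacks its shared libraries under `$TMPDIR` (default `/tmp`) and maps them executable; if `/tmp` is mounted `noexec`, that mmap fails with exactly "failed to map segment from shared object". The commands below only inspect and set the environment; the sudo/mount lines are left as comments for the real machine.

```shell
# Check whether /tmp carries the noexec flag:
mount | grep ' /tmp ' || true

# Workaround 1: point the unpacker at an exec-mounted scratch directory.
export TMPDIR=/var/tmp
# then run: sudo TMPDIR=/var/tmp docker-compose --version
# (sudo strips the environment, so TMPDIR must be passed through explicitly)

# Workaround 2 (lasts until reboot): remount /tmp with exec:
#   sudo mount -o remount,exec /tmp
```

If `/tmp` is deliberately noexec for hardening, the TMPDIR approach (or a pip-installed docker-compose, which is not a bundled binary) is the less invasive fix.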
But I get error when running sudo service quokkacms start RuntimeError: Click will abort further execution because Python 3 was configured to use ASCII as encoding for the environment. Either switch to Python 2 or consult [URL] for mitigation steps. It seems to me that it is the bash script. How come I get different results? Also I followed instructions in the link in the error message but still had no luck. [update] I had already tried the solution provided by Click before I posted this question. Check the results below (i run in root): [root@webserver quokka]# python3 Python 3.4.3 (default, Jan 26 2016, 02:25:35) [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] on linux Type ""help"", ""copyright"", ""credits"" or ""license"" for more information. >>> import locale >>> import codecs >>> print(locale.getpreferredencoding()) UTF-8 >>> print(codecs.lookup(locale.getpreferredencoding()).name) utf-8 >>> locale.getdefaultlocale() ('en_US', 'UTF-8') >>> locale.CODESET 14 >>>","Click will abort further execution because Python 3 was configured to use ASCII as encoding for the environment I downloaded Quokka Python/Flask CMS to a CentOS7 server. Everything works fine with command sudo python3 manage.py runserver --host 0.0.0.0 --port 80 Then I create a file /etc/init.d/quokkacms. The file contains following code start() { echo -n ""Starting quokkacms: "" python3 /var/www/quokka/manage.py runserver --host 0.0.0.0 --port 80 touch /var/lock/subsys/quokkacms return 0 } stop() { echo -n ""Shutting down quokkacms: "" rm -f /var/lock/subsys/quokkacms return 0 } case ""$1"" in start) start ;; stop) stop ;; status) ;; restart) stop start ;; *) echo ""Usage: quokkacms {start|stop|status|restart}"" exit 1 ;; esac exit $? But I get error when running sudo service quokkacms start RuntimeError: Click will abort further execution because Python 3 was configured to use ASCII as encoding for the environment. Either switch to Python 2 or consult [URL] for mitigation steps. 
It seems to me that it is the bash script. How come I get different results? Also I followed instructions in the link in the error message but still had no luck. [update] I had already tried the solution provided by Click before I posted this question. Check the results below (i run in root): [root@webserver quokka]# python3 Python 3.4.3 (default, Jan 26 2016, 02:25:35) [GCC 4.8.5 20150623 (Red Hat 4.8.5-4)] on linux Type ""help"", ""copyright"", ""credits"" or ""license"" for more information. >>> import locale >>> import codecs >>> print(locale.getpreferredencoding()) UTF-8 >>> print(codecs.lookup(locale.getpreferredencoding()).name) utf-8 >>> locale.getdefaultlocale() ('en_US', 'UTF-8') >>> locale.CODESET 14 >>>","python-3.x, centos, locale, redhat, python-click",65,62169,2,https://stackoverflow.com/questions/36651680/click-will-abort-further-execution-because-python-3-was-configured-to-use-ascii 40622162,Docker load and save: "archive/tar: invalid tar header","I'm trying to import a Docker image into Docker on AWS Red Hat Linux (3.10.0-514.el7.x86_64) and am having problems with the error; Error processing tar file(exit status 1): archive/tar: invalid tar header This same image works fine on my local machine, and in Boot2Docker on Windows also. It's quite large (2.5 GB), but I've verified the checksum on the Red Hat Linux instance, and it's the same as from the source. What could be wrong, or how I can resolve it?","Docker load and save: "archive/tar: invalid tar header" I'm trying to import a Docker image into Docker on AWS Red Hat Linux (3.10.0-514.el7.x86_64) and am having problems with the error; Error processing tar file(exit status 1): archive/tar: invalid tar header This same image works fine on my local machine, and in Boot2Docker on Windows also. It's quite large (2.5 GB), but I've verified the checksum on the Red Hat Linux instance, and it's the same as from the source. 
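A hedged sketch for the Click/locale question: `sudo service` launches the init script with a minimal environment, so Python 3 sees a POSIX/ASCII locale there even though the interactive shell (where the manual command worked) reports en_US.UTF-8. That is why the same command behaves differently in the two contexts. Exporting a UTF-8 locale inside the script's `start()` function, before launching `manage.py`, is the usual fix:

```shell
# Add near the top of start() in /etc/init.d/quokkacms:
export LC_ALL=C.UTF-8
export LANG=C.UTF-8

# Verify what Python will see under these variables:
python3 -c 'import locale; print(locale.getpreferredencoding())'
```

`C.UTF-8` is what Click's documentation suggests; if the glibc on CentOS 7 lacks it, `en_US.UTF-8` works as long as that locale is generated on the system.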
What could be wrong, or how I can resolve it?","linux, docker, redhat, tar, docker-image",58,77936,5,https://stackoverflow.com/questions/40622162/docker-load-and-save-archive-tar-invalid-tar-header 4632261,PIL /JPEG Library: "decoder jpeg not available","I tried to use PIL to do some JPEG work in my django app with PIL but I'm getting this IOError.. not sure what to do. """"decoder jpeg not available"""" Am I missing the JPEG decoder from my server? If so, how do I fix it?","PIL /JPEG Library: "decoder jpeg not available" I tried to use PIL to do some JPEG work in my django app with PIL but I'm getting this IOError.. not sure what to do. """"decoder jpeg not available"""" Am I missing the JPEG decoder from my server? If so, how do I fix it?","python, django, python-imaging-library, redhat, libjpeg",56,51880,8,https://stackoverflow.com/questions/4632261/pil-jpeg-library-decoder-jpeg-not-available 12076326,How to install maven on redhat linux,"Note: When originally posted I was trying to install maven2. Since the main answer is for maven3 I have updated the title. The rest of the question remains as it was originally posted. I'm trying to install maven2 on a redhat linux box using the command yum install maven2 but yum doesn't seem to be able to find maven2. No package maven2 available I've run across other posts about this topic, but the answer to the following post suggests to add repos. I add said repos, but run into errors after adding them. How to install Maven into Red Hat Enterprise Linux 6? I can only access this box via command line so simply downloading maven from their website is difficult for me.","How to install maven on redhat linux Note: When originally posted I was trying to install maven2. Since the main answer is for maven3 I have updated the title. The rest of the question remains as it was originally posted. I'm trying to install maven2 on a redhat linux box using the command yum install maven2 but yum doesn't seem to be able to find maven2. 
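A hedged sketch for the "invalid tar header" question: checking the archive with `tar` itself separates transfer/export corruption from any Docker-side problem, since `docker save` output is an ordinary tar stream. The file names below are illustrative.

```shell
# A valid tar and a deliberately broken one:
printf 'hello\n' > payload.txt
tar cf good.tar payload.txt
printf 'not a tarball at all' > bad.tar

for f in good.tar bad.tar; do
    if tar tf "$f" >/dev/null 2>&1; then
        echo "$f: tar OK"
    else
        echo "$f: tar corrupted"
    fi
done

# If the real image archive fails this check, re-export it without anything
# that can re-encode the byte stream, e.g.:
#   docker save -o myimage.tar myimage:tag
# (shell redirection of `docker save` on Windows is a known way to mangle
# the output, since PowerShell's `>` re-encodes text)
```

A matching checksum only proves the copy equals the source; if the source export was already mangled, both will fail this test identically.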
No package maven2 available I've run across other posts about this topic, but the answer to the following post suggests to add repos. I add said repos, but run into errors after adding them. How to install Maven into Red Hat Enterprise Linux 6? I can only access this box via command line so simply downloading maven from their website is difficult for me.","linux, maven, redhat, yum",55,131534,7,https://stackoverflow.com/questions/12076326/how-to-install-maven-on-redhat-linux 42981114,install Docker CE 17.03 on RHEL7,Is it possible to install DockerCE in the specific version 17.03 on RHEL7 ? There is information here: [URL] about the installing Docker on RHEL but there is no version info. and here with Docker 17.03 but only in Docker EE not Docker CE [URL] but they talk about Docker v 0.12,install Docker CE 17.03 on RHEL7 Is it possible to install DockerCE in the specific version 17.03 on RHEL7 ? There is information here: [URL] about the installing Docker on RHEL but there is no version info. 
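A hedged sketch for the Maven question: when no yum repository offers the package, a binary tarball from the Apache archive works fine on a command-line-only box (curl/wget replace the browser download). The version number is illustrative, and no download is actually performed in this sketch; the real steps are left as comments.

```shell
MAVEN_VERSION=3.2.5   # pick the release you need
url="https://archive.apache.org/dist/maven/maven-3/${MAVEN_VERSION}/binaries/apache-maven-${MAVEN_VERSION}-bin.tar.gz"
echo "would fetch: $url"

# curl -LO "$url"
# sudo tar xzf "apache-maven-${MAVEN_VERSION}-bin.tar.gz" -C /opt
# export PATH="/opt/apache-maven-${MAVEN_VERSION}/bin:$PATH"   # add to ~/.bashrc
# mvn -version
```

Putting the `export PATH` line in a profile script makes `mvn` available to every login without touching the yum configuration at all.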
and here with Docker 17.03 but only in Docker EE not Docker CE [URL] but they talk about Docker v 0.12,"docker, redhat",50,99484,8,https://stackoverflow.com/questions/42981114/install-docker-ce-17-03-on-rhel7 341608,MySQL config file location - redhat linux server,What is the default location for the MySQL configuration file on a redhat linux box?,MySQL config file location - redhat linux server What is the default location for the MySQL configuration file on a redhat linux box?,"mysql, linux, redhat",45,226470,8,https://stackoverflow.com/questions/341608/mysql-config-file-location-redhat-linux-server 792563,How do I clone an OpenLDAP database,"I know this is more like a serverfault question than a stackoverflow question, but since serverfault isn't up yet, here I go: I'm supposed to move an application from one redhat server to another, and without very good knowledge of the internal workings of the application, how would I move the OpenLDAP database from the one machine to the other, with schemas and all. What files would I need to copy over? I believe the setup is pretty standard.","How do I clone an OpenLDAP database I know this is more like a serverfault question than a stackoverflow question, but since serverfault isn't up yet, here I go: I'm supposed to move an application from one redhat server to another, and without very good knowledge of the internal workings of the application, how would I move the OpenLDAP database from the one machine to the other, with schemas and all. What files would I need to copy over? I believe the setup is pretty standard.","linux, ldap, redhat, openldap",43,100225,6,https://stackoverflow.com/questions/792563/how-do-i-clone-an-openldap-database 8962477,Logrotate files with date in the file name,"I am trying to configure logrotate in RHEL for tomcat6 logs. Currently, logrotate works fine for catalina.out log, it is rotated and compressed properly. 
The problem is with the files with date in them like: catalina.2012-01-20.log catalina.2012-01-21.log catalina.2012-01-22.log These files are not being rotated. I understand that I have to configure these in /etc/logrotate.d/tomcat6 file where rotation for catalina.out is configured. But I am not able to configure it. All I want is these older files to be compressed daily, except the current date log file. Can anybody help me out on this, please!! Thanks Noman A.","Logrotate files with date in the file name I am trying to configure logrotate in RHEL for tomcat6 logs. Currently, logrotate works fine for catalina.out log, it is rotated and compressed properly. The problem is with the files with date in them like: catalina.2012-01-20.log catalina.2012-01-21.log catalina.2012-01-22.log These files are not being rotated. I understand that I have to configure these in /etc/logrotate.d/tomcat6 file where rotation for catalina.out is configured. But I am not able to configure it. All I want is these older files to be compressed daily, except the current date log file. Can anybody help me out on this, please!! Thanks Noman A.","tomcat, date, redhat, logrotate",43,96222,11,https://stackoverflow.com/questions/8962477/logrotate-files-with-date-in-the-file-name 5250345,Cannot overwrite Symbolic Link RedHat Linux,"I have created a symbolic link: sudo ln -s /some/dir new_dir Now I want to overwrite the symbolic link to point to a new location and it will not overwrite. I have tried: sudo ln -f -s /other/dir new_dir I can always sudo rm new_dir , but I would rather have it overwrite accordingly if possible. Any ideas?","Cannot overwrite Symbolic Link RedHat Linux I have created a symbolic link: sudo ln -s /some/dir new_dir Now I want to overwrite the symbolic link to point to a new location and it will not overwrite. I have tried: sudo ln -f -s /other/dir new_dir I can always sudo rm new_dir , but I would rather have it overwrite accordingly if possible. 
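A hedged sketch for the logrotate question: Tomcat already writes one dated file per day, so there is nothing for logrotate to "rotate"; a small daily cron job that gzips every dated catalina log except today's gives exactly the compression asked for. `compress_old_logs` is a helper name invented for this example.

```shell
# Gzip all dated catalina logs in a directory, skipping today's file.
compress_old_logs() {
    logdir=$1
    today=$(date +%F)                         # e.g. 2012-01-22
    find "$logdir" -name 'catalina.*.log' \
         ! -name "catalina.$today.log" \
         -exec gzip -9 {} \;
}

# cron would call something like:  compress_old_logs /var/log/tomcat6
# (catalina.out does not match the *.log pattern, so logrotate can keep
# handling it exactly as it does now)
```

If you prefer staying inside logrotate, a `/var/log/tomcat6/catalina.*.log` stanza with `copytruncate` and `delaycompress` can achieve a similar effect, but the state tracking for ever-new filenames is what usually trips people up.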
Any ideas?","linux, symlink, redhat",42,15428,3,https://stackoverflow.com/questions/5250345/cannot-overwrite-symbolic-link-redhat-linux 56432205,How to install ps in redhat ubi8/ubi-minimal,"For registry.access.redhat.com/ubi8/ubi-minimal this image, I need the ps utility to be installed. There is no yum package manager available in the image. Instead, we have microdnf . microdnf install procps says there is no such package named procps","How to install ps in redhat ubi8/ubi-minimal For registry.access.redhat.com/ubi8/ubi-minimal this image, I need the ps utility to be installed. There is no yum package manager available in the image. Instead, we have microdnf . microdnf install procps says there is no such package named procps","redhat, redhat-containers, procps",39,47499,2,https://stackoverflow.com/questions/56432205/how-to-install-ps-in-redhat-ubi8-ubi-minimal 47202468,Segfault on declaring a variable of type vector<shared_ptr<int>>,"Code Here is the program that gives the segfault. #include <iostream> #include <memory> #include <vector> int main() { std::cout << ""Hello World"" << std::endl; std::vector<std::shared_ptr<int>> y {}; std::cout << ""Hello World"" << std::endl; } Of course, there is absolutely nothing wrong in the program itself. The root cause of the segfault depends on the environment in which it's built and run. Background We, at Amazon, use a build system which builds and deploys the binaries ( lib and bin ) in an almost machine-independent way. For our case, that basically means it deploys the executable (built from the above program) into $project_dir/build/bin/ and almost all its dependencies (i.e. the shared libraries) into $project_dir/build/lib/ . Why I used the phrase ""almost"" is because for shared libraries such as libc.so , libm.so , ld-linux-x86-64.so.2 and possibly a few others, the executable picks from the system (i.e. from /lib64 ). Note that it is supposed to pick libstdc++ from $project_dir/build/lib though.
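A hedged sketch for the symlink-overwrite question: with `ln -fs`, an existing link whose target is a directory gets dereferenced, so the new link is created *inside* `/some/dir` instead of replacing `new_dir`. Adding `-n` (`--no-dereference`) tells ln to treat the existing symlink itself as the thing to replace (sudo dropped here since the demo works in a scratch directory):

```shell
workdir=$(mktemp -d) && cd "$workdir"
mkdir -p some/dir other/dir

ln -s  some/dir  new_dir     # initial link, as in the question
ln -sfn other/dir new_dir    # -n: replace the symlink, don't follow it

readlink new_dir             # -> other/dir
ls some/dir                  # empty: no stray link was created inside it
```

On systems whose ln lacks `-n`, the portable equivalent is the explicit two-step `rm new_dir && ln -s other/dir new_dir` (or the atomic `ln -s` into a temp name followed by `mv -T`).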
Now I run it as follows: $ LD_LIBRARY_PATH=$project_dir/build/lib ./build/bin/run segmentation fault However if I run it, without setting the LD_LIBRARY_PATH . It runs fine. Diagnostics 1. ldd Here are ldd informations for both cases (please note that I've edited the output to mention the full version of the libraries wherever there is difference ) $ LD_LIBRARY_PATH=$project_dir/build/lib ldd ./build/bin/run linux-vdso.so.1 => (0x00007ffce19ca000) libstdc++.so.6 => $project_dir/build/lib/libstdc++.so.6.0.20 libgcc_s.so.1 => $project_dir/build/lib/libgcc_s.so.1 libc.so.6 => /lib64/libc.so.6 libm.so.6 => /lib64/libm.so.6 /lib64/ld-linux-x86-64.so.2 (0x0000562ec51bc000) and without LD_LIBRARY_PATH: $ ldd ./build/bin/run linux-vdso.so.1 => (0x00007fffcedde000) libstdc++.so.6 => /usr/lib64/libstdc++.so.6.0.16 libgcc_s.so.1 => /lib64/libgcc_s-4.4.6-20110824.so.1 libc.so.6 => /lib64/libc.so.6 libm.so.6 => /lib64/libm.so.6 /lib64/ld-linux-x86-64.so.2 (0x0000560caff38000) 2. gdb when it segfaults Program received signal SIGSEGV, Segmentation fault. 0x00007ffff7dea45c in _dl_fixup () from /lib64/ld-linux-x86-64.so.2 Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.209.62.al12.x86_64 (gdb) bt #0 0x00007ffff7dea45c in _dl_fixup () from /lib64/ld-linux-x86-64.so.2 #1 0x00007ffff7df0c55 in _dl_runtime_resolve () from /lib64/ld-linux-x86-64.so.2 #2 0x00007ffff7b1dc41 in std::locale::_S_initialize() () from $project_dir/build/lib/libstdc++.so.6 #3 0x00007ffff7b1dc85 in std::locale::locale() () from $project_dir/build/lib/libstdc++.so.6 #4 0x00007ffff7b1a574 in std::ios_base::Init::Init() () from $project_dir/build/lib/libstdc++.so.6 #5 0x0000000000400fde in _GLOBAL__sub_I_main () at $project_dir/build/gcc-4.9.4/include/c++/4.9.4/iostream:74 #6 0x00000000004012ed in __libc_csu_init () #7 0x00007ffff7518cb0 in __libc_start_main () from /lib64/libc.so.6 #8 0x0000000000401021 in _start () (gdb) 3. 
LD_DEBUG=all I also tried to see the linker information by enabling LD_DEBUG=all for the segfault case. I found something suspicious, as it searches for pthread_once symbol, and when it unable to find this, it gives segfault (that is my interpretation of the following output snippet BTW): initialize program: $project_dir/build/bin/run symbol=_ZNSt8ios_base4InitC1Ev; lookup in file=$project_dir/build/bin/run [0] symbol=_ZNSt8ios_base4InitC1Ev; lookup in file=$project_dir/build/lib/libstdc++.so.6 [0] binding file $project_dir/build/bin/run [0] to $project_dir/build/lib/libstdc++.so.6 [0]: normal symbol _ZNSt8ios_base4InitC1Ev' [GLIBCXX_3.4] symbol=_ZNSt6localeC1Ev; lookup in file=$project_dir/build/bin/run [0] symbol=_ZNSt6localeC1Ev; lookup in file=$project_dir/build/lib/libstdc++.so.6 [0] binding file $project_dir/build/lib/libstdc++.so.6 [0] to $project_dir/build/lib/libstdc++.so.6 [0]: normal symbol _ZNSt6localeC1Ev' [GLIBCXX_3.4] symbol=pthread_once; lookup in file=$project_dir/build/bin/run [0] symbol=pthread_once; lookup in file=$project_dir/build/lib/libstdc++.so.6 [0] symbol=pthread_once; lookup in file=$project_dir/build/lib/libgcc_s.so.1 [0] symbol=pthread_once; lookup in file=/lib64/libc.so.6 [0] symbol=pthread_once; lookup in file=/lib64/libm.so.6 [0] symbol=pthread_once; lookup in file=/lib64/ld-linux-x86-64.so.2 [0] But I dont see any pthread_once for the case when it runs successfully! Questions I know that its very difficult to debug like this and probably I've not given a lot of informations about the environments and all. But still, my question is: what could be the possible root-cause for this segfault? How to debug further and find that? Once I find the issue, fix would be easy. Compiler and Platform I'm using GCC 4.9 on RHEL5. Experiments E#1 If I comment the following line: std::vector> y {}; It compiles and runs fine! E#2 I just included the following header to my program: #include and linked accordingly. Now it works without any segfault. 
So it seems by having a dependency on libboost_system.so.1.53.0. , some requirements are met, or the problem is circumvented! E#3 Since I saw it working when I made the executable to be linked against libboost_system.so.1.53.0 , so I did the following things step by step. Instead of using #include in the code itself, I use the original code and ran it by preloading libboost_system.so using LD_PRELOAD as follows: $ LD_PRELOAD=$project_dir/build/lib/libboost_system.so $project_dir/build/bin/run and it ran successfully! Next I did ldd on the libboost_system.so which gave a list of libs, two of which were: /lib64/librt.so.1 /lib64/libpthread.so.0 So instead of preloading libboost_system , I preload librt and libpthread separately: $ LD_PRELOAD=/lib64/librt.so.1 $project_dir/build/bin/run $ LD_PRELOAD=/lib64/libpthread.so.0 $project_dir/build/bin/run In both cases, it ran successfully. Now my conclusion is that by loading either librt or libpthread (or both ), some requirements are met or the problem is circumvented! I still dont know the root cause of the issue, though. Compilation and Linking Options Since the build system is complex and there are lots of options which are there by default. So I tried to explicitly add -lpthread using CMake's set command, then it worked, as we have already seen that by preloading libpthread it works! In order to see the build difference between these two cases ( when-it-works and when-it-gives-segfault ), I built it in verbose mode by passing -v to GCC, to see the compilation stages and the options it actually passes to cc1plus (compiler) and collect2 (linker). ( Note that paths has been edited for brevity, using dollar-sign and dummy paths. 
) $/gcc-4.9.4/cc1plus -quiet -v -I /a/include -I /b/include -iprefix $/gcc-4.9.4/ -MMD main.cpp.d -MF main.cpp.o.d -MT main.cpp.o -D_GNU_SOURCE -D_REENTRANT -D __USE_XOPEN2K8 -D _LARGEFILE_SOURCE -D _FILE_OFFSET_BITS=64 -D __STDC_FORMAT_MACROS -D __STDC_LIMIT_MACROS -D NDEBUG $/lab/main.cpp -quiet -dumpbase main.cpp -msse -mfpmath=sse -march=core2 -auxbase-strip main.cpp.o -g -O3 -Wall -Wextra -std=gnu++1y -version -fdiagnostics-color=auto -ftemplate-depth=128 -fno-operator-names -o /tmp/ccxfkRyd.s Irrespective of whether it works or not, the command-line arguments to cc1plus are exactly the same. No difference at all. That does not seem to be very helpful. The difference, however, is at the linking time. Here is what I see, for the case when it works : $/gcc-4.9.4/collect2 -plugin $/gcc-4.9.4/liblto_plugin.so -plugin-opt=$/gcc-4.9.4/lto-wrapper -plugin-opt=-fresolution=/tmp/cchl8RtI.res -plugin-opt=-pass-through=-lgcc_s -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lpthread -plugin-opt=-pass-through=-lc -plugin-opt=-pass-through=-lgcc_s -plugin-opt=-pass-through=-lgcc --eh-frame-hdr -m elf_x86_64 -export-dynamic -dynamic-linker /lib64/ld-linux-x86-64.so.2 -o run /usr/lib/../lib64/crt1.o /usr/lib/../lib64/crti.o $/gcc-4.9.4/crtbegin.o -L/a/lib -L/b/lib -L/c/lib -lpthread --as-needed main.cpp.o -lboost_timer -lboost_wave -lboost_chrono -lboost_filesystem -lboost_graph -lboost_locale -lboost_thread -lboost_wserialization -lboost_atomic -lboost_context -lboost_date_time -lboost_iostreams -lboost_math_c99 -lboost_math_c99f -lboost_math_c99l -lboost_math_tr1 -lboost_math_tr1f -lboost_math_tr1l -lboost_mpi -lboost_prg_exec_monitor -lboost_program_options -lboost_random -lboost_regex -lboost_serialization -lboost_signals -lboost_system -lboost_unit_test_framework -lboost_exception -lboost_test_exec_monitor -lbz2 -licui18n -licuuc -licudata -lz -rpath /a/lib:/b/lib:/c/lib: -lstdc++ -lm -lgcc_s -lgcc -lpthread -lc -lgcc_s -lgcc $/gcc-4.9.4/crtend.o 
/usr/lib/../lib64/crtn.o As you can see, -lpthread is mentioned twice ! The first -lpthread (which is followed by --as-needed ) is missing for the case when it gives segfault . That is the only difference between these two cases. Output of nm -C in both cases Interestingly, the output of nm -C in both cases is identical ( if you ignore the integer values in the first columns ). 0000000000402580 d _DYNAMIC 0000000000402798 d _GLOBAL_OFFSET_TABLE_ 0000000000401000 t _GLOBAL__sub_I_main 0000000000401358 R _IO_stdin_used w _ITM_deregisterTMCloneTable w _ITM_registerTMCloneTable w _Jv_RegisterClasses U _Unwind_Resume 0000000000401150 W std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_destroy() 0000000000401170 W std::vector, std::allocator > >::~vector() 0000000000401170 W std::vector, std::allocator > >::~vector() 0000000000401250 W std::vector >, std::allocator > > >::~vector() 0000000000401250 W std::vector >, std::allocator > > >::~vector() U std::ios_base::Init::Init() U std::ios_base::Init::~Init() 0000000000402880 B std::cout U std::basic_ostream >& std::endl >(std::basic_ostream >&) 0000000000402841 b std::__ioinit U std::basic_ostream >& std::operator<< >(std::basic_ostream >&, char const*) U operator delete(void*) U operator new(unsigned long) 0000000000401510 r __FRAME_END__ 0000000000402818 d __JCR_END__ 0000000000402818 d __JCR_LIST__ 0000000000402820 d __TMC_END__ 0000000000402820 d __TMC_LIST__ 0000000000402838 A __bss_start U __cxa_atexit 0000000000402808 D __data_start 0000000000401100 t __do_global_dtors_aux 0000000000402820 t __do_global_dtors_aux_fini_array_entry 0000000000402810 d __dso_handle 0000000000402828 t __frame_dummy_init_array_entry w __gmon_start__ U __gxx_personality_v0 0000000000402838 t __init_array_end 0000000000402828 t __init_array_start 00000000004012b0 T __libc_csu_fini 00000000004012c0 T __libc_csu_init U __libc_start_main w __pthread_key_create 0000000000402838 A _edata 0000000000402990 A _end 000000000040134c T _fini 
0000000000400e68 T _init 0000000000401028 T _start 0000000000401054 t call_gmon_start 0000000000402840 b completed.6661 0000000000402808 W data_start 0000000000401080 t deregister_tm_clones 0000000000401120 t frame_dummy 0000000000400f40 T main 00000000004010c0 t register_tm_clones","Segfault on declaring a variable of type vector<shared_ptr<int>> Code Here is the program that gives the segfault. #include #include #include int main() { std::cout << ""Hello World"" << std::endl; std::vector> y {}; std::cout << ""Hello World"" << std::endl; } Of course, there is absolutely nothing wrong in the program itself. The root cause of the segfault depends on the environment in which its built and ran. Background We, at Amazon, use a build system which builds and deploys the binaries ( lib and bin ) in an almost machine independent way. For our case, that basically means it deploys the executable (built from the above program) into $project_dir/build/bin/ and almost all its dependencies (i.e the shared libraries) into $project_dir/build/lib/ . Why I used the phrase ""almost"" is because for shared libraries such libc.so , libm.so , ld-linux-x86-64.so.2 and possibly few others, the executable picks from the system (i.e from /lib64 ). Note that it is supposed to pick libstdc++ from $project_dir/build/lib though. Now I run it as follows: $ LD_LIBRARY_PATH=$project_dir/build/lib ./build/bin/run segmentation fault However if I run it, without setting the LD_LIBRARY_PATH . It runs fine. Diagnostics 1. 
ldd Here are ldd informations for both cases (please note that I've edited the output to mention the full version of the libraries wherever there is difference ) $ LD_LIBRARY_PATH=$project_dir/build/lib ldd ./build/bin/run linux-vdso.so.1 => (0x00007ffce19ca000) libstdc++.so.6 => $project_dir/build/lib/libstdc++.so.6.0.20 libgcc_s.so.1 => $project_dir/build/lib/libgcc_s.so.1 libc.so.6 => /lib64/libc.so.6 libm.so.6 => /lib64/libm.so.6 /lib64/ld-linux-x86-64.so.2 (0x0000562ec51bc000) and without LD_LIBRARY_PATH: $ ldd ./build/bin/run linux-vdso.so.1 => (0x00007fffcedde000) libstdc++.so.6 => /usr/lib64/libstdc++.so.6.0.16 libgcc_s.so.1 => /lib64/libgcc_s-4.4.6-20110824.so.1 libc.so.6 => /lib64/libc.so.6 libm.so.6 => /lib64/libm.so.6 /lib64/ld-linux-x86-64.so.2 (0x0000560caff38000) 2. gdb when it segfaults Program received signal SIGSEGV, Segmentation fault. 0x00007ffff7dea45c in _dl_fixup () from /lib64/ld-linux-x86-64.so.2 Missing separate debuginfos, use: debuginfo-install glibc-2.12-1.209.62.al12.x86_64 (gdb) bt #0 0x00007ffff7dea45c in _dl_fixup () from /lib64/ld-linux-x86-64.so.2 #1 0x00007ffff7df0c55 in _dl_runtime_resolve () from /lib64/ld-linux-x86-64.so.2 #2 0x00007ffff7b1dc41 in std::locale::_S_initialize() () from $project_dir/build/lib/libstdc++.so.6 #3 0x00007ffff7b1dc85 in std::locale::locale() () from $project_dir/build/lib/libstdc++.so.6 #4 0x00007ffff7b1a574 in std::ios_base::Init::Init() () from $project_dir/build/lib/libstdc++.so.6 #5 0x0000000000400fde in _GLOBAL__sub_I_main () at $project_dir/build/gcc-4.9.4/include/c++/4.9.4/iostream:74 #6 0x00000000004012ed in __libc_csu_init () #7 0x00007ffff7518cb0 in __libc_start_main () from /lib64/libc.so.6 #8 0x0000000000401021 in _start () (gdb) 3. LD_DEBUG=all I also tried to see the linker information by enabling LD_DEBUG=all for the segfault case. 
I found something suspicious, as it searches for pthread_once symbol, and when it unable to find this, it gives segfault (that is my interpretation of the following output snippet BTW): initialize program: $project_dir/build/bin/run symbol=_ZNSt8ios_base4InitC1Ev; lookup in file=$project_dir/build/bin/run [0] symbol=_ZNSt8ios_base4InitC1Ev; lookup in file=$project_dir/build/lib/libstdc++.so.6 [0] binding file $project_dir/build/bin/run [0] to $project_dir/build/lib/libstdc++.so.6 [0]: normal symbol _ZNSt8ios_base4InitC1Ev' [GLIBCXX_3.4] symbol=_ZNSt6localeC1Ev; lookup in file=$project_dir/build/bin/run [0] symbol=_ZNSt6localeC1Ev; lookup in file=$project_dir/build/lib/libstdc++.so.6 [0] binding file $project_dir/build/lib/libstdc++.so.6 [0] to $project_dir/build/lib/libstdc++.so.6 [0]: normal symbol _ZNSt6localeC1Ev' [GLIBCXX_3.4] symbol=pthread_once; lookup in file=$project_dir/build/bin/run [0] symbol=pthread_once; lookup in file=$project_dir/build/lib/libstdc++.so.6 [0] symbol=pthread_once; lookup in file=$project_dir/build/lib/libgcc_s.so.1 [0] symbol=pthread_once; lookup in file=/lib64/libc.so.6 [0] symbol=pthread_once; lookup in file=/lib64/libm.so.6 [0] symbol=pthread_once; lookup in file=/lib64/ld-linux-x86-64.so.2 [0] But I dont see any pthread_once for the case when it runs successfully! Questions I know that its very difficult to debug like this and probably I've not given a lot of informations about the environments and all. But still, my question is: what could be the possible root-cause for this segfault? How to debug further and find that? Once I find the issue, fix would be easy. Compiler and Platform I'm using GCC 4.9 on RHEL5. Experiments E#1 If I comment the following line: std::vector> y {}; It compiles and runs fine! E#2 I just included the following header to my program: #include and linked accordingly. Now it works without any segfault. So it seems by having a dependency on libboost_system.so.1.53.0. 
, some requirements are met, or the problem is circumvented! E#3 Since I saw it working when I made the executable to be linked against libboost_system.so.1.53.0 , so I did the following things step by step. Instead of using #include in the code itself, I use the original code and ran it by preloading libboost_system.so using LD_PRELOAD as follows: $ LD_PRELOAD=$project_dir/build/lib/libboost_system.so $project_dir/build/bin/run and it ran successfully! Next I did ldd on the libboost_system.so which gave a list of libs, two of which were: /lib64/librt.so.1 /lib64/libpthread.so.0 So instead of preloading libboost_system , I preload librt and libpthread separately: $ LD_PRELOAD=/lib64/librt.so.1 $project_dir/build/bin/run $ LD_PRELOAD=/lib64/libpthread.so.0 $project_dir/build/bin/run In both cases, it ran successfully. Now my conclusion is that by loading either librt or libpthread (or both ), some requirements are met or the problem is circumvented! I still dont know the root cause of the issue, though. Compilation and Linking Options Since the build system is complex and there are lots of options which are there by default. So I tried to explicitly add -lpthread using CMake's set command, then it worked, as we have already seen that by preloading libpthread it works! In order to see the build difference between these two cases ( when-it-works and when-it-gives-segfault ), I built it in verbose mode by passing -v to GCC, to see the compilation stages and the options it actually passes to cc1plus (compiler) and collect2 (linker). ( Note that paths has been edited for brevity, using dollar-sign and dummy paths. 
) $/gcc-4.9.4/cc1plus -quiet -v -I /a/include -I /b/include -iprefix $/gcc-4.9.4/ -MMD main.cpp.d -MF main.cpp.o.d -MT main.cpp.o -D_GNU_SOURCE -D_REENTRANT -D __USE_XOPEN2K8 -D _LARGEFILE_SOURCE -D _FILE_OFFSET_BITS=64 -D __STDC_FORMAT_MACROS -D __STDC_LIMIT_MACROS -D NDEBUG $/lab/main.cpp -quiet -dumpbase main.cpp -msse -mfpmath=sse -march=core2 -auxbase-strip main.cpp.o -g -O3 -Wall -Wextra -std=gnu++1y -version -fdiagnostics-color=auto -ftemplate-depth=128 -fno-operator-names -o /tmp/ccxfkRyd.s Irrespective of whether it works or not, the command-line arguments to cc1plus are exactly the same. No difference at all. That does not seem to be very helpful. The difference, however, is at the linking time. Here is what I see, for the case when it works : $/gcc-4.9.4/collect2 -plugin $/gcc-4.9.4/liblto_plugin.so -plugin-opt=$/gcc-4.9.4/lto-wrapper -plugin-opt=-fresolution=/tmp/cchl8RtI.res -plugin-opt=-pass-through=-lgcc_s -plugin-opt=-pass-through=-lgcc -plugin-opt=-pass-through=-lpthread -plugin-opt=-pass-through=-lc -plugin-opt=-pass-through=-lgcc_s -plugin-opt=-pass-through=-lgcc --eh-frame-hdr -m elf_x86_64 -export-dynamic -dynamic-linker /lib64/ld-linux-x86-64.so.2 -o run /usr/lib/../lib64/crt1.o /usr/lib/../lib64/crti.o $/gcc-4.9.4/crtbegin.o -L/a/lib -L/b/lib -L/c/lib -lpthread --as-needed main.cpp.o -lboost_timer -lboost_wave -lboost_chrono -lboost_filesystem -lboost_graph -lboost_locale -lboost_thread -lboost_wserialization -lboost_atomic -lboost_context -lboost_date_time -lboost_iostreams -lboost_math_c99 -lboost_math_c99f -lboost_math_c99l -lboost_math_tr1 -lboost_math_tr1f -lboost_math_tr1l -lboost_mpi -lboost_prg_exec_monitor -lboost_program_options -lboost_random -lboost_regex -lboost_serialization -lboost_signals -lboost_system -lboost_unit_test_framework -lboost_exception -lboost_test_exec_monitor -lbz2 -licui18n -licuuc -licudata -lz -rpath /a/lib:/b/lib:/c/lib: -lstdc++ -lm -lgcc_s -lgcc -lpthread -lc -lgcc_s -lgcc $/gcc-4.9.4/crtend.o 
/usr/lib/../lib64/crtn.o As you can see, -lpthread is mentioned twice ! The first -lpthread (which is followed by --as-needed ) is missing for the case when it gives segfault . That is the only difference between these two cases. Output of nm -C in both cases Interestingly, the output of nm -C in both cases is identical ( if you ignore the integer values in the first columns ). 0000000000402580 d _DYNAMIC 0000000000402798 d _GLOBAL_OFFSET_TABLE_ 0000000000401000 t _GLOBAL__sub_I_main 0000000000401358 R _IO_stdin_used w _ITM_deregisterTMCloneTable w _ITM_registerTMCloneTable w _Jv_RegisterClasses U _Unwind_Resume 0000000000401150 W std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_destroy() 0000000000401170 W std::vector, std::allocator > >::~vector() 0000000000401170 W std::vector, std::allocator > >::~vector() 0000000000401250 W std::vector >, std::allocator > > >::~vector() 0000000000401250 W std::vector >, std::allocator > > >::~vector() U std::ios_base::Init::Init() U std::ios_base::Init::~Init() 0000000000402880 B std::cout U std::basic_ostream >& std::endl >(std::basic_ostream >&) 0000000000402841 b std::__ioinit U std::basic_ostream >& std::operator<< >(std::basic_ostream >&, char const*) U operator delete(void*) U operator new(unsigned long) 0000000000401510 r __FRAME_END__ 0000000000402818 d __JCR_END__ 0000000000402818 d __JCR_LIST__ 0000000000402820 d __TMC_END__ 0000000000402820 d __TMC_LIST__ 0000000000402838 A __bss_start U __cxa_atexit 0000000000402808 D __data_start 0000000000401100 t __do_global_dtors_aux 0000000000402820 t __do_global_dtors_aux_fini_array_entry 0000000000402810 d __dso_handle 0000000000402828 t __frame_dummy_init_array_entry w __gmon_start__ U __gxx_personality_v0 0000000000402838 t __init_array_end 0000000000402828 t __init_array_start 00000000004012b0 T __libc_csu_fini 00000000004012c0 T __libc_csu_init U __libc_start_main w __pthread_key_create 0000000000402838 A _edata 0000000000402990 A _end 000000000040134c T _fini 
0000000000400e68 T _init 0000000000401028 T _start 0000000000401054 t call_gmon_start 0000000000402840 b completed.6661 0000000000402808 W data_start 0000000000401080 t deregister_tm_clones 0000000000401120 t frame_dummy 0000000000400f40 T main 00000000004010c0 t register_tm_clones","c++, gcc, segmentation-fault, redhat, ld",38,1976,2,https://stackoverflow.com/questions/47202468/segfault-on-declaring-a-variable-of-type-vectorshared-ptrint 25905923,python sys.exit not working in try,"Python 2.7.5 (default, Feb 26 2014, 13:43:17) [GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2 Type ""help"", ""copyright"", ""credits"" or ""license"" for more information. >>> import sys >>> try: ... sys.exit() ... except: ... print ""in except"" ... in except >>> try: ... sys.exit(0) ... except: ... print ""in except"" ... in except >>> try: ... sys.exit(1) ... except: ... print ""in except"" ... in except Why am not able to trigger sys.exit() in try, any suggestions...!!! The code posted here has all the version details. I have tried all possible ways i know to trigger it, but i failed. It gets to 'except' block. Thanks in advance..","python sys.exit not working in try Python 2.7.5 (default, Feb 26 2014, 13:43:17) [GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2 Type ""help"", ""copyright"", ""credits"" or ""license"" for more information. >>> import sys >>> try: ... sys.exit() ... except: ... print ""in except"" ... in except >>> try: ... sys.exit(0) ... except: ... print ""in except"" ... in except >>> try: ... sys.exit(1) ... except: ... print ""in except"" ... in except Why am not able to trigger sys.exit() in try, any suggestions...!!! The code posted here has all the version details. I have tried all possible ways i know to trigger it, but i failed. It gets to 'except' block. 
Thanks in advance..","python, python-2.7, redhat, exit",35,82508,2,https://stackoverflow.com/questions/25905923/python-sys-exit-not-working-in-try 4140219,How to confirm RedHat Enterprise Linux version?,"I am a bit confused by the fact that although I installed RHEL 5.1 from DVD (RedHat/5.1.x86_64), when I issue command: cat /etc/redhat-release I got: Red Hat Enterprise Linux Server release 5.5 (Tikanga) What does this mean? is this to be the release version or kernel version? Is there another way to confirm the real version of RHEL? I am asking this question because there will be certain applications that would depend on this. Many thanks in advance.","How to confirm RedHat Enterprise Linux version? I am a bit confused by the fact that although I installed RHEL 5.1 from DVD (RedHat/5.1.x86_64), when I issue command: cat /etc/redhat-release I got: Red Hat Enterprise Linux Server release 5.5 (Tikanga) What does this mean? is this to be the release version or kernel version? Is there another way to confirm the real version of RHEL? I am asking this question because there will be certain applications that would depend on this. Many thanks in advance.","linux, redhat",33,190568,4,https://stackoverflow.com/questions/4140219/how-to-confirm-redhat-enterprise-linux-version 22509271,import self signed certificate in redhat,"How can I import a self-signed certificate in Red-Hat Linux. I'm not an expert with respect to certificates and find it difficult to find the right answer through googling, since I don't know the difference between a .cer, .crt or a .pem. Having said that, what I would like to do should not be rocket science (In windows I can do this with a few clicks in my browser) I want to connect to a server that makes use of a self-signed certificate. For example using wget, without having to use the --no-check-certificate option. To make this work I will have to add the self-signed certificate of the server to my RedHat box. 
I have found out the certificates reside in /etc/pki/tls. But I am at a loss what actions I should perform to make wget function without complaining. I can get the SSL certificate from the server using: openssl s_client -connect server:443 The certificate is between ""BEGIN CERTIFICATE and END CERTIFICATE"" I do not know what kind of certificate this is. Next I will have to put it in the /etc/pki/tls/certs directory and apply some openssl secert sauce I don't know about. Can you help?","import self signed certificate in redhat How can I import a self-signed certificate in Red-Hat Linux. I'm not an expert with respect to certificates and find it difficult to find the right answer through googling, since I don't know the difference between a .cer, .crt or a .pem. Having said that, what I would like to do should not be rocket science (In windows I can do this with a few clicks in my browser) I want to connect to a server that makes use of a self-signed certificate. For example using wget, without having to use the --no-check-certificate option. To make this work I will have to add the self-signed certificate of the server to my RedHat box. I have found out the certificates reside in /etc/pki/tls. But I am at a loss what actions I should perform to make wget function without complaining. I can get the SSL certificate from the server using: openssl s_client -connect server:443 The certificate is between ""BEGIN CERTIFICATE and END CERTIFICATE"" I do not know what kind of certificate this is. Next I will have to put it in the /etc/pki/tls/certs directory and apply some openssl secert sauce I don't know about. Can you help?","ssl, https, openssl, redhat, self-signed",33,110889,4,https://stackoverflow.com/questions/22509271/import-self-signed-certificate-in-redhat 46089219,How to reduce the size of RHEL/Centos/Fedora Docker image,"The base image from Red Hat is quite small, on the order of 196M for RHEL 7.4. 
However it tends to be missing a lot of the bits and pieces that are required by the products I want to build new images for. The moment I do a ""yum install Xxx"" on top of it the image size blows out to by +500M-800M. Is there a way to reduce the size of the image?","How to reduce the size of RHEL/Centos/Fedora Docker image The base image from Red Hat is quite small, on the order of 196M for RHEL 7.4. However it tends to be missing a lot of the bits and pieces that are required by the products I want to build new images for. The moment I do a ""yum install Xxx"" on top of it the image size blows out to by +500M-800M. Is there a way to reduce the size of the image?","docker, centos, redhat, fedora",32,44777,2,https://stackoverflow.com/questions/46089219/how-to-reduce-the-size-of-rhel-centos-fedora-docker-image 11286669,jps not working,"I have installed java-1.6.0-openjdk-devel. $java -version java version ""1.6.0_24"" OpenJDK Runtime Environment (IcedTea6 1.11.3) (rhel-1.48.1.11.3.el6_2-x86_64) OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode) when typing jps into command prompt $jps -bash: jps: command not found I do not believe it is an openjdk error because I have used it around 6 months back on the same system and it worked fine. Also, it works fine on my laptop.","jps not working I have installed java-1.6.0-openjdk-devel. $java -version java version ""1.6.0_24"" OpenJDK Runtime Environment (IcedTea6 1.11.3) (rhel-1.48.1.11.3.el6_2-x86_64) OpenJDK 64-Bit Server VM (build 20.0-b12, mixed mode) when typing jps into command prompt $jps -bash: jps: command not found I do not believe it is an openjdk error because I have used it around 6 months back on the same system and it worked fine. Also, it works fine on my laptop.","linux, redhat, java",31,99027,12,https://stackoverflow.com/questions/11286669/jps-not-working 5272026,"TCP: Server sends [RST, ACK] immediately after receiving [SYN] from Client","Host_A tries to send some data to Host_B over TCP. 
Host_B is listening on port 8181. Both Host_A & Host_B are Linux boxes (Red Hat Enterprise). The TCP layer is implemented using Java NIO API. Whatever Host_A sends, Host_B is unable to receive. Sniffing the data on wire using WireShark resulted in the following log: 1) Host_A (33253) > Host_B (8181): [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=513413781 TSER=0 WS=7 2) Host_B (8181) > Host_A (33253): [RST, ACK] Seq=1 Ack=1 Win=0 Len=0 The logs show that Host_A sends a [SYN] flag to Host_B in order to establish connection. But instead of [SYN, ACK] Host_B responds with an [RST, ACK] which resets/closes the connection. This behavior is observed always. I am wondering under what circumstance does a TCP listener sends [RST,ACK] in response to a [SYN]?","TCP: Server sends [RST, ACK] immediately after receiving [SYN] from Client Host_A tries to send some data to Host_B over TCP. Host_B is listening on port 8181. Both Host_A & Host_B are Linux boxes (Red Hat Enterprise). The TCP layer is implemented using Java NIO API. Whatever Host_A sends, Host_B is unable to receive. Sniffing the data on wire using WireShark resulted in the following log: 1) Host_A (33253) > Host_B (8181): [SYN] Seq=0 Win=5840 Len=0 MSS=1460 TSV=513413781 TSER=0 WS=7 2) Host_B (8181) > Host_A (33253): [RST, ACK] Seq=1 Ack=1 Win=0 Len=0 The logs show that Host_A sends a [SYN] flag to Host_B in order to establish connection. But instead of [SYN, ACK] Host_B responds with an [RST, ACK] which resets/closes the connection. This behavior is observed always. I am wondering under what circumstance does a TCP listener sends [RST,ACK] in response to a [SYN]?","linux, tcp, nio, redhat",31,154355,2,https://stackoverflow.com/questions/5272026/tcp-server-sends-rst-ack-immediately-after-receiving-syn-from-client 21839538,Change JENKINS_HOME on Red Hat Linux?,"I used this procedure to install Jenkins: [URL] After it was up and running I discovered the /var/lib/jenkins partition on my server is very small. 
I want to move it, but I do not want to change the user that it runs under. I am new to Linux and I'm stumped. How do I move it for example to my Home/Public folder? The ""Jenkins"" user doesn't seem to have a Home folder. Its running as a daemon on startup, so I have no idea where to configure those settings. Can I create a Home folder for the Jenkins user? How? I read this article: [URL] but do not understand HOW to ""set the new Jenkins home"". I have used the export command, and restarted the service, but the old path still shows up in the Manage Jenkins screens. I've read the 2-3 similar questions on stackoverflow also, but there's always a big missing piece for me. Where to find that file where I change the path permanently?","Change JENKINS_HOME on Red Hat Linux? I used this procedure to install Jenkins: [URL] After it was up and running I discovered the /var/lib/jenkins partition on my server is very small. I want to move it, but I do not want to change the user that it runs under. I am new to Linux and I'm stumped. How do I move it for example to my Home/Public folder? The ""Jenkins"" user doesn't seem to have a Home folder. Its running as a daemon on startup, so I have no idea where to configure those settings. Can I create a Home folder for the Jenkins user? How? I read this article: [URL] but do not understand HOW to ""set the new Jenkins home"". I have used the export command, and restarted the service, but the old path still shows up in the Manage Jenkins screens. I've read the 2-3 similar questions on stackoverflow also, but there's always a big missing piece for me. Where to find that file where I change the path permanently?","jenkins, redhat",30,98861,9,https://stackoverflow.com/questions/21839538/change-jenkins-home-on-red-hat-linux 8789522,How to change the mysql root password,I have installed MySQL server 5 on redhat linux. I can't login as root so I can't change the root password. 
mysql -u root -p Enter password: ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO) When I try to set one like this: mysqladmin -u root password 'newpass' I get an error: mysqladmin: connect to server at 'localhost' failed error: 'Access denied for user 'root'@'localhost' (using password: NO)' As if there is a root password set. I have also tried resetting the password using (described here ) /sbin/service mysqld start --skip-grant-tables And then making: mysql> UPDATE mysql.user SET Password=PASSWORD('newpass') -> WHERE User='root'; ERROR 1142 (42000): UPDATE command denied to user ''@'localhost' for table 'user' I even uninstalled mysql-server (using yum) and then reinstalled it but that did not help. How do I force reset the root password?,How to change the mysql root password I have installed MySQL server 5 on redhat linux. I can't login as root so I can't change the root password. mysql -u root -p Enter password: ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO) When I try to set one like this: mysqladmin -u root password 'newpass' I get an error: mysqladmin: connect to server at 'localhost' failed error: 'Access denied for user 'root'@'localhost' (using password: NO)' As if there is a root password set. I have also tried resetting the password using (described here ) /sbin/service mysqld start --skip-grant-tables And then making: mysql> UPDATE mysql.user SET Password=PASSWORD('newpass') -> WHERE User='root'; ERROR 1142 (42000): UPDATE command denied to user ''@'localhost' for table 'user' I even uninstalled mysql-server (using yum) and then reinstalled it but that did not help. 
How do I force reset the root password?,"mysql, linux, passwords, redhat",30,104776,10,https://stackoverflow.com/questions/8789522/how-to-change-the-mysql-root-password 20790499,No implicit conversion of String into Integer (TypeError)?,"I'm trying to write a script that will get a system ID from Red Hat Satellite/Spacewalk, which uses XMLRPC. I'm trying to get the ID which is the first value when using the XMLRPC client using the system name. I'm referencing the documentation from Red Hat for the method used below: #!/usr/bin/env ruby require ""xmlrpc/client"" @SATELLITE_URL = ""satellite.rdu.salab.redhat.com"" @SATELLITE_API = ""/rpc/api"" @SATELLITE_LOGIN = ""********"" @SATELLITE_PASSWORD = ""*******"" @client = XMLRPC::Client.new(@SATELLITE_URL, @SATELLITE_API) @key = @client.call(""auth.login"", @SATELLITE_LOGIN, @SATELLITE_PASSWORD) @getsystemid = @client.call(""system.getId"", @key, 'cfme038') print ""#{@getsystemid}"" @systemid = @getsystemid ['id'] The output of getsystemid looks like this: [{""id""=>1000010466, ""name""=>""cfme038"", ""last_checkin""=>#}] But when I try to just get just id I get this error: no implicit conversion of String into Integer (TypeError) Any help is appreciated","No implicit conversion of String into Integer (TypeError)? I'm trying to write a script that will get a system ID from Red Hat Satellite/Spacewalk, which uses XMLRPC. I'm trying to get the ID which is the first value when using the XMLRPC client using the system name. 
I'm referencing the documentation from Red Hat for the method used below: #!/usr/bin/env ruby require ""xmlrpc/client"" @SATELLITE_URL = ""satellite.rdu.salab.redhat.com"" @SATELLITE_API = ""/rpc/api"" @SATELLITE_LOGIN = ""********"" @SATELLITE_PASSWORD = ""*******"" @client = XMLRPC::Client.new(@SATELLITE_URL, @SATELLITE_API) @key = @client.call(""auth.login"", @SATELLITE_LOGIN, @SATELLITE_PASSWORD) @getsystemid = @client.call(""system.getId"", @key, 'cfme038') print ""#{@getsystemid}"" @systemid = @getsystemid ['id'] The output of getsystemid looks like this: [{""id""=>1000010466, ""name""=>""cfme038"", ""last_checkin""=>#}] But when I try to get just id I get this error: no implicit conversion of String into Integer (TypeError) Any help is appreciated","ruby, redhat",28,117377,1,https://stackoverflow.com/questions/20790499/no-implicit-conversion-of-string-into-integer-typeerror 37313677,What is the Difference between ConditionPathExists= and ConditionPathExists=| in systemd?,I need to check that a file does not exist before I start my service in systemd. I see two cases in [Unit]: ConditionPathExists=!/tmp/abc and ConditionPathExists=|!/tmp/abc Are they the same? Can anybody explain whether they are different?,What is the Difference between ConditionPathExists= and ConditionPathExists=| in systemd? I need to check that a file does not exist before I start my service in systemd. I see two cases in [Unit]: ConditionPathExists=!/tmp/abc and ConditionPathExists=|!/tmp/abc Are they the same? Can anybody explain whether they are different?,"linux, redhat, systemd",28,28371,1,https://stackoverflow.com/questions/37313677/what-is-the-difference-between-conditionpathexists-and-conditionpathexists-in 32482664,Symfony is linked to the wrong PHP version,"I'm trying to move my Project to a linux redhat server that uses Apache but the problem I'm facing there is that this Server has 2 different PHP versions installed.
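On the `ConditionPathExists=` question above: per systemd.unit(5), plain (unprefixed) conditions must all hold (logical AND), while a `|` prefix marks a triggering condition, and all triggering conditions of a unit are ORed with each other. With a single condition the two spellings therefore behave identically; the difference only matters once several conditions are combined. A fragment written to a temp file just to show the syntax (`/etc/other-flag` is an invented illustrative path):

```shell
cat > /tmp/conditions-demo.conf <<'EOF'
[Unit]
# Plain conditions must ALL hold (logical AND):
ConditionPathExists=!/tmp/abc
# A '|' prefix makes a triggering condition; triggering conditions are
# ORed together, so any single one of them passing is enough:
ConditionPathExists=|!/tmp/abc
ConditionPathExists=|/etc/other-flag
EOF
cat /tmp/conditions-demo.conf
```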
Symfony (2.5.12) seems to look for the php executable at /usr/bin/php by default but there is a 5.2 version installed, which is needed for other projects. At /opt/rh/php55/root/usr/bin/php is an installed 5.5 version of PHP that I want to use for symfony. So how can I configure Symfony to use the php version that is installed at the custom path?","Symfony is linked to the wrong PHP version I'm trying to move my Project to a linux redhat server that uses Apache but the problem I'm facing there is that this Server has 2 different PHP versions installed. Symfony (2.5.12) seems to look for the php executable at /usr/bin/php by default but there is a 5.2 version installed, which is needed for other projects. At /opt/rh/php55/root/usr/bin/php is an installed 5.5 version of PHP that I want to use for symfony. So how can I configure Symfony to use the php version that is installed at the custom path?","php, symfony, redhat",27,43679,6,https://stackoverflow.com/questions/32482664/symfony-is-linked-to-the-wrong-php-version 65763994,LibClamAV Error: cli_loaddbdir(): No supported database files found in /var/lib/clamav,"When I tried to Scan the /home directory I got this error. [root@ip-172-31-34-67 ~]# clamscan /home LibClamAV Error: cli_loaddbdir(): No supported database files found in /var/lib/clamav ERROR: Can't open file or directory ----------- SCAN SUMMARY ----------- Known viruses: 0 Engine version: 0.103.0 Scanned directories: 0 Scanned files: 0 Infected files: 0 Data scanned: 0.00 MB Data read: 0.00 MB (ratio 0.00:1) Time: 0.004 sec (0 m 0 s) Start Date: 2021:01:17 17:43:31 End Date: 2021:01:17 17:43:31 [root@ip-172-31-34-67 ~]# It shows on supported database files found in /var/lib/clamav, which may caused the issue?","LibClamAV Error: cli_loaddbdir(): No supported database files found in /var/lib/clamav When I tried to Scan the /home directory I got this error. 
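For the Symfony PHP-version question above, one low-risk approach is to stop relying on whatever `php` resolves to and invoke the SCL interpreter explicitly, for instance via a small wrapper script. The SCL path follows the question; the wrapper name and the `app/console` usage line are my own illustration:

```shell
# Create a personal wrapper that pins the SCL PHP 5.5 binary:
mkdir -p "$HOME/bin"
cat > "$HOME/bin/php55" <<'EOF'
#!/bin/sh
# Always use the PHP 5.5 build from the software collection:
exec /opt/rh/php55/root/usr/bin/php "$@"
EOF
chmod +x "$HOME/bin/php55"
# Usage (hypothetical):  ~/bin/php55 app/console cache:clear --env=prod
head -n 1 "$HOME/bin/php55"
```

The system-wide 5.2 in /usr/bin/php stays untouched for the other projects, which is the constraint the question states.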
[root@ip-172-31-34-67 ~]# clamscan /home LibClamAV Error: cli_loaddbdir(): No supported database files found in /var/lib/clamav ERROR: Can't open file or directory ----------- SCAN SUMMARY ----------- Known viruses: 0 Engine version: 0.103.0 Scanned directories: 0 Scanned files: 0 Infected files: 0 Data scanned: 0.00 MB Data read: 0.00 MB (ratio 0.00:1) Time: 0.004 sec (0 m 0 s) Start Date: 2021:01:17 17:43:31 End Date: 2021:01:17 17:43:31 [root@ip-172-31-34-67 ~]# It says no supported database files were found in /var/lib/clamav; could that be what caused the issue?","linux, centos, redhat, clamav",26,30702,4,https://stackoverflow.com/questions/65763994/libclamav-error-cli-loaddbdir-no-supported-database-files-found-in-var-lib 3848064,Building OpenLDAP from sources and missing BerkelyDB,"I'm building OpenLDAP on a RHEL 5; I used instructions found at [URL] . All went well, until running './configure' for OpenLDAP - the following error was recorded: ** checking for gethostbyaddr_r... yes checking number of arguments of ctime_r... 2 checking number of arguments of gethostbyname_r...
6 checking number of arguments of gethostbyaddr_r... 8 checking db.h usability... yes checking db.h presence... yes checking for db.h... yes checking for Berkeley DB major version in db.h... 5 checking for Berkeley DB minor version in db.h... 1 checking if Berkeley DB version supported by BDB/HDB backends... yes **checking for Berkeley DB link (default)... no configure: error: BDB/HDB: BerkeleyDB not available** I have Googled like a maniac but have been unsuccessful to find a resolution - any tips on areas to explore? Thanks","redhat, berkeley-db, openldap",26,26421,5,https://stackoverflow.com/questions/3848064/building-openldap-from-sources-and-missing-berkelydb 29343809,PHP is_writable() function always returns false for a writable directory,"I'm trying to install a PHP-based software package in a Red Hat 7 Amazon EC2 instance (ami-8cff51fb) that has had Apache 2.4.6 and PHP 5.4.16 installed on it using yum. The installation fails because it says a particular directory needs to be writable by the webserver with 0755 or 0775 permissions. The directory in question has 0775 permissions with root:apache ownership. I have verified that the httpd process is being run by the apache user and that the apache user is a member of the apache group. If I edit /etc/passwd to temporarily give the apache user a login shell and then su to that account, I am able to manually create files as the apache user within the directory using the touch command. I took a look at the source code of the installer script and identified that it's failing because PHP's is_writable() function is returning false for the directory in question. I created a separate test PHP script to isolate and verify the behaviour I'm seeing: This outputs the NOT writable message. If I change $dir above to be /tmp then it correctly outputs that /tmp is writable. If I change the directory permissions to 0777 and/or change the ownership to apache:apache then PHP still reports that the directory isn't writable. 
I even tried creating a /test directory set up with the same permissions and ownership and my test script still reports it as not writable. I'm really at a loss as to explain this behaviour, so any ideas would be welcome! Thanks in advance. The directory listing for /var/www/html/limesurvey is given below. The tmp and upload directories have 0775 permissions as per Lime Survey's installation instructions . test.php is my test script mentioned above. [ec2-user@ip-xx-x-x-xxx limesurvey]$ pwd /var/www/html/limesurvey [ec2-user@ip-xx-x-x-xxx limesurvey]$ ls -al total 80 drwxr-xr-x. 20 root apache 4096 Mar 30 11:25 . drwxr-xr-x. 3 root root 23 Mar 25 14:41 .. drwxr-xr-x. 2 root apache 38 Mar 10 12:56 admin drwxr-xr-x. 16 root apache 4096 Mar 10 12:56 application drwxr-xr-x. 3 root apache 4096 Mar 10 12:56 docs drwxr-xr-x. 2 root apache 4096 Mar 10 12:56 fonts drwxr-xr-x. 19 root apache 4096 Mar 10 12:56 framework -rw-r--r--. 1 root apache 429 Mar 10 12:56 .gitattributes -rw-r--r--. 1 root apache 399 Mar 10 12:56 .gitignore -rw-r--r--. 1 root apache 296 Mar 10 12:56 .htaccess drwxr-xr-x. 4 root apache 4096 Mar 10 12:56 images -rw-r--r--. 1 root apache 6652 Mar 10 12:56 index.php drwxr-xr-x. 5 root apache 39 Mar 10 12:56 installer drwxr-xr-x. 89 root apache 4096 Mar 10 12:56 locale drwxrwxr-x. 2 root apache 39 Mar 25 14:41 logs drwxr-xr-x. 4 root apache 49 Mar 10 12:56 plugins -rw-r--r--. 1 root apache 61 Mar 10 12:56 README drwxr-xr-x. 4 root apache 4096 Mar 10 12:56 scripts -rw-r--r--. 1 root apache 380 Mar 10 12:56 .scrutinizer.yml drwxr-xr-x. 5 root apache 4096 Mar 10 12:56 styles drwxr-xr-x. 5 root apache 4096 Mar 10 12:56 styles-public drwxr-xr-x. 12 root apache 4096 Mar 10 12:56 templates -rw-r--r--. 1 root apache 159 Mar 30 11:11 test.php drwxr-xr-x. 3 root apache 20 Mar 10 12:56 themes drwxr-xr-x. 26 root apache 4096 Mar 10 12:56 third_party drwxrwxr-x. 5 root apache 80 Mar 26 13:45 tmp drwxrwxr-x. 
6 root apache 79 Mar 10 12:57 upload Running namei -l /var/www/html/limesurvey/tmp gives: [ec2-user@ip-x-x-x-xxx ~]$ namei -l /var/www/html/limesurvey/tmp f: /var/www/html/limesurvey/tmp drwxr-xr-x root root / drwxr-xr-x root root var drwxr-xr-x root root www drwxr-xr-x root root html drwxr-xr-x root apache limesurvey drwxrwxr-x root apache tmp","PHP is_writable() function always returns false for a writable directory I'm trying to install a PHP-based software package in a Red Hat 7 Amazon EC2 instance (ami-8cff51fb) that has had Apache 2.4.6 and PHP 5.4.16 installed on it using yum. The installation fails because it says a particular directory needs to be writable by the webserver with 0755 or 0775 permissions. The directory in question has 0775 permissions with root:apache ownership. I have verified that the httpd process is being run by the apache user and that the apache user is a member of the apache group. If I edit /etc/passwd to temporarily give the apache user a login shell and then su to that account, I am able to manually create files as the apache user within the directory using the touch command. I took a look at the source code of the installer script and identified that it's failing because PHP's is_writable() function is returning false for the directory in question. I created a separate test PHP script to isolate and verify the behaviour I'm seeing: This outputs the NOT writable message. If I change $dir above to be /tmp then it correctly outputs that /tmp is writable. If I change the directory permissions to 0777 and/or change the ownership to apache:apache then PHP still reports that the directory isn't writable. I even tried creating a /test directory set up with the same permissions and ownership and my test script still reports it as not writable. I'm really at a loss as to explain this behaviour, so any ideas would be welcome! Thanks in advance. The directory listing for /var/www/html/limesurvey is given below. 
The tmp and upload directories have 0775 permissions as per Lime Survey's installation instructions . test.php is my test script mentioned above. [ec2-user@ip-xx-x-x-xxx limesurvey]$ pwd /var/www/html/limesurvey [ec2-user@ip-xx-x-x-xxx limesurvey]$ ls -al total 80 drwxr-xr-x. 20 root apache 4096 Mar 30 11:25 . drwxr-xr-x. 3 root root 23 Mar 25 14:41 .. drwxr-xr-x. 2 root apache 38 Mar 10 12:56 admin drwxr-xr-x. 16 root apache 4096 Mar 10 12:56 application drwxr-xr-x. 3 root apache 4096 Mar 10 12:56 docs drwxr-xr-x. 2 root apache 4096 Mar 10 12:56 fonts drwxr-xr-x. 19 root apache 4096 Mar 10 12:56 framework -rw-r--r--. 1 root apache 429 Mar 10 12:56 .gitattributes -rw-r--r--. 1 root apache 399 Mar 10 12:56 .gitignore -rw-r--r--. 1 root apache 296 Mar 10 12:56 .htaccess drwxr-xr-x. 4 root apache 4096 Mar 10 12:56 images -rw-r--r--. 1 root apache 6652 Mar 10 12:56 index.php drwxr-xr-x. 5 root apache 39 Mar 10 12:56 installer drwxr-xr-x. 89 root apache 4096 Mar 10 12:56 locale drwxrwxr-x. 2 root apache 39 Mar 25 14:41 logs drwxr-xr-x. 4 root apache 49 Mar 10 12:56 plugins -rw-r--r--. 1 root apache 61 Mar 10 12:56 README drwxr-xr-x. 4 root apache 4096 Mar 10 12:56 scripts -rw-r--r--. 1 root apache 380 Mar 10 12:56 .scrutinizer.yml drwxr-xr-x. 5 root apache 4096 Mar 10 12:56 styles drwxr-xr-x. 5 root apache 4096 Mar 10 12:56 styles-public drwxr-xr-x. 12 root apache 4096 Mar 10 12:56 templates -rw-r--r--. 1 root apache 159 Mar 30 11:11 test.php drwxr-xr-x. 3 root apache 20 Mar 10 12:56 themes drwxr-xr-x. 26 root apache 4096 Mar 10 12:56 third_party drwxrwxr-x. 5 root apache 80 Mar 26 13:45 tmp drwxrwxr-x. 
6 root apache 79 Mar 10 12:57 upload Running namei -l /var/www/html/limesurvey/tmp gives: [ec2-user@ip-x-x-x-xxx ~]$ namei -l /var/www/html/limesurvey/tmp f: /var/www/html/limesurvey/tmp drwxr-xr-x root root / drwxr-xr-x root root var drwxr-xr-x root root www drwxr-xr-x root root html drwxr-xr-x root apache limesurvey drwxrwxr-x root apache tmp","php, linux, amazon-ec2, redhat, rhel7",26,19763,5,https://stackoverflow.com/questions/29343809/php-is-writable-function-always-returns-false-for-a-writable-directory 56361133,How to fix ModuleNotFoundError: No module named 'pip._internal' with python source code installation,I have installed python3.7 on redhat machine by compiling source code but I have a problem when dealing with pip3. I have made this steps after installation: sudo ln /usr/local/bin/python3.7 /usr/bin/python3 sudo ln /usr/local/bin/pip3.7 /usr/bin/pip3 python3 -- version gives Python 3.7.3 But I have this errors by running these commands : python3 -m pip install requests gives /usr/bin/python3: No module named pip.__main__; 'pip' is a package and cannot be directly executed pip3 install requests gives ModuleNotFoundError: No module named 'pip._internal',How to fix ModuleNotFoundError: No module named 'pip._internal' with python source code installation I have installed python3.7 on redhat machine by compiling source code but I have a problem when dealing with pip3. 
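On the `is_writable()` question above: when the POSIX permission bits are demonstrably correct (even 0777 with apache:apache) yet PHP under httpd still reports the directory as unwritable on a stock RHEL 7 image, the classic culprit is SELinux labeling rather than permissions; `is_writable()` reflects what the httpd process is actually allowed to do. A diagnostic sketch; the remediation lines are left as comments because they need root on an SELinux host, and the paths follow the question's layout:

```shell
# Is SELinux enforcing on this host?
if command -v getenforce >/dev/null 2>&1; then
    getenforce
else
    echo "SELinux tools not installed"
fi
# If it prints "Enforcing", inspect labels with:  ls -Z /var/www/html/limesurvey
# and grant httpd write access to the app's writable dirs (as root):
#   semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/html/limesurvey/tmp(/.*)?'
#   restorecon -Rv /var/www/html/limesurvey/tmp
```

This would also explain why the same script succeeds for /tmp, which carries a label httpd is allowed to write.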
I took these steps after installation: sudo ln /usr/local/bin/python3.7 /usr/bin/python3 sudo ln /usr/local/bin/pip3.7 /usr/bin/pip3 python3 --version gives Python 3.7.3 But I get these errors when running these commands: python3 -m pip install requests gives /usr/bin/python3: No module named pip.__main__; 'pip' is a package and cannot be directly executed pip3 install requests gives ModuleNotFoundError: No module named 'pip._internal',"python, python-3.x, pip, redhat",25,36431,4,https://stackoverflow.com/questions/56361133/how-to-fix-modulenotfounderror-no-module-named-pip-internal-with-python-sour 8200633,What's the difference between rpm and yum?,"Is there any difference between rpm and yum? I know recent systems prefer yum, but I want to know whether there is still a need for rpm.","What's the difference between rpm and yum? Is there any difference between rpm and yum? I know recent systems prefer yum, but I want to know whether there is still a need for rpm.","centos, redhat, fedora, yum, rhel",25,26801,2,https://stackoverflow.com/questions/8200633/whats-the-difference-between-rpm-and-yum 48930281,Export all users from KeyCloak,"I have a specific use case in which we want to ask Keycloak for all the users and the groups and roles for each user, on a daily basis.
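For the broken-pip question above, a safer recovery than hard-linking scripts into /usr/bin is to re-bootstrap pip with the interpreter's own `ensurepip` module and then always invoke pip as `python3 -m pip` (and, if a link is wanted at all, use a symlink with `ln -s` rather than a hard link). A sketch; the repair commands are commented because they modify the target interpreter:

```shell
# ensurepip ships inside CPython itself, so it is available even when the
# pip packages on disk are broken or mismatched:
python3 -c 'import ensurepip; print("ensurepip ok")'
# Repair steps, run with the source-built interpreter:
#   python3 -m ensurepip --upgrade
#   python3 -m pip install --upgrade pip
#   python3 -m pip install requests
```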
For reconciliation purposes with other internal systems. Currently we are using the provided Keycloak endpoints in the UsersResource for this. But we see that performance slows down after each call to a point we can't use this solution anymore. There are more then 30K users in the realm. We've also seen that Keycloak can export the database, but only on system boot (I guess for migration purposes). Given that we want to extract all the users on a daily basis we cannot use this. Are there some known functionalities or workarounds?","java, redhat, keycloak, keycloak-services, redhat-sso",25,65366,4,https://stackoverflow.com/questions/48930281/export-all-users-from-keycloak 2777737,How to set the rpmbuild destination folder,I noticed rpmbuild (-bb and --buildroot options) creates the .rpm in different locations depending of what OS are you using: GNU/Linux Ubuntu <= 9.04: /usr/src/rpm/... GNU/Linux Ubuntu >= 9.10: /home/rpmbuild/... GNU/Linux Fedora: /usr/src/redhat/... So how can I set manually the destination folder for all OS?,How to set the rpmbuild destination folder I noticed rpmbuild (-bb and --buildroot options) creates the .rpm in different locations depending of what OS are you using: GNU/Linux Ubuntu <= 9.04: /usr/src/rpm/... GNU/Linux Ubuntu >= 9.10: /home/rpmbuild/... GNU/Linux Fedora: /usr/src/redhat/... So how can I set manually the destination folder for all OS?,"redhat, rpm, rpmbuild",25,20625,5,https://stackoverflow.com/questions/2777737/how-to-set-the-rpmbuild-destination-folder 4669420,Have you ever got this message when moving a file? mv: will not overwrite just-created,"I have a bourne shell script which performs several tasks. One of these tasks is to move some files to certain directory. Today, when I ran the script I got the following message: mv: will not overwrite just-created with where filename is the original file name with its full path, and sameFilename is exactly the same file and path. 
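The rpmbuild output location asked about above is governed by the `%_topdir` macro, so it can be pinned per user instead of depending on each distro's default tree. One sketch, using a home-relative path as the example value:

```shell
# Persist the choice for this user; rpmbuild reads ~/.rpmmacros:
cat > "$HOME/.rpmmacros" <<'EOF'
%_topdir %(echo $HOME)/rpmbuild
EOF
# One-off alternative on the command line:
#   rpmbuild -bb --define '_topdir /tmp/rpmbuild' package.spec
cat "$HOME/.rpmmacros"
```

With `%_topdir` fixed, the RPMS/, SOURCES/, SPECS/ etc. subdirectories land in the same place on every distribution.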
I regularly use this script every day and never got this message before. Right after running the script i re-run it to see if the error persisted, and I was not able to reproduce it again. I am running this script in a Red Hat 5 Enterprise.","Have you ever got this message when moving a file? mv: will not overwrite just-created I have a bourne shell script which performs several tasks. One of these tasks is to move some files to certain directory. Today, when I ran the script I got the following message: mv: will not overwrite just-created with where filename is the original file name with its full path, and sameFilename is exactly the same file and path. I regularly use this script every day and never got this message before. Right after running the script i re-run it to see if the error persisted, and I was not able to reproduce it again. I am running this script in a Red Hat 5 Enterprise.","shell, move, redhat, overwrite",25,26118,2,https://stackoverflow.com/questions/4669420/have-you-ever-got-this-message-when-moving-a-file-mv-will-not-overwrite-just-c 40593242,Systemd: Using both After and Requires,"I have a service foo.service which depends on service bar.service . I need to make sure that bar.service is started before foo.service and that bar.service launched successfully. From this source it says that Requires : This directive lists any units upon which this unit essentially depends. If the current unit is activated, the units listed here must successfully activate as well, else this unit will fail. These units are started in parallel with the current unit by default. and that After : The units listed in this directive will be started before starting the current unit. This does not imply a dependency relationship and one must be established through the above directives if this is required. Is it correct to have both the Requires and After sections in the same unit file? 
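GNU mv prints the "will not overwrite just-created" diagnostic from the question above when one invocation moves two source files that share a basename into the same destination: having just moved the first, it refuses to clobber it with the second. That also fits the intermittent behaviour, since it only fires when a duplicate name happens to match a glob. A reproduction with throwaway files:

```shell
mkdir -p /tmp/mvdemo/a /tmp/mvdemo/b /tmp/mvdemo/dest
echo one > /tmp/mvdemo/a/report.txt
echo two > /tmp/mvdemo/b/report.txt
# Both sources share the basename report.txt, so the second move would
# clobber the file the first move just created:
mv /tmp/mvdemo/a/report.txt /tmp/mvdemo/b/report.txt /tmp/mvdemo/dest/ \
    2>&1 | tee /tmp/mvdemo/err.txt
rm -rf /tmp/mvdemo
```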
Requires says that the service will be launched in parallel, but After says it will be launched before. If bar.service fails to start during the After condition, will it attempt to launch it again during the Requires section? If so I need to find another way to launch foo.service foo.service [Unit] After=bar.service Requires=bar.service","Systemd: Using both After and Requires I have a service foo.service which depends on service bar.service . I need to make sure that bar.service is started before foo.service and that bar.service launched successfully. From this source it says that Requires : This directive lists any units upon which this unit essentially depends. If the current unit is activated, the units listed here must successfully activate as well, else this unit will fail. These units are started in parallel with the current unit by default. and that After : The units listed in this directive will be started before starting the current unit. This does not imply a dependency relationship and one must be established through the above directives if this is required. Is it correct to have both the Requires and After sections in the same unit file? Requires says that the service will be launched in parallel, but After says it will be launched before. If bar.service fails to start during the After condition, will it attempt to launch it again during the Requires section? If so I need to find another way to launch foo.service foo.service [Unit] After=bar.service Requires=bar.service","linux, redhat, systemd",24,28154,2,https://stackoverflow.com/questions/40593242/systemd-using-both-after-and-requires 11688819,How to configure Django on OpenShift?,"I recently tried to export a Django project on OpenShift, but fruitlessly. The only solutions I found were ""prebuilt"" ones (such as [URL] ). I spent some hours trying to adapt it to my project but I always got an Internal Server Error. So, how to setup django on openshift?","How to configure Django on OpenShift? 
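To the After/Requires question above: the two directives are orthogonal and are normally used together. `Requires=` only pulls bar.service into the transaction (and fails foo if bar fails to activate); `After=` only orders startup. bar is activated once, not twice, and with both directives foo waits for that single activation to finish. The usual pattern, written out as a unit fragment (the ExecStart path is a placeholder):

```shell
cat > /tmp/foo.service <<'EOF'
[Unit]
Description=foo, gated on bar
# Pull in bar.service; if bar fails to activate, foo is not started:
Requires=bar.service
# Order foo strictly after bar; without After= they would start in parallel:
After=bar.service

[Service]
ExecStart=/usr/bin/foo
EOF
cat /tmp/foo.service
```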
I recently tried to export a Django project on OpenShift, but fruitlessly. The only solutions I found were ""prebuilt"" ones (such as [URL] ). I spent some hours trying to adapt it to my project but I always got an Internal Server Error. So, how to setup django on openshift?","python, django, openshift, redhat",24,10440,1,https://stackoverflow.com/questions/11688819/how-to-configure-django-on-openshift 5974403,How to find whether MySQL is installed in Red Hat?,I am currently using Red Hat linux. I just want to find out whether MySQL is installed in that system. If yes where is it located? can anyone help please...,How to find whether MySQL is installed in Red Hat? I am currently using Red Hat linux. I just want to find out whether MySQL is installed in that system. If yes where is it located? can anyone help please...,"linux, redhat, status",23,112159,6,https://stackoverflow.com/questions/5974403/how-to-find-whether-mysql-is-installed-in-red-hat 15164520,Determine Redhat Linux Version,"How do I determine which RedHat Linux version I am running? Here's what I've read: /etc/redhat-release file contains the version, but anybody can tamper with that file. people say uname command, but you can install any kernel on Redhat. If I am running redhat 5.1 and someone upgrade it with 5.2 or 5.x, what determines the version of RedHat? even lsb_release -a read /etc/redhat-release file.","Determine Redhat Linux Version How do I determine which RedHat Linux version I am running? Here's what I've read: /etc/redhat-release file contains the version, but anybody can tamper with that file. people say uname command, but you can install any kernel on Redhat. If I am running redhat 5.1 and someone upgrade it with 5.2 or 5.x, what determines the version of RedHat? even lsb_release -a read /etc/redhat-release file.","linux, release, redhat",23,74426,9,https://stackoverflow.com/questions/15164520/determine-redhat-linux-version 1212925,On Linux - set maximum open files to unlimited. 
Possible?,"Is it possible to set the maximum number of open files to some ""infinite"" value or must it be a number? I had a requirement to set the descriptor limit for a daemon user to be ""unlimited"" and I'm trying to determine if that's possible or how to do it. I've seen some mailing lists refer to a ""max"" value that can be used (as in: ""myuser hard nofile max"", but so far the man pages and references I've consulted don't back that up. If I can't use 'max' or similar, I'd like to know how to determine what the max number of files is (theoretically) so I have some basis for whatever number I pick. I don't want to use 100000000 or something if there's a more reasonable way to get an upper bound. I'm using RHEL 5 if it's important. Update: I'm an idiot when it comes to writing questions. Ideally I'd like to do this in the limits.conf file (which is where ""max"" would come from). Does that change any answers? Thanks for the comments. This is for a JBOSS instance and not a daemon I'm writing so I don't know if setrlimit() is useful to me. However, Jefromi - I do like the definition of Infinity :) I saw a post that suggests a file descriptor is ""two shorts and a pointer"" so I should be able to calculate the approximate upper bound.","On Linux - set maximum open files to unlimited. Possible? Is it possible to set the maximum number of open files to some ""infinite"" value or must it be a number? I had a requirement to set the descriptor limit for a daemon user to be ""unlimited"" and I'm trying to determine if that's possible or how to do it. I've seen some mailing lists refer to a ""max"" value that can be used (as in: ""myuser hard nofile max"", but so far the man pages and references I've consulted don't back that up. If I can't use 'max' or similar, I'd like to know how to determine what the max number of files is (theoretically) so I have some basis for whatever number I pick. 
I don't want to use 100000000 or something if there's a more reasonable way to get an upper bound. I'm using RHEL 5 if it's important. Update: I'm an idiot when it comes to writing questions. Ideally I'd like to do this in the limits.conf file (which is where ""max"" would come from). Does that change any answers? Thanks for the comments. This is for a JBOSS instance and not a daemon I'm writing so I don't know if setrlimit() is useful to me. However, Jefromi - I do like the definition of Infinity :) I saw a post that suggests a file descriptor is ""two shorts and a pointer"" so I should be able to calculate the approximate upper bound.","linux, kernel, redhat",23,52190,2,https://stackoverflow.com/questions/1212925/on-linux-set-maximum-open-files-to-unlimited-possible 11418540,JTextField Issues with Numpad,"I've recently run into a strange issue with the Java JTextField. When I run the following code (see below), typing a ""0"" into the text field first sends a paste action, then types ""0"". For example, if ""text"" is copied to the clipboard, ""text0"" is typed when I type ""0"". Similarly, typing a ""4"" replaces the previous character with a ""4"" (I'm guessing this is a delete action, then the ""4"" is typed). Typing ""7"" clears the text field before typing ""7"". Here is the code: import javax.swing.JFrame; import javax.swing.JTextField; public class Main { public static void main(String[] args) { JFrame frame = new JFrame(); JTextField text = new JTextField(); frame.add(text); frame.setSize(500, 500); frame.setVisible(true); } } The problem is occurring on Red Hat Linux (accessed using VNC from Windows XP); everything runs as expected on Window XP. Update : No problems with the program on Ubuntu either. I've also tried using different keyboards and VNC viewers. 
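Back on the open-files question: there is no "max" keyword for nofile in limits.conf, and the hard limit cannot exceed the kernel's own per-process ceiling. On kernels from 2.6.25 onward that ceiling is the tunable fs.nr_open; on kernels as old as RHEL 5's it was the compile-time NR_OPEN constant (1048576). Reading the ceiling gives a defensible number to put in limits.conf instead of an arbitrary one:

```shell
# Current soft and hard limits for this shell:
ulimit -Sn
ulimit -Hn
# Kernel ceiling for per-process file descriptors (2.6.25+ kernels):
cat /proc/sys/fs/nr_open 2>/dev/null || echo "fs.nr_open not exposed on this kernel"
# Example limits.conf entry using an explicit bound:
#   myuser  hard  nofile  1048576
```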
Update 2 : Java Versions For Red Hat: java version ""1.6.0_17"" OpenJDK Runtime Environment (IcedTea6 1.7.7) (rhel-1.17.b17.el5-x86_64) OpenJDK 64-Bit Server VM (build 14.0-b16, mixed mode) For XP: java version ""1.7.0_05"" Java(TM) SE Runtime Environment (build 1.7.0_05-b05) Java HotSpot(TM) Client VM (build 23.1-b03, mixed mode, sharing) Update 3 : Tried running the program on three different Red Hat machines (all in the same group at work), and additionally tried running it from a different XP computer and restarting. Update 4 : Today I arrived at work to find that the problem had magically gone away. However, it'd really be nice to know why it happened in the first place so that I (and anyone else who many encounter this strange issue) know how to fix it in the future.","JTextField Issues with Numpad I've recently run into a strange issue with the Java JTextField. When I run the following code (see below), typing a ""0"" into the text field first sends a paste action, then types ""0"". For example, if ""text"" is copied to the clipboard, ""text0"" is typed when I type ""0"". Similarly, typing a ""4"" replaces the previous character with a ""4"" (I'm guessing this is a delete action, then the ""4"" is typed). Typing ""7"" clears the text field before typing ""7"". Here is the code: import javax.swing.JFrame; import javax.swing.JTextField; public class Main { public static void main(String[] args) { JFrame frame = new JFrame(); JTextField text = new JTextField(); frame.add(text); frame.setSize(500, 500); frame.setVisible(true); } } The problem is occurring on Red Hat Linux (accessed using VNC from Windows XP); everything runs as expected on Window XP. Update : No problems with the program on Ubuntu either. I've also tried using different keyboards and VNC viewers. 
Update 2 : Java Versions For Red Hat: java version ""1.6.0_17"" OpenJDK Runtime Environment (IcedTea6 1.7.7) (rhel-1.17.b17.el5-x86_64) OpenJDK 64-Bit Server VM (build 14.0-b16, mixed mode) For XP: java version ""1.7.0_05"" Java(TM) SE Runtime Environment (build 1.7.0_05-b05) Java HotSpot(TM) Client VM (build 23.1-b03, mixed mode, sharing) Update 3 : Tried running the program on three different Red Hat machines (all in the same group at work), and additionally tried running it from a different XP computer and restarting. Update 4 : Today I arrived at work to find that the problem had magically gone away. However, it'd really be nice to know why it happened in the first place so that I (and anyone else who many encounter this strange issue) know how to fix it in the future.","java, linux, jtextfield, redhat, numpad",23,1593,5,https://stackoverflow.com/questions/11418540/jtextfield-issues-with-numpad 26123740,Is it possible to install aws-cli package without root permission?,"As title suggested, I haven't been able to find a good way to install aws-cli ( [URL] ) without having the root access (or equivalent of sudo privileges). The way Homebrew setup on Mac is hinting at it may be possible, provided that a few directories and permissions are set in a way to facility future installs. However, I have yet to find any approach in Linux (specially, Red Hat Enterprise Linux or CentOS distroes). I am also aware of SCL from RHEL ( [URL] ) But again, it requires sudo .","Is it possible to install aws-cli package without root permission? As title suggested, I haven't been able to find a good way to install aws-cli ( [URL] ) without having the root access (or equivalent of sudo privileges). The way Homebrew setup on Mac is hinting at it may be possible, provided that a few directories and permissions are set in a way to facility future installs. However, I have yet to find any approach in Linux (specially, Red Hat Enterprise Linux or CentOS distroes). 
I am also aware of SCL from RHEL ( [URL] ) But again, it requires sudo .","amazon-web-services, centos, redhat, aws-cli",22,34267,8,https://stackoverflow.com/questions/26123740/is-it-possible-to-install-aws-cli-package-without-root-permission 13401727,rpmbuild Installed (but unpackaged) files source,"I'm trying to build an RPM from binaries on a REDHAT 6 system. I have all the files included in the %files section ( find /path/to/fake/install -type f >> specfile ) When I run rpmbuild -bb specfile --target x86_64 I get Checking for unpackaged file(s): /usr/lib/rpm/check-files /path/to/rpmbuild/BUILDROOT/Package-1.0.0-1.el6.x86_64 error: Installed (but unpackaged) file(s) found: RPM build errors: Installed (but unpackaged) file(s) found: Note that no files are listed in the error message. I'm not sure what's wrong, any ideas?","rpmbuild Installed (but unpackaged) files source I'm trying to build an RPM from binaries on a REDHAT 6 system. I have all the files included in the %files section ( find /path/to/fake/install -type f >> specfile ) When I run rpmbuild -bb specfile --target x86_64 I get Checking for unpackaged file(s): /usr/lib/rpm/check-files /path/to/rpmbuild/BUILDROOT/Package-1.0.0-1.el6.x86_64 error: Installed (but unpackaged) file(s) found: RPM build errors: Installed (but unpackaged) file(s) found: Note that no files are listed in the error message. I'm not sure what's wrong, any ideas?","linux, packaging, redhat, rpm, rpmbuild",22,46375,3,https://stackoverflow.com/questions/13401727/rpmbuild-installed-but-unpackaged-files-source 21264601,Permanently enable RHEL scl,Is there a way to permanently enable custom Software Collections for RedHat? I have installed an scl to provide python27 in RHEL6 and don't want to have to enable the custom scl every time.,Permanently enable RHEL scl Is there a way to permanently enable custom Software Collections for RedHat? 
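For the software-collections question above, the trick usually cited (a sketch; it assumes scl-utils provides the scl_source helper, and python27 is the collection named in the question) is to drop a snippet into /etc/profile.d so every login shell enables the collection automatically:

```shell
# Demo writes to a scratch directory; on a real box the target is /etc/profile.d
# (root required). "python27" is the collection named in the question.
target="${PROFILE_D:-/tmp/profile.d-demo}"
mkdir -p "$target"
cat > "$target/enable-python27.sh" <<'EOF'
# Enable the python27 software collection for every login shell
source scl_source enable python27
EOF
cat "$target/enable-python27.sh"
```

After copying the snippet into /etc/profile.d, new login shells pick up the collection without running `scl enable` by hand.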
I have installed an scl to provide python27 in RHEL6 and don't want to have to enable the custom scl every time.,"redhat, rhel, software-collections, rhel-scl",22,15898,3,https://stackoverflow.com/questions/21264601/permanently-enable-rhel-scl 43993890,ModuleNotFoundError: No module named '_sqlite3',"On Redhat 4.4.7-18 I am trying to run python3 code using sqlite, but I get the following import error: Traceback (most recent call last): File ""database.py"", line 7, in import sqlite3 File ""/usr/local/lib/python3.6/sqlite3/__init__.py"", line 23, in from sqlite3.dbapi2 import * File ""/usr/local/lib/python3.6/sqlite3/dbapi2.py"", line 27, in from _sqlite3 import * ModuleNotFoundError: No module named '_sqlite3' I tried to install it: >sudo pip install sqlite3 Collecting sqlite3 Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'ProtocolError('Connection aborted.', error(101, 'Network is unreachable'))': /simple/sqlite3/ (while the network is reachable...) and with the following command: > sudo yum install sqlite-devel Loaded plugins: post-transaction-actions, product-id, refresh-packagekit, : rhnplugin, search-disabled-repos, security, subscription-manager This system is receiving updates from RHN Classic or RHN Satellite. Setting up Install Process Package sqlite-devel-3.6.20-1.el6_7.2.x86_64 already installed and latest version Nothing to do So it is installed and not installed? 
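A quick diagnostic for the sqlite3 question above (it only checks the interpreter you run it with): the `sqlite3` stdlib module is compiled into Python itself, so pip cannot install it; if the import fails, the interpreter must be rebuilt after sqlite-devel is present.

```shell
# If this prints a version, the interpreter was built with sqlite support;
# if it raises ModuleNotFoundError, rebuild Python after `yum install sqlite-devel`.
python3 -c 'import sqlite3; print("bundled SQLite:", sqlite3.sqlite_version)'
```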
Any suggestion how I can solve the original problem?","ModuleNotFoundError: No module named '_sqlite3' On Redhat 4.4.7-18 I am trying to run python3 code using sqlite, but I get the following import error: Traceback (most recent call last): File ""database.py"", line 7, in import sqlite3 File ""/usr/local/lib/python3.6/sqlite3/__init__.py"", line 23, in from sqlite3.dbapi2 import * File ""/usr/local/lib/python3.6/sqlite3/dbapi2.py"", line 27, in from _sqlite3 import * ModuleNotFoundError: No module named '_sqlite3' I tried to install it: >sudo pip install sqlite3 Collecting sqlite3 Retrying (Retry(total=4, connect=None, read=None, redirect=None)) after connection broken by 'ProtocolError('Connection aborted.', error(101, 'Network is unreachable'))': /simple/sqlite3/ (while the network is reachable...) and with the following command: > sudo yum install sqlite-devel Loaded plugins: post-transaction-actions, product-id, refresh-packagekit, : rhnplugin, search-disabled-repos, security, subscription-manager This system is receiving updates from RHN Classic or RHN Satellite. Setting up Install Process Package sqlite-devel-3.6.20-1.el6_7.2.x86_64 already installed and latest version Nothing to do So it is installed and not installed? Any suggestion how I can solve the original problem?","python-3.x, sqlite, redhat",21,65883,4,https://stackoverflow.com/questions/43993890/modulenotfounderror-no-module-named-sqlite3 69539286,How to compile python3 on RHEL with SSL? SSL cannot be imported,"I'm trying to compile python on RHEL because my current python is using an old 1.0.2k ssl version. (test_env) [brad@reason tlscheck]$ python3 --version Python 3.9.3 (test_env) [brad@reason tlscheck]$ python3 -c ""import ssl; print(ssl.OPENSSL_VERSION)"" OpenSSL 1.0.2k-fips 26 Jan 2017 (test_env) [brad@reason tlscheck]$ openssl version OpenSSL 1.1.1l 24 Aug 2021 I think the issue is that when I compiled 3.9.3, I had not updated my OpenSSL version. 
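A likely pitfall in the build question above: `--with-openssl` expects an OpenSSL *install prefix* (a directory holding include/openssl and the libraries), not OPENSSLDIR; /etc/ssl only holds configuration. A sketch of the rebuild, with a hypothetical prefix:

```shell
# Sketch only: point configure at the OpenSSL *install prefix*, not /etc/ssl.
# The prefix below is hypothetical; adjust to wherever OpenSSL 1.1.1 was installed.
build_python_against_openssl() {
    prefix=/usr/local/openssl-1.1.1
    ./configure --with-openssl="$prefix" \
                --with-openssl-rpath=auto   # 3.10+ option; embeds the library path
    make && sudo make install
}
```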
I have since updated my OpenSSL and need to use it with python. So I have downloaded the newest python 3.10, but in the make stage I get an error that it will not make with ssl. I get the following message: Following modules built successfully but were removed because they could not be imported: _hashlib _ssl Could not build the ssl module! Python requires a OpenSSL 1.1.1 or newer This is the full log of trying to compile: [URL] When I use the configure options that @tony-yip mentioned, I get the following in my configure. checking for openssl/ssl.h in /etc/ssl... no checking whether compiling and linking against OpenSSL works... no I'm determining my openssl location with: [brad@reason Python-3.10.0]$ openssl version -d OPENSSLDIR: ""/etc/ssl"" To configure, I'm using: ./configure --with-openssl=""/etc/ssl"" When I look for ssl.h, I find it in /usr/include/openssl . So I linked this directory to lib in /etc/ssl , but it was no help. [brad@reason Python-3.10.0]$ ls -l /etc/ssl total 40 lrwxrwxrwx 1 root root 16 Jul 16 2020 certs -> ../pki/tls/certs -rw-r--r-- 1 root root 412 Oct 12 02:53 ct_log_list.cnf -rw-r--r-- 1 root root 412 Oct 12 02:53 ct_log_list.cnf.dist lrwxrwxrwx 1 root root 20 Oct 18 10:22 lib -> /usr/include/openssl drwxr-xr-x 2 root root 4096 Oct 12 02:53 misc -rw-r--r-- 1 root root 10909 Oct 12 02:53 openssl.cnf -rw-r--r-- 1 root root 10909 Oct 12 02:53 openssl.cnf.dist drwxr-xr-x 2 root root 4096 Oct 12 02:53 private [brad@reason Python-3.10.0]$ sudo find / -name ssl.h | grep include find: ‘/tmp/.mount_jetbraAJFEnl’: Permission denied /home/brad/Downloads/freerdp-2.0.0-rc4/winpr/include/winpr/ssl.h /home/brad/Downloads/FreeRDP/winpr/include/winpr/ssl.h /home/brad/Development/tlscheck/openssl-1.1.1l/include/openssl/ssl.h /usr/include/openssl/ssl.h /var/lib/docker/overlay2/23e6f3c164ec8939352891c99393669df4ed6e66da1e04ce84616073f08c6051/diff/usr/include/openssl/ssl.h 
/var/lib/flatpak/runtime/org.freedesktop.Sdk/x86_64/18.08/c8075e929daaffcbe5c78c9e87c0f0463d75e90d2b59c92355fa486e79c7d0e3/files/include/nss/ssl.h /var/lib/flatpak/runtime/org.freedesktop.Sdk/x86_64/18.08/c8075e929daaffcbe5c78c9e87c0f0463d75e90d2b59c92355fa486e79c7d0e3/files/include/openssl/ssl.h find: ‘/run/user/1000/gvfs’: Permission denied This may be extraneous information, but my libssl.so is here: [brad@reason Python-3.10.0]$ ls /usr/lib64 | grep ssl libevent_openssl-2.0.so.5 libevent_openssl-2.0.so.5.1.9 libssl3.so libssl.so libssl.so.10 libssl.so.1.0.2k openssl Any thoughts on why make isn't able to include ssl, please let me know. Thanks.","How to compile python3 on RHEL with SSL? SSL cannot be imported I'm trying to compile python on RHEL because my current python is using an old 1.0.2k ssl version. (test_env) [brad@reason tlscheck]$ python3 --version Python 3.9.3 (test_env) [brad@reason tlscheck]$ python3 -c ""import ssl; print(ssl.OPENSSL_VERSION)"" OpenSSL 1.0.2k-fips 26 Jan 2017 (test_env) [brad@reason tlscheck]$ openssl version OpenSSL 1.1.1l 24 Aug 2021 I think the issue is that when I compiled 3.9.3, I had not updated my OpenSSL version. I have since updated my OpenSSL and need to use it with python. So I have downloaded the newest python 3.10, but in the make stage I get an error that it will not make with ssl. I get the following message: Following modules built successfully but were removed because they could not be imported: _hashlib _ssl Could not build the ssl module! Python requires a OpenSSL 1.1.1 or newer This is the full log of trying to compile: [URL] When I use the configure options that @tony-yip mentioned, I get the following in my configure. checking for openssl/ssl.h in /etc/ssl... no checking whether compiling and linking against OpenSSL works... 
no I'm determining my openssl location with: [brad@reason Python-3.10.0]$ openssl version -d OPENSSLDIR: ""/etc/ssl"" To configure, I'm using: ./configure --with-openssl=""/etc/ssl"" When I look for ssl.h, I find it in /usr/include/openssl . So I linked this directory to lib in /etc/ssl , but it was no help. [brad@reason Python-3.10.0]$ ls -l /etc/ssl total 40 lrwxrwxrwx 1 root root 16 Jul 16 2020 certs -> ../pki/tls/certs -rw-r--r-- 1 root root 412 Oct 12 02:53 ct_log_list.cnf -rw-r--r-- 1 root root 412 Oct 12 02:53 ct_log_list.cnf.dist lrwxrwxrwx 1 root root 20 Oct 18 10:22 lib -> /usr/include/openssl drwxr-xr-x 2 root root 4096 Oct 12 02:53 misc -rw-r--r-- 1 root root 10909 Oct 12 02:53 openssl.cnf -rw-r--r-- 1 root root 10909 Oct 12 02:53 openssl.cnf.dist drwxr-xr-x 2 root root 4096 Oct 12 02:53 private [brad@reason Python-3.10.0]$ sudo find / -name ssl.h | grep include find: ‘/tmp/.mount_jetbraAJFEnl’: Permission denied /home/brad/Downloads/freerdp-2.0.0-rc4/winpr/include/winpr/ssl.h /home/brad/Downloads/FreeRDP/winpr/include/winpr/ssl.h /home/brad/Development/tlscheck/openssl-1.1.1l/include/openssl/ssl.h /usr/include/openssl/ssl.h /var/lib/docker/overlay2/23e6f3c164ec8939352891c99393669df4ed6e66da1e04ce84616073f08c6051/diff/usr/include/openssl/ssl.h /var/lib/flatpak/runtime/org.freedesktop.Sdk/x86_64/18.08/c8075e929daaffcbe5c78c9e87c0f0463d75e90d2b59c92355fa486e79c7d0e3/files/include/nss/ssl.h /var/lib/flatpak/runtime/org.freedesktop.Sdk/x86_64/18.08/c8075e929daaffcbe5c78c9e87c0f0463d75e90d2b59c92355fa486e79c7d0e3/files/include/openssl/ssl.h find: ‘/run/user/1000/gvfs’: Permission denied This may be extraneous information, but my libssl.so is here: [brad@reason Python-3.10.0]$ ls /usr/lib64 | grep ssl libevent_openssl-2.0.so.5 libevent_openssl-2.0.so.5.1.9 libssl3.so libssl.so libssl.so.10 libssl.so.1.0.2k openssl Any thoughts on why make isn't able to include ssl, please let me know. 
Thanks.","python, ssl, openssl, redhat",21,22687,5,https://stackoverflow.com/questions/69539286/how-to-compile-python3-on-rhel-with-ssl-ssl-cannot-be-imported 30665912,No ruby-devel in RHEL7?,"I have a recently installed RHEL7 system, and need to do gem install jekyll, however this fails as: Fetching: yajl-ruby-1.2.1.gem (100%) Building native extensions. This could take a while... ERROR: Error installing jekyll: ERROR: Failed to build gem native extension. /usr/bin/ruby extconf.rb mkmf.rb can't find header files for ruby at /usr/share/include/ruby.h Google suggests this is due to the lack of a ruby-devel package being installed. However there doesn't seem to be such a package in RHEL7. Do I need to move to a software collection (don't really want to do this as this will be for a production machine, not development) or can I get it some other way?","No ruby-devel in RHEL7? I have a recently installed RHEL7 system, and need to do gem install jekyll, however this fails as: Fetching: yajl-ruby-1.2.1.gem (100%) Building native extensions. This could take a while... ERROR: Error installing jekyll: ERROR: Failed to build gem native extension. /usr/bin/ruby extconf.rb mkmf.rb can't find header files for ruby at /usr/share/include/ruby.h Google suggests this is due to the lack of a ruby-devel package being installed. However there doesn't seem to be such a package in RHEL7. Do I need to move to a software collection (don't really want to do this as this will be for a production machine, not development) or can I get it some other way?","ruby, redhat",21,25593,8,https://stackoverflow.com/questions/30665912/no-ruby-devel-in-rhel7 139605,Where to find packages names and versions for RedHat?,"How can I find out whether a specific RedHat release (RHEL4, RHEL5...) contains a certain package (or a certain version of a package)? For Debian and Ubuntu, there's packages.debian.org and packages.ubuntu.com; is there a similar web site for RedHat? 
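Short of a packages.debian.org equivalent, the package-versions question above can often be answered locally from repo metadata (a sketch; it assumes yum-utils is installed for repoquery and that the relevant repos are configured and reachable):

```shell
# Sketch (requires yum-utils and reachable repos): ask the metadata, not a website.
query_versions() {
    yum list available "$1" --showduplicates   # every version the repos offer
    repoquery --location "$1"                  # download URL of the RPM itself
}
# e.g. query_versions httpd
```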
Note: I don't want to have to install all the releases just to check some package version :-)","Where to find packages names and versions for RedHat? How can I find out whether a specific RedHat release (RHEL4, RHEL5...) contains a certain package (or a certain version of a package)? For Debian and Ubuntu, there's packages.debian.org and packages.ubuntu.com; is there a similar web site for RedHat? Note: I don't want to have to install all the releases just to check some package version :-)","linux, package, redhat, rpm",21,25269,7,https://stackoverflow.com/questions/139605/where-to-find-packages-names-and-versions-for-redhat 44034752,how to decide the memory requirement for my elasticsearch server,"I have a scenario here, The Elasticsearch DB with about 1.4 TB of data having, _shards"": { ""total"": 202, ""successful"": 101, ""failed"": 0 } Each index size is approximately between, 3 GB to 30 GB and in near future, it is expected to have 30GB file size on a daily basis. OS information: NAME=""Red Hat Enterprise Linux Server"" VERSION=""7.2 (Maipo)"" ID=""rhel"" ID_LIKE=""fedora"" VERSION_ID=""7.2"" PRETTY_NAME=""Red Hat Enterprise Linux Server 7.2 (Maipo)"" The system has 32 GB of RAM and the filesystem is 2TB (1.4TB Utilised). I have configured a maximum of 15 GB for Elasticsearch server. But this is not enough for me to query this DB. The server hangs for a single query hit on server. I will be including 1TB on the filesystem in this server so that the total available filesystem size will be 3TB. also I am planning to increase the memory to 128GB which is an approximate estimation. Could someone help me calculate how to determine the minimum RAM required for a server to respond at least 50 requests simultaneously? It would be greatly appreciated if you can suggest any tool/ formula to analyze this requirement. 
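For the Elasticsearch sizing question above, a common starting point (a rule of thumb, not a formula; real requirements depend heavily on query patterns and shard counts) is to give the JVM heap about half of RAM, capped just below 32 GB so compressed object pointers stay enabled, leaving the rest to the OS filesystem cache:

```shell
# Rule-of-thumb heap sizing: min(RAM/2, 31 GB). With the planned 128 GB box:
ram_gb=128
heap_gb=$(( ram_gb / 2 ))
[ "$heap_gb" -gt 31 ] && heap_gb=31
echo "suggested -Xms/-Xmx: ${heap_gb}g (rest left to the OS filesystem cache)"
```

On the planned 128 GB machine this caps the heap at 31g; concurrency (the 50-request target) is then tuned via load testing rather than a closed-form calculation.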
also it will be helpful if you can give me any other scenario with numbers so that I can use that to determine my resource need.","how to decide the memory requirement for my elasticsearch server I have a scenario here, The Elasticsearch DB with about 1.4 TB of data having, _shards"": { ""total"": 202, ""successful"": 101, ""failed"": 0 } Each index size is approximately between, 3 GB to 30 GB and in near future, it is expected to have 30GB file size on a daily basis. OS information: NAME=""Red Hat Enterprise Linux Server"" VERSION=""7.2 (Maipo)"" ID=""rhel"" ID_LIKE=""fedora"" VERSION_ID=""7.2"" PRETTY_NAME=""Red Hat Enterprise Linux Server 7.2 (Maipo)"" The system has 32 GB of RAM and the filesystem is 2TB (1.4TB Utilised). I have configured a maximum of 15 GB for Elasticsearch server. But this is not enough for me to query this DB. The server hangs for a single query hit on server. I will be including 1TB on the filesystem in this server so that the total available filesystem size will be 3TB. also I am planning to increase the memory to 128GB which is an approximate estimation. Could someone help me calculate how to determine the minimum RAM required for a server to respond at least 50 requests simultaneously? It would be greatly appreciated if you can suggest any tool/ formula to analyze this requirement. also it will be helpful if you can give me any other scenario with numbers so that I can use that to determine my resource need.","elasticsearch, memory, filesystems, redhat",21,53692,3,https://stackoverflow.com/questions/44034752/how-to-decide-the-memory-requirement-for-my-elasticsearch-server 6298865,How to install Maven into Red Hat Enterprise Linux 6?,"I'm working on a Scientific Linux box and am trying to install Maven using the yum command. Scientific Linux for those of you who do not know is based off of Red Hat Linux Enterprise Edition 6. 
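For the Maven question above, when no usable yum repo exists, a tarball install behind a stable symlink keeps upgrades easy (a sketch; the version number and /opt layout are illustrative assumptions, not official packaging):

```shell
# Sketch: unpack under /opt and point a stable symlink at it; upgrading is then
# "unpack new version, flip symlink". Version and paths are illustrative.
install_maven() {
    ver="3.0.5"
    curl -fsSL "https://archive.apache.org/dist/maven/maven-3/${ver}/binaries/apache-maven-${ver}-bin.tar.gz" \
        | sudo tar -xz -C /opt &&
    sudo ln -sfn "/opt/apache-maven-${ver}" /opt/maven &&
    export PATH="/opt/maven/bin:$PATH" &&
    mvn -version
}
```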
I'd prefer to install Maven in a way that lends itself to easy updating, that is why I have shied away from simply going to the Apache Maven site and getting the files I need. Simply running yum with root privileges was not enough. I used yum search maven which returned ""JPackage Utilities"", which I tried to install only to get: Package jpackage-utils-1.7.5-3.12.el6.noarch already installed and latest version I was assuming that creating a new repo file, something like /etc/yum.repos.d/maven.repo , would do the trick. I found a site suggesting that I point my maven.repo file to the URL [URL] , however this seems to be a fix for an older version of Linux as it did not solve my problem. As always, thanks in advance for any help or suggestions!","How to install Maven into Red Hat Enterprise Linux 6? I'm working on a Scientific Linux box and am trying to install Maven using the yum command. Scientific Linux for those of you who do not know is based off of Red Hat Linux Enterprise Edition 6. I'd prefer to install Maven in a way that lends itself to easy updating, that is why I have shied away from simply going to the Apache Maven site and getting the files I need. Simply running yum with root privileges was not enough. I used yum search maven which returned ""JPackage Utilities"", which I tried to install only to get: Package jpackage-utils-1.7.5-3.12.el6.noarch already installed and latest version I was assuming that creating a new repo file, something like /etc/yum.repos.d/maven.repo , would do the trick. 
I found a site suggesting that I point my maven.repo file to the URL [URL] , however this seems to be a fix for an older version of Linux as it did not solve my problem. As always, thanks in advance for any help or suggestions!","linux, maven, redhat, yum",20,46774,4,https://stackoverflow.com/questions/6298865/how-to-install-maven-into-red-hat-enterprise-linux-6 53011147,Can't resolve DNS name of EFS while mounting it on red hat ec2 instances using putty,"I am having an issue where I am unable to mount my EFS on red hat ec2 instance using the DNS names. It throws the error mount.nfs4: Failed to resolve server us-east-1a.fs-c2aXXXX.efs.us-east-1.amazonaws.com: Name or service not known I am following the instructions provided by AWS. I tried below two ways to do it and both throw the same above error. I can confirm that the DNS names are correct. 1st: mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-c2aXXXX.efs.us-east-1.amazonaws.com:/ efs 2nd: mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 $(curl -s [URL] /efs However, if I use IP instead of DNS names, I am able to mount it just fine. So below command works. mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport 10.38.X.XX:/ /efs I am fine with using IP instead of DNS as long as I am able to mount it. Now my issue is as soon as I stop and start the instance again, my mount is gone. Even after I add the below entry to the /etc/fstab , it doesn't do auto mount. 10.38.X.XXX:/ /efs efs defaults,_netdev 0 0 Can someone please help me in either resolving the issue with DNS or tell me how to auto mount using IPs?","Can't resolve DNS name of EFS while mounting it on red hat ec2 instances using putty I am having an issue where I am unable to mount my EFS on red hat ec2 instance using the DNS names. 
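For the EFS question here, mounting by IP can be made to survive reboots: the `efs` filesystem type in the question's fstab line needs the amazon-efs-utils helper, so with a stock NFS client the type must be spelled as `nfs4` with the options written out. A sketch (the IP is the question's own placeholder; the function takes an alternate fstab path so it can be exercised safely):

```shell
# Sketch: append a plain-nfs4 fstab line (the "efs" type needs amazon-efs-utils).
# 10.38.X.XXX is the placeholder address from the question; the first argument
# overrides /etc/fstab so the function can be tested against a scratch file.
add_efs_fstab_entry() {
    fstab="${1:-/etc/fstab}"
    printf '%s\n' \
      '10.38.X.XXX:/  /efs  nfs4  nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport,_netdev  0 0' \
      >> "$fstab"
}
# As root, against the real file: add_efs_fstab_entry && sudo mount -a
```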
It throws the error mount.nfs4: Failed to resolve server us-east-1a.fs-c2aXXXX.efs.us-east-1.amazonaws.com: Name or service not known I am following the instructions provided by AWS. I tried below two ways to do it and both throw the same above error. I can confirm that the DNS names are correct. 1st: mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-c2aXXXX.efs.us-east-1.amazonaws.com:/ efs 2nd: mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 $(curl -s [URL] /efs However, if I use IP instead of DNS names, I am able to mount it just fine. So below command works. mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport 10.38.X.XX:/ /efs I am fine with using IP instead of DNS as long as I am able to mount it. Now my issue is as soon as I stop and start the instance again, my mount is gone. Even after I add the below entry to the /etc/fstab , it doesn't do auto mount. 10.38.X.XXX:/ /efs efs defaults,_netdev 0 0 Can someone please help me in either resolving the issue with DNS or tell me how to auto mount using IPs?","amazon-web-services, amazon-ec2, redhat, amazon-efs",20,33793,3,https://stackoverflow.com/questions/53011147/cant-resolve-dns-name-of-efs-while-mounting-it-on-red-hat-ec2-instances-using-p 53436443,Azure RedHat vm yum update fails with "SSL peer rejected your certificate as expired.","I just started a Standard RedHat 7 VM on Azure. I login and type: sudo yum update and get: Loaded plugins: langpacks, product-id, search-disabled-repos [URL] [Errno 14] curl#58 - ""SSL peer rejected your certificate as expired."" Trying other mirror. [URL] [Errno 14] curl#58 - ""SSL peer rejected your certificate as expired."" Trying other mirror. ... I thought that the PAYG license includes updates? Or is the current image broken? Tried the 7.4 image too?","Azure RedHat vm yum update fails with "SSL peer rejected your certificate as expired." 
I just started a Standard RedHat 7 VM on Azure. I login and type: sudo yum update and get: Loaded plugins: langpacks, product-id, search-disabled-repos [URL] [Errno 14] curl#58 - ""SSL peer rejected your certificate as expired."" Trying other mirror. [URL] [Errno 14] curl#58 - ""SSL peer rejected your certificate as expired."" Trying other mirror. ... I thought that the PAYG license includes updates? Or is the current image broken? Tried the 7.4 image too?","azure, redhat, yum",19,28246,6,https://stackoverflow.com/questions/53436443/azure-redhat-vm-yum-update-fails-with-ssl-peer-rejected-your-certificate-as-exp 28201475,How do I fix a PostgreSQL 9.3 Slave that Cannot Keep Up with the Master?,"We have a master-slave replication configuration as follows. On the master: postgresql.conf has replication configured as follows (commented line taken out for brevity): max_wal_senders = 1 wal_keep_segments = 8 On the slave: Same postgresql.conf as on the master. recovery.conf looks like this: standby_mode = 'on' primary_conninfo = 'host=master1 port=5432 user=replication password=replication' trigger_file = '/tmp/postgresql.trigger.5432' When this was initially set up, we performed some simple tests and confirmed the replication was working. However, when we did the initial data load, only some of the data made it to the slave. 
Slave's log is now filled with messages that look like this: < 2015-01-23 23:59:47.241 EST >LOG: started streaming WAL from primary at F/52000000 on timeline 1 < 2015-01-23 23:59:47.241 EST >FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 000000010000000F00000052 has already been removed < 2015-01-23 23:59:52.259 EST >LOG: started streaming WAL from primary at F/52000000 on timeline 1 < 2015-01-23 23:59:52.260 EST >FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 000000010000000F00000052 has already been removed < 2015-01-23 23:59:57.270 EST >LOG: started streaming WAL from primary at F/52000000 on timeline 1 < 2015-01-23 23:59:57.270 EST >FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 000000010000000F00000052 has already been removed After some analysis and help on the #postgresql IRC channel, I've come to the conclusion that the slave cannot keep up with the master. My proposed solution is as follows. On the master: Set max_wal_senders=5 Set wal_keep_segments=4000 . Yes I know it is very high, but I'd like to monitor the situation and see what happens. I have room on the master. On the slave: Save configuration files in the data directory (i.e. pg_hba.conf pg_ident.conf postgresql.conf recovery.conf ) Clear out the data directory ( rm -rf /var/lib/pgsql/9.3/data/* ) . This seems to be required by pg_basebackup . Run the following command: pg_basebackup -h master -D /var/lib/pgsql/9.3/data --username=replication --password Am I missing anything ? Is there a better way to bring the slave up-to-date w/o having to reload all the data ? Any help is greatly appreciated.","How do I fix a PostgreSQL 9.3 Slave that Cannot Keep Up with the Master? We have a master-slave replication configuration as follows. 
On the master: postgresql.conf has replication configured as follows (commented line taken out for brevity): max_wal_senders = 1 wal_keep_segments = 8 On the slave: Same postgresql.conf as on the master. recovery.conf looks like this: standby_mode = 'on' primary_conninfo = 'host=master1 port=5432 user=replication password=replication' trigger_file = '/tmp/postgresql.trigger.5432' When this was initially set up, we performed some simple tests and confirmed the replication was working. However, when we did the initial data load, only some of the data made it to the slave. Slave's log is now filled with messages that look like this: < 2015-01-23 23:59:47.241 EST >LOG: started streaming WAL from primary at F/52000000 on timeline 1 < 2015-01-23 23:59:47.241 EST >FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 000000010000000F00000052 has already been removed < 2015-01-23 23:59:52.259 EST >LOG: started streaming WAL from primary at F/52000000 on timeline 1 < 2015-01-23 23:59:52.260 EST >FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 000000010000000F00000052 has already been removed < 2015-01-23 23:59:57.270 EST >LOG: started streaming WAL from primary at F/52000000 on timeline 1 < 2015-01-23 23:59:57.270 EST >FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 000000010000000F00000052 has already been removed After some analysis and help on the #postgresql IRC channel, I've come to the conclusion that the slave cannot keep up with the master. My proposed solution is as follows. On the master: Set max_wal_senders=5 Set wal_keep_segments=4000 . Yes I know it is very high, but I'd like to monitor the situation and see what happens. I have room on the master. On the slave: Save configuration files in the data directory (i.e. pg_hba.conf pg_ident.conf postgresql.conf recovery.conf ) Clear out the data directory ( rm -rf /var/lib/pgsql/9.3/data/* ) . This seems to be required by pg_basebackup . 
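As a sanity check on the wal_keep_segments=4000 proposed above (simple arithmetic, assuming PostgreSQL's default 16 MB WAL segment size), the disk cost on the master is easy to bound:

```shell
# Each WAL segment is 16 MB by default, so the retention cost is easy to bound:
segments=4000
mb=$(( segments * 16 ))
echo "wal_keep_segments=${segments} pins up to ${mb} MB (~$(( mb / 1024 )) GB) in pg_xlog"
# prints: wal_keep_segments=4000 pins up to 64000 MB (~62 GB) in pg_xlog
```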
Run the following command: pg_basebackup -h master -D /var/lib/pgsql/9.3/data --username=replication --password Am I missing anything ? Is there a better way to bring the slave up-to-date w/o having to reload all the data ? Any help is greatly appreciated.","postgresql, replication, redhat",18,36067,5,https://stackoverflow.com/questions/28201475/how-do-i-fix-a-postgresql-9-3-slave-that-cannot-keep-up-with-the-master 28994041,"IPython notebook always shows "kernel starting, please wait..."","platform: redhat x64, installed ipython notebook 3.0 through pyvenv-3.4 When I open a notebook, it always shows ""kernel starting, please wait..."". But I can open IPython console. Please help, thanks!","IPython notebook always shows "kernel starting, please wait..." platform: redhat x64, installed ipython notebook 3.0 through pyvenv-3.4 When I open a notebook, it always shows ""kernel starting, please wait..."". But I can open IPython console. Please help, thanks!","python-3.x, redhat, jupyter-notebook",18,22194,4,https://stackoverflow.com/questions/28994041/ipython-notebook-always-shows-kernel-starting-please-wait 27439910,Why does cgroup’s memory subsystem use the oom-killer instead of returning a memory allocation failure when a process allocates memory beyond the cgroup limit?,We use cgroups to limit how much of a resource a process may use. But when memory usage exceeds the cgroup limit, the kernel kills the process. Why does cgroup’s memory subsystem use the oom-killer instead of returning a memory allocation failure when a process allocates memory beyond the cgroup limit?,Why does cgroup’s memory subsystem use the oom-killer instead of returning a memory allocation failure when a process allocates memory beyond the cgroup limit? 
We use cgroups to limit how much of a resource a process may use. But when memory usage exceeds the cgroup limit, the kernel kills the process. Why does cgroup’s memory subsystem use the oom-killer instead of returning a memory allocation failure when a process allocates memory beyond the cgroup limit?,"linux, linux-kernel, redhat, cgroups",18,10083,1,https://stackoverflow.com/questions/27439910/why-cgroup-s-memory-subsystem-use-oom-killer-instead-of-return-memory-allocation 27778593,Installing nodejs on Red Hat,I am trying to install node.js on Red Hat Enterprise Linux Server release 6.1 using the following command: sudo yum install nodejs npm I got the following error: Error: Package: nodejs-0.10.24-1.el6.x86_64 (epel) Requires: libssl.so.10(libssl.so.10)(64bit) Error: Package: nodejs-devel-0.10.24-1.el6.x86_64 (epel) Requires: libcrypto.so.10(libcrypto.so.10)(64bit) Error: Package: nodejs-0.10.24-1.el6.x86_64 (epel) Requires: libcrypto.so.10(libcrypto.so.10)(64bit) Error: Package: nodejs-devel-0.10.24-1.el6.x86_64 (epel) Requires: libssl.so.10(libssl.so.10)(64bit) You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest I tried the following command as well: sudo yum install -y nodejs I am getting the following error: Error: Package: nodejs-0.10.24-1.el6.x86_64 (epel) Requires: libssl.so.10(libssl.so.10)(64bit) Error: Package: nodejs-0.10.24-1.el6.x86_64 (epel) Requires: libcrypto.so.10(libcrypto.so.10)(64bit) How should I install it? 
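For the Node.js question above, the libssl.so.10/libcrypto.so.10 errors mean the EPEL build links against a newer openssl than the 6.1 install ships, so updating openssl first (or pulling Node from a vendor repo) is the usual route. A sketch; the NodeSource setup URL and the major version in it are assumptions that change over time:

```shell
# Sketch: update the openssl runtime the EPEL package links against, then install.
# The NodeSource setup script (version in the URL is illustrative) is an
# alternative repo that ships current Node builds.
install_node_rhel() {
    sudo yum update -y openssl &&
    curl -fsSL https://rpm.nodesource.com/setup_16.x | sudo bash - &&
    sudo yum install -y nodejs
}
```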
I want to install the latest version.,Installing nodejs on Red Hat I am trying to install node.js on Red Hat Enterprise Linux Server release 6.1 using the following command: sudo yum install nodejs npm I got the following error: Error: Package: nodejs-0.10.24-1.el6.x86_64 (epel) Requires: libssl.so.10(libssl.so.10)(64bit) Error: Package: nodejs-devel-0.10.24-1.el6.x86_64 (epel) Requires: libcrypto.so.10(libcrypto.so.10)(64bit) Error: Package: nodejs-0.10.24-1.el6.x86_64 (epel) Requires: libcrypto.so.10(libcrypto.so.10)(64bit) Error: Package: nodejs-devel-0.10.24-1.el6.x86_64 (epel) Requires: libssl.so.10(libssl.so.10)(64bit) You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest I tried the following command as well: sudo yum install -y nodejs I am getting the following error: Error: Package: nodejs-0.10.24-1.el6.x86_64 (epel) Requires: libssl.so.10(libssl.so.10)(64bit) Error: Package: nodejs-0.10.24-1.el6.x86_64 (epel) Requires: libcrypto.so.10(libcrypto.so.10)(64bit) How should I install it? I want to install the latest version.,"node.js, redhat, yum",17,81357,9,https://stackoverflow.com/questions/27778593/installing-nodejs-on-red-hat 20112355,Apache 2.4.x manual build and install on RHEL 6.4,"OS: Red Hat Enterprise Linux Server release 6.4 (Santiago) The current yum installation of apache on this OS is 2.2.15. I require the latest 2.4.x branch so have gone about installing it manually. 
I have noted the complete procedure I undertook, including unpacking apr and apr-util sources into the apache sources beforehand, but I guess the following is the most important part of the procedure: GATHER LATEST APACHE AND APR $ cd ~ $ mkdir apache-src $ cd apache-src $ wget [URL] $ tar xvf httpd-2.4.6.tar.gz $ cd httpd-2.4.6 $ cd srclib $ wget [URL] $ tar -xvzf apr-1.5.0.tar.gz $ mv apr-1.5.0 apr $ rm -f apr-1.5.0.tar.gz $ wget [URL] $ tar -xvzf apr-util-1.5.3.tar.gz $ mv apr-util-1.5.3 apr-util INSTALL DEVEL PACKAGES yum update --skip-broken (There is a dependency issue with the latest Chrome needing the latest libstdc++, which is not available for RHEL and CentOS) yum install apr-devel yum install apr-util-devel yum install pcre-devel INSTALL $ cd ~/apache-src/httpd-2.4.6 $ ./configure --prefix=/etc/httpd --enable-mods-shared=""all"" --enable-rewrite --with-included-apr $ make $ make install NOTE: At the time of running the above, /etc/http is empty. This seems to have gone fine until I attempt to start the httpd service. It seems that every module included in httpd.conf fails with a message similar to this one for mod_rewrite : httpd: Syntax error on line 148 of /etc/httpd/conf/httpd.conf: Cannot load /etc/httpd/modules/mod_rewrite.so into server: /etc/httpd/modules/mod_rewrite.so: undefined symbol: ap_global_mutex_create I've gone right through the list of enabled modules in httpd.conf and commented them out one at a time. All trigger an error as above, however the ""undefined symbol: value"" is often different (so not always ap_global_mutex_create ). Am I missing a step? Although I find some portion of that error on Google, most of the solutions centre around the .so files not being reachable. That doesn't seem to be an issue here and the modules are present in /etc/http/modules . 
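One frequent cause of exactly these "undefined symbol" failures (a hedged guess that fits the symptoms: ap_global_mutex_create exists in the 2.4 core but not in 2.2) is that the service script still launches the distro's old 2.2 /usr/sbin/httpd, which then tries to load the freshly built 2.4 modules. A diagnostic sketch, using the prefix from the configure line above:

```shell
# Diagnostic sketch: confirm which binary actually runs. The 2.4 build was
# installed under the question's --prefix=/etc/httpd; the distro 2.2 binary
# lives at /usr/sbin/httpd and cannot load 2.4 modules.
check_httpd_mix() {
    /usr/sbin/httpd -v 2>/dev/null          # distro binary, likely 2.2.15
    /etc/httpd/bin/httpd -v                 # freshly built binary, should say 2.4.6
    /etc/httpd/bin/apachectl configtest     # parse httpd.conf with the new binary
}
```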
NOTE: At the time of running the above, /etc/http is empty.","Apache 2.4.x manual build and install on RHEL 6.4 OS: Red Hat Enterprise Linux Server release 6.4 (Santiago) The current yum installation of apache on this OS is 2.2.15. I require the latest 2.4.x branch so have gone about installing it manually. I have noted the complete procedure I undertook, including unpacking apr and apr-util sources into the apache sources beforehand, but I guess the following is the most important part of the procedure: GATHER LATEST APACHE AND APR $ cd ~ $ mkdir apache-src $ cd apache-src $ wget [URL] $ tar xvf httpd-2.4.6.tar.gz $ cd httpd-2.4.6 $ cd srclib $ wget [URL] $ tar -xvzf apr-1.5.0.tar.gz $ mv apr-1.5.0 apr $ rm -f apr-1.5.0.tar.gz $ wget [URL] $ tar -xvzf apr-util-1.5.3.tar.gz $ mv apr-util-1.5.3 apr-util INSTALL DEVEL PACKAGES yum update --skip-broken (There is a dependency issue with the latest Chrome needing the latest libstdc++, which is not available for RHEL and CentOS) yum install apr-devel yum install apr-util-devel yum install pcre-devel INSTALL $ cd ~/apache-src/httpd-2.4.6 $ ./configure --prefix=/etc/httpd --enable-mods-shared=""all"" --enable-rewrite --with-included-apr $ make $ make install NOTE: At the time of running the above, /etc/http is empty. This seems to have gone fine until I attempt to start the httpd service. It seems that every module include in httpd.conf fails with a message similar to this one for mod_rewrite : httpd: Syntax error on line 148 of /etc/httpd/conf/httpd.conf: Cannot load /etc/httpd/modules/mod_rewrite.so into server: /etc/httpd/modules/mod_rewrite.so: undefined symbol: ap_global_mutex_create I've gone right through the list of enabled modules in httpd.conf and commented them out one at a time. All trigger an error as above, however the ""undefined symbol: value"" is often different (so not always ap_global_mutex_create ). Am I missing a step? 
Although I find some portion of that error on Google, most of the solutions centre around the .so files not being reachable. That doesn't seem to be an issue here and the modules are present in /etc/http/modules . NOTE: At the time of running the above, /etc/http is empty.","apache, apache2, redhat, rhel",17,48878,2,https://stackoverflow.com/questions/20112355/apache-2-4-x-manual-build-and-install-on-rhel-6-4 47826123,What is systemd PID file?,"I want to run a jar file as a daemon. So I have written a shell script to ""start|stop|restart"" the daemon. I didn't get a chance to check its working status. Can I use this script without creating a PID file? Why do we need a PID file at all? In which case should we use a PID file? Below is my UNIT file.
[Unit] Description=myApp After=network.target [Service] Environment=JAVA_HOME=/opt/java/jdk8 Environment=CATALINA_HOME=/opt/myApp/ User=nzpap Group=ngpap ExecStart=/kohls/apps/myApp/myapp-scripts/myapp-deploy.sh Restart=always [Install] WantedBy=multi-user.target I did not gain much info about the PID concept by browsing the internet.","linux, redhat, centos7, systemd",17,51565,4,https://stackoverflow.com/questions/47826123/what-is-systemd-pid-file 33360920,dd command error writing No space left on device,"I am new to storage and am trying to erase the data on the device '/dev/sdcd'. Why do I get a 'No space left' error? [root@ dev]# dd if=/dev/zero of=/dev/sdcd bs=4k dd: error writing ‘/dev/sdcd’: No space left on device 1310721+0 records in 1310720+0 records out 5368709120 bytes (5.4 GB) copied, 19.7749 s, 271 MB/s [root@ dev]# ls -l /dev/null crw-rw-rw-. 1 root root 1, 3 Oct 27 01:35 /dev/null If this is a very basic question, I am sorry about that","dd command error writing No space left on device I am new to storage and am trying to erase the data on the device '/dev/sdcd'. Why do I get a 'No space left' error? [root@ dev]# dd if=/dev/zero of=/dev/sdcd bs=4k dd: error writing ‘/dev/sdcd’: No space left on device 1310721+0 records in 1310720+0 records out 5368709120 bytes (5.4 GB) copied, 19.7749 s, 271 MB/s [root@ dev]# ls -l /dev/null crw-rw-rw-. 1 root root 1, 3 Oct 27 01:35 /dev/null If this is a very basic question, I am sorry about that","linux, linux-device-driver, redhat",17,32408,1,https://stackoverflow.com/questions/33360920/dd-command-error-writing-no-space-left-on-device 12584762,mysql_connect(): No such file or directory,"I have just installed a MySQL server (version 3.23.58) on an old RedHat7. I cannot install a more recent MySQL version because of the dependencies. I cannot update libraries on this RedHat server. However, I have a problem connecting to the database with PHP. First I used PDO but I realized that PDO was not compatible with MySQL 3.23...
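On the dd question above: the ENOSPC is expected, not a failure. Writing /dev/zero over a raw block device fills it end to end, and dd necessarily stops with "No space left on device" at the last block; the "5368709120 bytes copied" line says the roughly 5 GB device was wiped completely. A small sketch that reproduces the same errno harmlessly via /dev/full:

```shell
# /dev/full is a pseudo-device that returns ENOSPC on every write, so it
# shows the exact error dd reports when a real device fills up.
dd if=/dev/zero of=/dev/full bs=4k count=1 2>&1 | head -n 1
# Prints something like: dd: error writing '/dev/full': No space left on device
```

On a real device, dd exiting with this error after writing the device's full capacity is the sign the wipe succeeded.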
So I used mysql_connect() . Now I have the following error: Warning: mysql_connect(): No such file or directory in /user/local/apache/htdocs/php/database.php on line 9 Error: No such file or directory My code is: $host = 'localhost'; $user = 'root'; $password = ''; $database = 'test'; $db = mysql_connect($host, $user, $password) or die('Error : ' . mysql_error()); mysql_select_db($database); I checked twice that the database exists and the login and password are correct. This is strange because the code works fine on my Windows PC with Wampp. I cannot figure out where the problem comes from. Any idea?","mysql_connect(): No such file or directory I have just installed a MySQL server (version 3.23.58) on an old RedHat7. I cannot install a more recent MySQL version because of the dependencies. I cannot update librairies on this RedHat server. However, I have a problem connecting to the database with PHP. First I used PDO but I realized that PDO was not compatible with MySQL 3.23... So I used mysql_connect() . Now I have the following error: Warning: mysql_connect(): No such file or directory in /user/local/apache/htdocs/php/database.php on line 9 Error: No such file or directory My code is: $host = 'localhost'; $user = 'root'; $password = ''; $database = 'test'; $db = mysql_connect($host, $user, $password) or die('Error : ' . mysql_error()); mysql_select_db($database); I checked twice that the database exists and the login and password are correct. This is strange because the code works fine on my Windows PC with Wampp. I cannot figure out where the problem comes from. 
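A hedged reading of the mysql_connect() warning above: with host 'localhost', PHP's MySQL client connects over a Unix domain socket rather than TCP, and "No such file or directory" is the raw ENOENT for a socket path that does not exist (on Red Hat builds the server's socket often lives at /var/lib/mysql/mysql.sock, but that path is an assumption here). Pointing mysql.default_socket in php.ini at the real socket file, or connecting to 127.0.0.1 to force a TCP connection, are the usual fixes. A tiny sketch of the underlying errno:

```shell
# Opening a path that is absent yields the same ENOENT text PHP relays.
# /nonexistent/mysql.sock is a deliberately fake path for illustration.
ls /nonexistent/mysql.sock 2>&1 | grep -o 'No such file or directory'
# To locate the real socket on a running server (illustrative):
#   mysqladmin variables | grep socket
```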
Any idea?","php, mysql, redhat, mysql-connect",16,86156,6,https://stackoverflow.com/questions/12584762/mysql-connect-no-such-file-or-directory 68223306,Execute multiple commands with && in systemd service ExecStart on RedHat 7.9,"I have this systemd service on Red Hat Enterprise Linux Server 7.9 (Maipo) [Unit] Description = EUM Server Service PartOf=eum.service # Start this unit after the app.service start After=eum.service After=eum-db.service [Service] Type=forking User=root WorkingDirectory=/prod/appdynamics/EUMServer/eum-processor/ ExecStart=/usr/bin/sleep 45 && /bin/bash bin/eum.sh start RemainAfterExit=true ExecStop=/bin/bash bin/eum.sh stop [Install] WantedBy=multi-user.target that fails because it tries to pick everything after /usr/bin/sleep as parameters to that command. I just want to execute the /usr/bin/sleep 45 and on success execute bin/eum.sh start . How can I make it work? ● eum-server.service - EUM Server Service Loaded: loaded (/etc/systemd/system/eum-server.service; enabled; vendor preset: disabled) Active: failed (Result: exit-code) since Fri 2021-07-02 00:00:53 CEST; 9min ago Process: 13860 ExecStart=/usr/bin/sleep 45 && /bin/bash bin/eum.sh start (code=exited, status=1/FAILURE) Jul 02 00:00:53 lmlift06mnp001 systemd[1]: Starting EUM Server Service... Jul 02 00:00:53 lmlift06mnp001 sleep[13860]: /usr/bin/sleep: invalid time interval ‘&&’ Jul 02 00:00:53 lmlift06mnp001 sleep[13860]: /usr/bin/sleep: invalid time interval ‘/bin/bash’ Jul 02 00:00:53 lmlift06mnp001 sleep[13860]: /usr/bin/sleep: invalid time interval ‘bin/eum.sh’ Jul 02 00:00:53 lmlift06mnp001 sleep[13860]: /usr/bin/sleep: invalid time interval ‘start’ Jul 02 00:00:53 lmlift06mnp001 sleep[13860]: Try '/usr/bin/sleep --help' for more information. Jul 02 00:00:53 lmlift06mnp001 systemd[1]: eum-server.service: control process exited, code=exited status=1 Jul 02 00:00:53 lmlift06mnp001 systemd[1]: Failed to start EUM Server Service. 
Jul 02 00:00:53 lmlift06mnp001 systemd[1]: Unit eum-server.service entered failed state. Jul 02 00:00:53 lmlift06mnp001 systemd[1]: eum-server.service failed.","Execute multiple commands with && in systemd service ExecStart on RedHat 7.9 I have this systemd service on Red Hat Enterprise Linux Server 7.9 (Maipo) [Unit] Description = EUM Server Service PartOf=eum.service # Start this unit after the app.service start After=eum.service After=eum-db.service [Service] Type=forking User=root WorkingDirectory=/prod/appdynamics/EUMServer/eum-processor/ ExecStart=/usr/bin/sleep 45 && /bin/bash bin/eum.sh start RemainAfterExit=true ExecStop=/bin/bash bin/eum.sh stop [Install] WantedBy=multi-user.target that fails because it tries to pick everything after /usr/bin/sleep as parameters to that command. I just want to execute the /usr/bin/sleep 45 and on success execute bin/eum.sh start . How can I make it work? ● eum-server.service - EUM Server Service Loaded: loaded (/etc/systemd/system/eum-server.service; enabled; vendor preset: disabled) Active: failed (Result: exit-code) since Fri 2021-07-02 00:00:53 CEST; 9min ago Process: 13860 ExecStart=/usr/bin/sleep 45 && /bin/bash bin/eum.sh start (code=exited, status=1/FAILURE) Jul 02 00:00:53 lmlift06mnp001 systemd[1]: Starting EUM Server Service... Jul 02 00:00:53 lmlift06mnp001 sleep[13860]: /usr/bin/sleep: invalid time interval ‘&&’ Jul 02 00:00:53 lmlift06mnp001 sleep[13860]: /usr/bin/sleep: invalid time interval ‘/bin/bash’ Jul 02 00:00:53 lmlift06mnp001 sleep[13860]: /usr/bin/sleep: invalid time interval ‘bin/eum.sh’ Jul 02 00:00:53 lmlift06mnp001 sleep[13860]: /usr/bin/sleep: invalid time interval ‘start’ Jul 02 00:00:53 lmlift06mnp001 sleep[13860]: Try '/usr/bin/sleep --help' for more information. Jul 02 00:00:53 lmlift06mnp001 systemd[1]: eum-server.service: control process exited, code=exited status=1 Jul 02 00:00:53 lmlift06mnp001 systemd[1]: Failed to start EUM Server Service. 
Jul 02 00:00:53 lmlift06mnp001 systemd[1]: Unit eum-server.service entered failed state. Jul 02 00:00:53 lmlift06mnp001 systemd[1]: eum-server.service failed.","redhat, systemd",16,22481,1,https://stackoverflow.com/questions/68223306/execute-multiple-commands-with-in-systemd-service-execstart-on-redhat-7-9 25379410,Ping Service to stop OpenShift Application from IDLE?,I am running a lightweight API in the OpenShift Cloud. I just realized that after 48h the application goes into IDLE mode. Is there kind of a ping service to avoid this issue? best M,Ping Service to stop OpenShift Application from IDLE? I am running a lightweight API in the OpenShift Cloud. I just realized that after 48h the application goes into IDLE mode. Is there kind of a ping service to avoid this issue? best M,"cloud, openshift, redhat",16,6394,2,https://stackoverflow.com/questions/25379410/ping-service-to-stop-openshift-application-from-idle 8258647,RPM - Install time parameters,"I have packaged my application into an RPM package, say, myapp.rpm . While installing this application, I would like to receive some inputs from the user (an example for input could be - environment where the app is getting installed - ""dev"", ""qa"", ""uat"", ""prod""). Based on the input, the application will install the appropriate files. Is there a way to pass parameters while installing the application? P.S.: A possible solution could be to create an RPM package for each environment. However, in our scenario, this is not a viable option since we have around 20 environments and we do not wish to have 20 different packages for the same application.","RPM - Install time parameters I have packaged my application into an RPM package, say, myapp.rpm . While installing this application, I would like to receive some inputs from the user (an example for input could be - environment where the app is getting installed - ""dev"", ""qa"", ""uat"", ""prod""). 
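On the ExecStart failure above: systemd executes the ExecStart= command line directly, without a shell, so '&&' reaches sleep as a literal argument, which is exactly what the "invalid time interval '&&'" log lines show. Wrapping the command list in an explicit shell, or moving the delay into ExecStartPre=, resolves it; the unit lines below are a sketch based on the question's paths:

```shell
# Inside the unit file (illustrative):
#   ExecStart=/bin/bash -c '/usr/bin/sleep 45 && /bin/bash bin/eum.sh start'
# or, equivalently:
#   ExecStartPre=/usr/bin/sleep 45
#   ExecStart=/bin/bash bin/eum.sh start
# The shell semantics of '&&' (run the second command only if the first
# succeeds), demonstrated outside systemd with a short sleep:
bash -c 'sleep 1 && echo started'
```

After editing the unit, `systemctl daemon-reload` is needed before restarting the service.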
Based on the input, the application will install the appropriate files. Is there a way to pass parameters while installing the application? P.S.: A possible solution could be to create an RPM package for each environment. However, in our scenario, this is not a viable option since we have around 20 environments and we do not wish to have 20 different packages for the same application.","linux, unix, build, redhat, rpm",16,17097,3,https://stackoverflow.com/questions/8258647/rpm-install-time-parameters 18827396,UnicodeDecodeError: 'ascii' codec can't decode byte 0xe7 in position 0: ordinal not in range(128),"I'm having troubles in encoding characters in utf-8. I'm using Django, and I get this error when I tried to send an Android notification with non-plain text. I tried to find where the source of the error and I managed to figure out that the source of the error is not in my project. In python shell, I type: 'ç'.encode('utf8') and I get this error: Traceback (most recent call last): File """", line 1, in UnicodeDecodeError: 'ascii' codec can't decode byte 0xe7 in position 0: ordinal not in range(128) I get the same errors with: 'á'.encode('utf-8') unicode('ç') 'ç'.encode('utf-8','ignore') I get errors with smart_text, force_text and smart_bytes too. Is that a problem with Python, my OS, or another thing? I'm running Python 2.6.6 on a Red Hat version 4.4.7-3","UnicodeDecodeError: 'ascii' codec can't decode byte 0xe7 in position 0: ordinal not in range(128) I'm having troubles in encoding characters in utf-8. I'm using Django, and I get this error when I tried to send an Android notification with non-plain text. I tried to find where the source of the error and I managed to figure out that the source of the error is not in my project. 
In python shell, I type: 'ç'.encode('utf8') and I get this error: Traceback (most recent call last): File """", line 1, in UnicodeDecodeError: 'ascii' codec can't decode byte 0xe7 in position 0: ordinal not in range(128) I get the same errors with: 'á'.encode('utf-8') unicode('ç') 'ç'.encode('utf-8','ignore') I get errors with smart_text, force_text and smart_bytes too. Is that a problem with Python, my OS, or another thing? I'm running Python 2.6.6 on a Red Hat version 4.4.7-3","python, django, encoding, utf-8, redhat",16,37562,2,https://stackoverflow.com/questions/18827396/unicodedecodeerror-ascii-codec-cant-decode-byte-0xe7-in-position-0-ordinal 40898077,systemd systemctl stop aggressively kills subprocesses,I've a daemon-like process that starts two subprocesses (and one of the subprocesses starts ~10 others). When I systemctl stop my process the child subprocesses appear to be 'aggressively' killed by systemctl - which doesn't give my process a chance to clean up. How do I get systemctl stop to quit the aggressive kill and thus to allow my process to orchestrate an orderly clean up? I tried timeoutSec=30 to no avail.,systemd systemctl stop aggressively kills subprocesses I've a daemon-like process that starts two subprocesses (and one of the subprocesses starts ~10 others). When I systemctl stop my process the child subprocesses appear to be 'aggressively' killed by systemctl - which doesn't give my process a chance to clean up. How do I get systemctl stop to quit the aggressive kill and thus to allow my process to orchestrate an orderly clean up? I tried timeoutSec=30 to no avail.,"redhat, systemd, systemctl",16,22582,2,https://stackoverflow.com/questions/40898077/systemd-systemctl-stop-aggressively-kills-subprocesses 6902254,stdlib.h: no such file or directory,"I am using various stdlib functions like srand(), etc. I have the line #include at the top of my code. 
I entered this on the command line: # find / -name stdlib.h find: `/home/dmurvihill/.gvfs: permission denied /usr/include/stdlib.h /usr/include/bits/stdlib.h So, stdlib.h is clearly in /usr/include. My preprocessor: # gcc -print-prog-name=cc1 /usr/libexec/gcc/x86_64-redhat-linux/4.5.1/cc1 My preprocessor's default search path: # /usr/libexec/gcc/x86_64-redhat-linux/4.5.1/cc1 -v ignoring nonexistent directory ""/usr/lib/gcc/x86_64-redhat-linux/4.5.1/include-fixed"" ignoring nonexistent directory ""/usr/lib/gcc/x86_64-redhat-linux/4.5.1/../../../../x86_64-redhat-linux/include"" #include ""..."" search starts here: #include <...> search starts here: /usr/local/include /usr/lib/gcc/x86_64-redhat-linux/4.5.1/include /usr/include End of search list. So, stdlib.h is clearly in /usr/include, which is most definitely supposed to be searched by my preprocessor, but I still get this error! /path/to/cpa_sample_code_main.c:15:20: fatal error: stdlib.h: No such file or directory compilation terminated Update A program I wrote to test this code: #include #include #include int main() { printf(""Hello, World!\n""); printf(""Getting time...\n""); time_t seconds; time(&seconds); printf(""Seeding generator...\n""); srand((unsigned int)seconds); printf(""Getting random number...\n""); int value = rand(); printf(""It is %d!"",value); printf(""Goodbye, cruel world!""); return 0; } The command gcc -H -v -fsyntax-only stdlib_test.c output Using built-in specs. 
COLLECT_GCC=gcc COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/4.5.1/lto-wrapper Target: x86_64-redhat-linux Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=[URL] --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --enable-languages=c,c++,objc,obj-c++,java,fortran,ada,lto --enable-plugin --enable-java-awt=gtk --disable-dssi --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-1.5.0.0/jre --enable-libgcj-multifile --enable-java-maintainer-mode --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --disable-libjava-multilib --with-ppl --with-cloog --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux Thread model: posix gcc version 4.5.1 20100924 (Red Hat 4.5.1-4) (GCC) COLLECT_GCC_OPTIONS='-H' '-v' '-fsyntax-only' '-mtune=generic' '-march=x86-64' /usr/libexec/gcc/x86_64-redhat-linux/4.5.1/cc1 -quiet -v -H /CRF_Verify/stdlib_test.c -quiet -dumpbase stdlib_test.c -mtune=generic -march=x86-64 -auxbase stdlib_test -version -fsyntax-only -o /dev/null GNU C (GCC) version 4.5.1 20100924 (Red Hat 4.5.1-4) (x86_64-redhat-linux) compiled by GNU C version 4.5.1 20100924 (Red Hat 4.5.1-4), GMP version 4.3.1, MPFR version 2.4.2, MPC version 0.8.1 GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072 ignoring nonexistent directory ""/usr/lib/gcc/x86_64-redhat-linux/4.5.1/include-fixed"" ignoring nonexistent directory ""/usr/lib/gcc/x86_64-redhat-linux/4.5.1/../../../../x86_64-redhat-linux/include"" #include ""..."" search starts here: #include <...> search starts here: /usr/local/include /usr/lib/gcc/x86_64-redhat-linux/4.5.1/include /usr/include End of search list. 
GNU C (GCC) version 4.5.1 20100924 (Red Hat 4.5.1-4) (x86_64-redhat-linux) compiled by GNU C version 4.5.1 20100924 (Red Hat 4.5.1-4), GMP version 4.3.1, MPFR version 2.4.2, MPC version 0.8.1 GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072 Compiler executable checksum: ea394b69293dd698607206e8e43d607e . /usr/include/stdio.h .. /usr/include/features.h ... /usr/include/sys/cdefs.h .... /usr/include/bits/wordsize.h ... /usr/include/gnu/stubs.h .... /usr/include/bits/wordsize.h .... /usr/include/gnu/stubs-64.h .. /usr/lib/gcc/x86_64-redhat-linux/4.5.1/include/stddef.h .. /usr/include/bits/types.h ... /usr/include/bits/wordsize.h ... /usr/include/bits/typesizes.h .. /usr/include/libio.h ... /usr/include/_G_config.h .... /usr/lib/gcc/x86_64-redhat-linux/4.5.1/include/stddef.h .... /usr/include/wchar.h ... /usr/lib/gcc/x86_64-redhat-linux/4.5.1/include/stdarg.h .. /usr/include/bits/stdio_lim.h .. /usr/include/bits/sys_errlist.h . /usr/include/stdlib.h .. /usr/lib/gcc/x86_64-redhat-linux/4.5.1/include/stddef.h .. /usr/include/bits/waitflags.h .. /usr/include/bits/waitstatus.h ... /usr/include/endian.h .... /usr/include/bits/endian.h .... /usr/include/bits/byteswap.h ..... /usr/include/bits/wordsize.h .. /usr/include/sys/types.h ... /usr/include/time.h ... /usr/lib/gcc/x86_64-redhat-linux/4.5.1/include/stddef.h ... /usr/include/sys/select.h .... /usr/include/bits/select.h ..... /usr/include/bits/wordsize.h .... /usr/include/bits/sigset.h .... /usr/include/time.h .... /usr/include/bits/time.h ... /usr/include/sys/sysmacros.h ... /usr/include/bits/pthreadtypes.h .... /usr/include/bits/wordsize.h .. /usr/include/alloca.h ... /usr/lib/gcc/x86_64-redhat-linux/4.5.1/include/stddef.h . /usr/include/linux/time.h .. /usr/include/linux/types.h ... /usr/include/asm/types.h .... /usr/include/asm-generic/types.h ..... /usr/include/asm-generic/int-ll64.h ...... /usr/include/asm/bitsperlong.h ....... /usr/include/asm-generic/bitsperlong.h ... 
/usr/include/linux/posix_types.h .... /usr/include/linux/stddef.h .... /usr/include/asm/posix_types.h ..... /usr/include/asm/posix_types_64.h In file included from /CRF_Verify/stdlib_test.c:3:0: /usr/include/linux/time.h:9:8: error: redefinition of ‘struct timespec’ /usr/include/time.h:120:8: note: originally defined here /usr/include/linux/time.h:15:8: error: redefinition of ‘struct timeval’ /usr/include/bits/time.h:75:8: note: originally defined here Multiple include guards may be useful for: /usr/include/asm/posix_types.h /usr/include/bits/byteswap.h /usr/include/bits/endian.h /usr/include/bits/select.h /usr/include/bits/sigset.h /usr/include/bits/stdio_lim.h /usr/include/bits/sys_errlist.h /usr/include/bits/time.h /usr/include/bits/typesizes.h /usr/include/bits/waitflags.h /usr/include/bits/waitstatus.h /usr/include/gnu/stubs-64.h /usr/include/gnu/stubs.h /usr/include/wchar.h","stdlib.h: no such file or directory I am using various stdlib functions like srand(), etc. I have the line #include at the top of my code. I entered this on the command line: # find / -name stdlib.h find: `/home/dmurvihill/.gvfs: permission denied /usr/include/stdlib.h /usr/include/bits/stdlib.h So, stdlib.h is clearly in /usr/include. My preprocessor: # gcc -print-prog-name=cc1 /usr/libexec/gcc/x86_64-redhat-linux/4.5.1/cc1 My preprocessor's default search path: # /usr/libexec/gcc/x86_64-redhat-linux/4.5.1/cc1 -v ignoring nonexistent directory ""/usr/lib/gcc/x86_64-redhat-linux/4.5.1/include-fixed"" ignoring nonexistent directory ""/usr/lib/gcc/x86_64-redhat-linux/4.5.1/../../../../x86_64-redhat-linux/include"" #include ""..."" search starts here: #include <...> search starts here: /usr/local/include /usr/lib/gcc/x86_64-redhat-linux/4.5.1/include /usr/include End of search list. So, stdlib.h is clearly in /usr/include, which is most definitely supposed to be searched by my preprocessor, but I still get this error! 
/path/to/cpa_sample_code_main.c:15:20: fatal error: stdlib.h: No such file or directory compilation terminated Update A program I wrote to test this code: #include #include #include int main() { printf(""Hello, World!\n""); printf(""Getting time...\n""); time_t seconds; time(&seconds); printf(""Seeding generator...\n""); srand((unsigned int)seconds); printf(""Getting random number...\n""); int value = rand(); printf(""It is %d!"",value); printf(""Goodbye, cruel world!""); return 0; } The command gcc -H -v -fsyntax-only stdlib_test.c output Using built-in specs. COLLECT_GCC=gcc COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/4.5.1/lto-wrapper Target: x86_64-redhat-linux Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=[URL] --enable-bootstrap --enable-shared --enable-threads=posix --enable-checking=release --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --enable-languages=c,c++,objc,obj-c++,java,fortran,ada,lto --enable-plugin --enable-java-awt=gtk --disable-dssi --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-1.5.0.0/jre --enable-libgcj-multifile --enable-java-maintainer-mode --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --disable-libjava-multilib --with-ppl --with-cloog --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux Thread model: posix gcc version 4.5.1 20100924 (Red Hat 4.5.1-4) (GCC) COLLECT_GCC_OPTIONS='-H' '-v' '-fsyntax-only' '-mtune=generic' '-march=x86-64' /usr/libexec/gcc/x86_64-redhat-linux/4.5.1/cc1 -quiet -v -H /CRF_Verify/stdlib_test.c -quiet -dumpbase stdlib_test.c -mtune=generic -march=x86-64 -auxbase stdlib_test -version -fsyntax-only -o /dev/null GNU C (GCC) version 4.5.1 20100924 (Red Hat 4.5.1-4) (x86_64-redhat-linux) compiled by GNU C version 4.5.1 20100924 (Red Hat 4.5.1-4), GMP version 4.3.1, MPFR version 2.4.2, MPC version 0.8.1 GGC heuristics: --param ggc-min-expand=100 --param 
ggc-min-heapsize=131072 ignoring nonexistent directory ""/usr/lib/gcc/x86_64-redhat-linux/4.5.1/include-fixed"" ignoring nonexistent directory ""/usr/lib/gcc/x86_64-redhat-linux/4.5.1/../../../../x86_64-redhat-linux/include"" #include ""..."" search starts here: #include <...> search starts here: /usr/local/include /usr/lib/gcc/x86_64-redhat-linux/4.5.1/include /usr/include End of search list. GNU C (GCC) version 4.5.1 20100924 (Red Hat 4.5.1-4) (x86_64-redhat-linux) compiled by GNU C version 4.5.1 20100924 (Red Hat 4.5.1-4), GMP version 4.3.1, MPFR version 2.4.2, MPC version 0.8.1 GGC heuristics: --param ggc-min-expand=100 --param ggc-min-heapsize=131072 Compiler executable checksum: ea394b69293dd698607206e8e43d607e . /usr/include/stdio.h .. /usr/include/features.h ... /usr/include/sys/cdefs.h .... /usr/include/bits/wordsize.h ... /usr/include/gnu/stubs.h .... /usr/include/bits/wordsize.h .... /usr/include/gnu/stubs-64.h .. /usr/lib/gcc/x86_64-redhat-linux/4.5.1/include/stddef.h .. /usr/include/bits/types.h ... /usr/include/bits/wordsize.h ... /usr/include/bits/typesizes.h .. /usr/include/libio.h ... /usr/include/_G_config.h .... /usr/lib/gcc/x86_64-redhat-linux/4.5.1/include/stddef.h .... /usr/include/wchar.h ... /usr/lib/gcc/x86_64-redhat-linux/4.5.1/include/stdarg.h .. /usr/include/bits/stdio_lim.h .. /usr/include/bits/sys_errlist.h . /usr/include/stdlib.h .. /usr/lib/gcc/x86_64-redhat-linux/4.5.1/include/stddef.h .. /usr/include/bits/waitflags.h .. /usr/include/bits/waitstatus.h ... /usr/include/endian.h .... /usr/include/bits/endian.h .... /usr/include/bits/byteswap.h ..... /usr/include/bits/wordsize.h .. /usr/include/sys/types.h ... /usr/include/time.h ... /usr/lib/gcc/x86_64-redhat-linux/4.5.1/include/stddef.h ... /usr/include/sys/select.h .... /usr/include/bits/select.h ..... /usr/include/bits/wordsize.h .... /usr/include/bits/sigset.h .... /usr/include/time.h .... /usr/include/bits/time.h ... /usr/include/sys/sysmacros.h ... 
/usr/include/bits/pthreadtypes.h .... /usr/include/bits/wordsize.h .. /usr/include/alloca.h ... /usr/lib/gcc/x86_64-redhat-linux/4.5.1/include/stddef.h . /usr/include/linux/time.h .. /usr/include/linux/types.h ... /usr/include/asm/types.h .... /usr/include/asm-generic/types.h ..... /usr/include/asm-generic/int-ll64.h ...... /usr/include/asm/bitsperlong.h ....... /usr/include/asm-generic/bitsperlong.h ... /usr/include/linux/posix_types.h .... /usr/include/linux/stddef.h .... /usr/include/asm/posix_types.h ..... /usr/include/asm/posix_types_64.h In file included from /CRF_Verify/stdlib_test.c:3:0: /usr/include/linux/time.h:9:8: error: redefinition of ‘struct timespec’ /usr/include/time.h:120:8: note: originally defined here /usr/include/linux/time.h:15:8: error: redefinition of ‘struct timeval’ /usr/include/bits/time.h:75:8: note: originally defined here Multiple include guards may be useful for: /usr/include/asm/posix_types.h /usr/include/bits/byteswap.h /usr/include/bits/endian.h /usr/include/bits/select.h /usr/include/bits/sigset.h /usr/include/bits/stdio_lim.h /usr/include/bits/sys_errlist.h /usr/include/bits/time.h /usr/include/bits/typesizes.h /usr/include/bits/waitflags.h /usr/include/bits/waitstatus.h /usr/include/gnu/stubs-64.h /usr/include/gnu/stubs.h /usr/include/wchar.h","c, gcc, include, c-preprocessor, redhat",16,81032,3,https://stackoverflow.com/questions/6902254/stdlib-h-no-such-file-or-directory 72690495,Interact with podman docker via socket in Redhat 9,"I'm trying to migrate one of my dev boxes over from centos 8 to RHEL9. I rely heavily on docker and noticed when I tried to run a docker command on the RHEL box it installed podman-docker. This seemed to go smoothly; I was able to pull an image, launch, build, commit a new version without problem using the docker commands I knew already. The problem I have encountered though is I can't seem to interact with it via the docker socket (which seems to be a link to the podman one). 
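On the stdlib.h question above, a hedged reading: the -H trace shows the preprocessor does find /usr/include/stdlib.h, and the later "redefinition of 'struct timespec'" errors come from mixing the kernel's <linux/time.h> with glibc's <time.h>; dropping the kernel header include is the usual cure (the test program's own #include lines were stripped by formatting, so this is inferred from the trace, not certain). The -H trick itself generalizes to any header-resolution question:

```shell
# Print every header the preprocessor opens for a minimal program; the
# entry for /usr/include/stdlib.h shows where it is being found.
printf '#include <stdlib.h>\nint main(void){return 0;}\n' > /tmp/t_stdlib.c
gcc -H -fsyntax-only /tmp/t_stdlib.c 2>&1 | grep 'stdlib.h' | head -n 2
rm -f /tmp/t_stdlib.c
```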
If I run the docker command: [@rhel9 ~]$ docker images Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg. REPOSITORY TAG IMAGE ID CREATED SIZE docker.io/redhat/ubi9 dev_image de371523ca26 6 hours ago 805 MB docker.io/redhat/ubi9 latest 9ad46cd10362 6 days ago 230 MB it has my images listed as expected. I should be able to also run: [@rhel9 ~]$ curl --unix-socket /var/run/docker.sock -H 'Content-Type: application/json' [URL] | jq . % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 3 100 3 0 0 55 0 --:--:-- --:--:-- --:--:-- 55 [] but as you can see, nothing is coming back. The socket is up and running as I can ping it without issue: [@rhel9 ~]$ curl -H ""Content-Type: application/json"" --unix-socket /var/run/docker.sock [URL] OK I also tried the curl commands using the podman socket directly but it had the same results. Is there something I am missing or a trick to getting it to work so that I can interact with docker/podman via the socket?","Interact with podman docker via socket in Redhat 9 I'm trying to migrate one of my dev boxes over from centos 8 to RHEL9. I rely heavily on docker and noticed when I tried to run a docker command on the RHEL box it installed podman-docker. This seemed to go smoothly; I was able to pull an image, launch, build, commit a new version without problem using the docker commands I knew already. The problem I have encountered though is I can't seem to interact with it via the docker socket (which seems to be a link to the podman one). If I run the docker command: [@rhel9 ~]$ docker images Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg. REPOSITORY TAG IMAGE ID CREATED SIZE docker.io/redhat/ubi9 dev_image de371523ca26 6 hours ago 805 MB docker.io/redhat/ubi9 latest 9ad46cd10362 6 days ago 230 MB it has my images listed as expected. 
I should be able to also run: [@rhel9 ~]$ curl --unix-socket /var/run/docker.sock -H 'Content-Type: application/json' [URL] | jq . % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 3 100 3 0 0 55 0 --:--:-- --:--:-- --:--:-- 55 [] but as you can see, nothing is coming back. The socket is up and running as I can ping it without issue: [@rhel9 ~]$ curl -H ""Content-Type: application/json"" --unix-socket /var/run/docker.sock [URL] OK I also tried the curl commands using the podman socket directly but it had the same results. Is there something I am missing or a trick to getting it to work so that I can interact with docker/podman via the socket?","docker, redhat, podman",15,59735,3,https://stackoverflow.com/questions/72690495/interact-with-podman-docker-via-socket-in-redhat-9 45008355,Elasticsearch process memory locking failed,"I have set boostrap.memory_lock=true Updated /etc/security/limits.conf added memlock unlimited for elastic search user My elastic search was running fine for many months. Suddenly it failed 1 day back. In logs I can see below error and process never starts ERROR: bootstrap checks failed memory locking requested for elasticsearch process but memory is not locked I hit ulimit -as and I can see max locked memory set to unlimited. What is going wrong here? I have been trying for hours but all in vain. Please help. 
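A hedged note on the empty podman response above: the _ping reply shows the socket itself is serving. In the Docker Engine API, GET /containers/json returns only running containers, so [] is the documented answer when nothing is running; images are listed by a separate endpoint. Separately, podman keeps distinct root and rootless image stores, so images pulled as an unprivileged user are not visible through the root socket at /var/run/docker.sock. Illustrative queries (they require a running podman socket, so they are a sketch rather than something runnable here):

```shell
# Running containers only; all=true includes stopped ones as well:
curl --unix-socket /var/run/docker.sock 'http://localhost/containers/json?all=true'
# Images are served by a different endpoint:
curl --unix-socket /var/run/docker.sock http://localhost/images/json
# Rootless images live behind the per-user socket instead (path assumed):
curl --unix-socket "$XDG_RUNTIME_DIR/podman/podman.sock" http://localhost/images/json
```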
OS is RHEL 7.2 Elasticsearch 5.1.2 ulimit -as output core file size (blocks -c) 0 data seg size (kbytes -d) unlimited scheduling policy (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 83552 max locked memory (kbytes, -l) unlimited max memory size (kbytes, -m) unlimited open files (-n) 65536 pipe size (512 bytes, -q) 8 POSIX message queues (bytes,-q) 819200 real-time priority (-r) 0 stack size kbytes, -s) 8192 cpu time seconds, -t) unlimited max user processes (-u) 4096 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited","Elasticsearch process memory locking failed I have set boostrap.memory_lock=true Updated /etc/security/limits.conf added memlock unlimited for elastic search user My elastic search was running fine for many months. Suddenly it failed 1 day back. In logs I can see below error and process never starts ERROR: bootstrap checks failed memory locking requested for elasticsearch process but memory is not locked I hit ulimit -as and I can see max locked memory set to unlimited. What is going wrong here? I have been trying for hours but all in vain. Please help. OS is RHEL 7.2 Elasticsearch 5.1.2 ulimit -as output core file size (blocks -c) 0 data seg size (kbytes -d) unlimited scheduling policy (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 83552 max locked memory (kbytes, -l) unlimited max memory size (kbytes, -m) unlimited open files (-n) 65536 pipe size (512 bytes, -q) 8 POSIX message queues (bytes,-q) 819200 real-time priority (-r) 0 stack size kbytes, -s) 8192 cpu time seconds, -t) unlimited max user processes (-u) 4096 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited","linux, elasticsearch, redhat",15,33681,9,https://stackoverflow.com/questions/45008355/elasticsearch-process-memory-locking-failed 3892282,Will Java compiled in windows work in Linux?,"My Java program is in working order when i use it under Windows(Eclipse and Bluej). I compress it to a Jar and send it to my red hat and bang. 
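For the Elasticsearch memory-locking question above: on systemd-based systems such as RHEL 7, `/etc/security/limits.conf` applies only to PAM logins, not to services, so the memlock limit has to be raised via a systemd drop-in for the service itself (note the setting is spelled `bootstrap.memory_lock`, not `boostrap.`). A sketch assuming the stock `elasticsearch.service` unit name:

```shell
# systemd services ignore limits.conf; give the unit its own memlock limit
sudo mkdir -p /etc/systemd/system/elasticsearch.service.d
sudo tee /etc/systemd/system/elasticsearch.service.d/memlock.conf <<'EOF'
[Service]
LimitMEMLOCK=infinity
EOF
sudo systemctl daemon-reload
sudo systemctl restart elasticsearch
```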
Nothing works. It breaks on the weirdest things: a text field's set text will not show, the JPasswordField just disappeared, the Java AWT Robot dies too... the list goes on. First I thought my Linux JRE must be out of date, but I installed the latest JRE and then the JDK with no improvement at all. I have a feeling that I misunderstood Java's cross-platform ability. I also tried removing all of my functions and guts to see what is breaking, but it seems every second thing is breaking, other than some of the major GUI components and most of the back-end stuff. Basically anything that uses something fancy blows up in my face, such as turning a text field into a password field... This is my first time posting ;) please be nice to the newbie! Thanks!!! SOLVED!!! Yay. Problem solved!!! It was because my Java path wasn't set, so GCC/GCJ jumped in instead of my Oracle Java, even though I used java -jar xxx.jar. So I put the Java directory path in front of my java -jar xxx.jar and it worked like a charm. Unless you set the path, you have to do this manually: /usr/java/jdk1.6.0_21/jre/bin/java -jar xxxxx.jar and java -version to check if your real Java is running or if it is still GCJ","Will Java compiled in windows work in Linux? My Java program is in working order when I use it under Windows (Eclipse and BlueJ). I compress it to a Jar and send it to my Red Hat box and bang. Nothing works. It breaks on the weirdest things: a text field's set text will not show, the JPasswordField just disappeared, the Java AWT Robot dies too... the list goes on. First I thought my Linux JRE must be out of date, but I installed the latest JRE and then the JDK with no improvement at all. I have a feeling that I misunderstood Java's cross-platform ability. I also tried removing all of my functions and guts to see what is breaking, but it seems every second thing is breaking, other than some of the major GUI components and most of the back-end stuff.
Basically anything that uses something fancy blows up in my face, such as turning a text field into a password field... This is my first time posting ;) please be nice to the newbie! Thanks!!! SOLVED!!! Yay. Problem solved!!! It was because my Java path wasn't set, so GCC/GCJ jumped in instead of my Oracle Java, even though I used java -jar xxx.jar. So I put the Java directory path in front of my java -jar xxx.jar and it worked like a charm. Unless you set the path, you have to do this manually: /usr/java/jdk1.6.0_21/jre/bin/java -jar xxxxx.jar and java -version to check if your real Java is running or if it is still GCJ","linux, cross-platform, redhat, java",15,28271,10,https://stackoverflow.com/questions/3892282/will-java-compiled-in-windows-work-in-linux 40231172,How to install vim on RedHat via commandline,"I am running RHEL 7.2 (Maipo) on an AWS instance with commandline access. To my greatest surprise, vim needs to be installed, and as I am fairly new to RedHat, I was at a loss initially as to the easiest way to install it, so I am adding it below for future reference so beginners like myself can just crack on with it.","How to install vim on RedHat via commandline I am running RHEL 7.2 (Maipo) on an AWS instance with commandline access. To my greatest surprise, vim needs to be installed, and as I am fairly new to RedHat, I was at a loss initially as to the easiest way to install it, so I am adding it below for future reference so beginners like myself can just crack on with it.","vim, installation, command-line-interface, redhat",15,33301,1,https://stackoverflow.com/questions/40231172/how-to-install-vim-on-redhat-via-commmandline 22538185,Openshift app redirecting to [URL],I have hosted an app on Red Hat OpenShift.
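The self-answer to the "Will Java compiled in windows work in Linux?" question above boils down to checking which JVM actually executes the jar. A quick diagnostic sketch (the JDK path is the asker's example, not a fixed location):

```shell
java -version        # "gcj" or "libgcj" in the output means GNU GCJ is running, not Oracle/OpenJDK
which java           # typically /usr/bin/java -> /etc/alternatives/java
# On Red Hat systems, point the 'java' alternative at the JVM you installed:
sudo alternatives --config java
# Or bypass PATH entirely with an explicit JVM path:
/usr/java/jdk1.6.0_21/jre/bin/java -jar app.jar
```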
I didn't change anything, but it started redirecting to [URL] and throwing a 404 error. Can anyone help me in solving this?,"http-status-code-404, redhat, openshift, cname",15,3498,6,https://stackoverflow.com/questions/22538185/openshift-app-redirecting-to-https-domain-name-app 8747533,Jenkins / Hudson CI Minimum Requirements for a linux RH installation,"We are planning on using Jenkins (used to be Hudson) for the automated builds of our project. I need to find out what it needs from a system requirements standpoint (RAM, disk, CPU) for a Linux RH installation. We will be testing a mobile application project. I did check this post but couldn't find a response.","Jenkins / Hudson CI Minimum Requirements for a linux RH installation We are planning on using Jenkins (used to be Hudson) for the automated builds of our project. I need to find out what it needs from a system requirements standpoint (RAM, disk, CPU) for a Linux RH installation. We will be testing a mobile application project. I did check this post but couldn't find a response.","linux, hudson, redhat",15,20618,1,https://stackoverflow.com/questions/8747533/jenkins-hudson-ci-minimum-requirements-for-a-linux-rh-installation 32264427,What is the difference between ~/ and ~ in Linux?,"I am a novice to Linux, having used it for a little more than a year. Can anybody help me resolve my question? When I use ~/ only, it shows the user home directory.
Why does it not work in the case of using ~ alone to specify a path to a file or directory?","linux, redhat",15,24883,1,https://stackoverflow.com/questions/32264427/what-is-the-difference-between-and-in-linux 60622192,Keycloak: Session cookies are missing within the token request with the new Chrome SameSite/Secure cookie enforcement,"Recently my application using Keycloak stopped working with a 400 token request after authenticating. What I found so far is that within the token request, the Keycloak cookies (AUTH_SESSION_ID, KEYCLOAK_IDENTITY, KEYCLOAK_SESSION) are not sent within the request headers, causing the request for a token to fail, and the application gets a session error. By digging more, I found that Chrome now blocks cookies without the SameSite attribute set, which is the case for the Keycloak cookies, and that's why they are never parsed within the token acquisition request after authenticating.
The error I get: [URL] [URL] This is very serious, as it blocks applications secured by the Keycloak library from communicating with the Keycloak server. Update: with Google Chrome's new SameSite cookie enforcement, any third-party library that sets cookies without the SameSite attribute properly set will have those cookies ignored. [URL] [URL]","google-chrome, single-sign-on, redhat, keycloak, keycloak-services",15,36362,3,https://stackoverflow.com/questions/60622192/keycloak-session-cookies-are-missing-within-the-token-request-with-the-new-chro 28802298,Yum repositories don't work unless there are exceptions in the AWS firewall. How do I make the exceptions based on a DNS name?,"When I try to install something via yum (e.g., yum install java), I get the following: Could not contact CDS load balancer rhui2-cds01.us-west-2.aws.ce.redhat.com, trying others.
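For the Keycloak SameSite question above, the durable fix is upgrading to a Keycloak release that sets `SameSite=None; Secure` on its own cookies and serving it over HTTPS. For local debugging only, Chrome's enforcement could be switched off with feature flags (these flag names were current around Chrome 80 and are assumptions here; they are not a production fix):

```shell
# Local testing ONLY: disable Chrome's SameSite-by-default enforcement
google-chrome \
  --disable-features=SameSiteByDefaultCookies,CookiesWithoutSameSiteMustBeSecure
```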
Could not contact any CDS load balancers: rhui2-cds01.us-west-2.aws.ce.redhat.com, rhui2-cds02.us-west-2.aws.ce.redhat.com. Earlier today I installed various yum packages. This evening I tried several, but none worked. This link explains that certain firewall rules need to be made: [URL] I don't have an explanation why all Yum install commands were working earlier today. Several different ones later stopped working. Here is the solution: via the AWS console, I opened all traffic over port 443 (inbound and outbound traffic). This isn't an ideal solution or a permanent solution. The security groups in the AWS console only permit filtering based on IP addresses and IP address ranges. DNS names aren't part of the filtering. Using AWS, how can I open port 443 and port 80 to specific DNS names?","amazon-web-services, redhat, yum",14,29746,5,https://stackoverflow.com/questions/28802298/yum-repositories-dont-work-unless-there-are-exceptions-in-the-aws-firewall-how 41156556,What exact command is to install pm2 on offline RHEL,First of all it's not a duplicate question of below:- How to install npm -g on offline server I installed npmbox ( [URL] ) on my offline RHEL server but I still do not know how to install pm2 or any other package using that. Please advise.,What exact command is to install pm2 on offline RHEL First of all it's not a duplicate question of below:- How to install npm -g on offline server I installed npmbox ( [URL] ) on my offline RHEL server but I still do not know how to install pm2 or any other package using that. Please advise.,"node.js, linux, ubuntu, redhat, pm2",14,23579,5,https://stackoverflow.com/questions/41156556/what-exact-command-is-to-install-pm2-on-offline-rhel 21671552,How to install Xvfb (X virtual framebuffer) on Redhat 6.5?,I have tried to install Xvfb on Red Hat 6.5 using yum -y install xorg-x11-server-Xvfb but it is not installed and it gives the message No package xorg-x11-server-Xvfb available.
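For the offline-pm2 question above, npmbox works in two halves: build an archive on a machine with internet access, then unpack it on the offline server. A sketch (exact `npmunbox` flag handling is an assumption; check `npmunbox --help`):

```shell
# On a machine WITH internet access:
npm install -g npmbox
npmbox pm2                 # writes pm2.npmbox with all dependencies bundled

# Copy pm2.npmbox to the offline RHEL server, then:
npmunbox pm2.npmbox        # installs pm2 from the archive, no network needed
```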
Error: Nothing to do Please help me install Xvfb on Redhat 6.5 to remove the headless exception in the Applet. Thanks.,How to install Xvfb (X virtual framebuffer) on Redhat 6.5? I have tried to install Xvfb on Red Hat 6.5 using yum -y install xorg-x11-server-Xvfb but it is not installed and it gives the message No package xorg-x11-server-Xvfb available. Error: Nothing to do Please help me install Xvfb on Redhat 6.5 to remove the headless exception in the Applet. Thanks.,"linux, installation, redhat",14,49213,2,https://stackoverflow.com/questions/21671552/how-to-install-xvfb-x-virtual-framebuffer-on-redhat-6-5 31523030,Where is javac after installing new openjdk?,"An additional JDK was installed and configured on RHEL5. yum install java-1.7.0-openjdk.x86_64 update-alternatives It appeared to work: java -version points to the desired 1.7. However, javac -version still points to the old 1.6. sudo update-alternatives --config javac only lists one option. I could not find the additional javac.
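On RHEL 6, `xorg-x11-server-Xvfb` ships in the Optional channel, which is disabled by default; that is a common reason for the "No package ... available" in the Xvfb question above. A sketch for a subscription-managed system (the repo id is an assumption; list yours with `subscription-manager repos`):

```shell
# Enable the Optional channel, then install Xvfb
sudo subscription-manager repos --enable=rhel-6-server-optional-rpms
sudo yum -y install xorg-x11-server-Xvfb
```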
How do I install or configure a 1.7 javac ?","java, redhat, javac, rhel5",14,19632,2,https://stackoverflow.com/questions/31523030/where-is-javac-after-installing-new-openjdk 58616161,postcss-svgo: TypeError: Cannot set property 'multipassCount' of undefined (Gatsby),"On a Gatsby 2.17.6 project, when building: Building production JavaScript and CSS bundles [==== ] 1.940 s 1/6 17% run queries failed Building production JavaScript and CSS bundles - 75.519s ERROR #98123 WEBPACK Generating JavaScript bundles failed postcss-svgo: TypeError: Cannot set property 'multipassCount' of undefined not finished run queries - 77.639s npm ERR! code ELIFECYCLE npm ERR! errno 1 npm ERR! gatsby-starter-default@1.0.0 build: node node_modules/gatsby/dist/bin/gatsby.js build` npm ERR! Exit status 1 These are some of my dependencies: ""dependencies"": { ""babel-plugin-styled-components"": ""^1.8.0"", : ""gatsby"": ""^2.0.19"", ""gatsby-plugin-favicon"": ""^3.1.4"", ""gatsby-plugin-google-fonts"": ""0.0.4"", ""gatsby-plugin-offline"": ""^2.0.5"", ""gatsby-plugin-react-helmet"": ""^3.0.0"", ""gatsby-plugin-styled-components"": ""^3.0.1"", : ""react"": ""^16.5.1"", ""react-dom"": ""^16.5.1"", ""react-helmet"": ""^5.2.0"", ""react-leaflet"": ""^2.1.1"", ""styled-components"": ""^4.1.1"" } I don't see any configurations about postcss on gatsby-config.js, I guess it's a default behaviour of Gatsby. 
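The javac question above is usually resolved by installing the `-devel` package: `java-1.7.0-openjdk` contains only the JRE, while javac ships with the JDK (`-devel`) package. Sketch:

```shell
sudo yum install java-1.7.0-openjdk-devel
# The 1.7 javac now registers as an alternative:
sudo update-alternatives --config javac
javac -version
```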
npm ls postcss-svgo throw this: gatsby-starter-default@1.0.0 //source └─┬ gatsby@2.17.6 └─┬ optimize-css-assets-webpack-plugin@5.0.3 └─┬ cssnano@4.1.10 └─┬ cssnano-preset-default@4.0.7 └── postcss-svgo@4.0.2 I wouldn't mind to disable postcss-svgo if that's a solution, but I don't know how.","postcss-svgo: TypeError: Cannot set property 'multipassCount' of undefined (Gatsby) On a Gatsby 2.17.6 project, when building: Building production JavaScript and CSS bundles [==== ] 1.940 s 1/6 17% run queries failed Building production JavaScript and CSS bundles - 75.519s ERROR #98123 WEBPACK Generating JavaScript bundles failed postcss-svgo: TypeError: Cannot set property 'multipassCount' of undefined not finished run queries - 77.639s npm ERR! code ELIFECYCLE npm ERR! errno 1 npm ERR! gatsby-starter-default@1.0.0 build: node node_modules/gatsby/dist/bin/gatsby.js build` npm ERR! Exit status 1 These are some of my dependencies: ""dependencies"": { ""babel-plugin-styled-components"": ""^1.8.0"", : ""gatsby"": ""^2.0.19"", ""gatsby-plugin-favicon"": ""^3.1.4"", ""gatsby-plugin-google-fonts"": ""0.0.4"", ""gatsby-plugin-offline"": ""^2.0.5"", ""gatsby-plugin-react-helmet"": ""^3.0.0"", ""gatsby-plugin-styled-components"": ""^3.0.1"", : ""react"": ""^16.5.1"", ""react-dom"": ""^16.5.1"", ""react-helmet"": ""^5.2.0"", ""react-leaflet"": ""^2.1.1"", ""styled-components"": ""^4.1.1"" } I don't see any configurations about postcss on gatsby-config.js, I guess it's a default behaviour of Gatsby. 
npm ls postcss-svgo throw this: gatsby-starter-default@1.0.0 //source └─┬ gatsby@2.17.6 └─┬ optimize-css-assets-webpack-plugin@5.0.3 └─┬ cssnano@4.1.10 └─┬ cssnano-preset-default@4.0.7 └── postcss-svgo@4.0.2 I wouldn't mind to disable postcss-svgo if that's a solution, but I don't know how.","node.js, webpack, redhat, gatsby, postcss",14,3512,4,https://stackoverflow.com/questions/58616161/postcss-svgo-typeerror-cannot-set-property-multipasscount-of-undefined-gats 21742227,RedHat daemon function usage,"I'm working on an init script for Jetty on RHEL. Trying to use the daemon function provided by the init library ( /etc/rc.d/init.d/functions ). I found this terse documentation , and an online example (I've also been looking at other init scripts on the system for examples). Look at this snippet from online to start the daemon daemon --user=""$DAEMON_USER"" --pidfile=""$PIDFILE"" ""$DAEMON $DAEMON_ARGS &"" RETVAL=$? pid=ps -A | grep $NAME | cut -d"" "" -f2 pid=echo $pid | cut -d"" "" -f2 if [ -n ""$pid"" ]; then echo $pid > ""$PIDFILE"" fi Why bother looking up the $PID and writing it to the $PIDFILE by hand? I guess I'm wondering what the point of the --pidfile option to the daemon function is.","RedHat daemon function usage I'm working on an init script for Jetty on RHEL. Trying to use the daemon function provided by the init library ( /etc/rc.d/init.d/functions ). I found this terse documentation , and an online example (I've also been looking at other init scripts on the system for examples). Look at this snippet from online to start the daemon daemon --user=""$DAEMON_USER"" --pidfile=""$PIDFILE"" ""$DAEMON $DAEMON_ARGS &"" RETVAL=$? pid=ps -A | grep $NAME | cut -d"" "" -f2 pid=echo $pid | cut -d"" "" -f2 if [ -n ""$pid"" ]; then echo $pid > ""$PIDFILE"" fi Why bother looking up the $PID and writing it to the $PIDFILE by hand? 
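The `multipassCount` error in the Gatsby question above was widely attributed at the time to a bug in svgo 1.3.1 (fixed in 1.3.2), pulled in transitively through cssnano. One commonly reported workaround was pinning svgo in package.json; note that `"resolutions"` is honored by Yarn, not plain npm (npm users needed a helper such as npm-force-resolutions), so this is a sketch under that assumption:

```json
{
  "resolutions": {
    "svgo": "1.3.0"
  }
}
```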
I guess I'm wondering what the point of the --pidfile option to the daemon function is.","linux, bash, daemon, redhat, init",14,30565,1,https://stackoverflow.com/questions/21742227/redhat-daemon-function-usage 14400595,Java OracleDB connection taking too long the first time,"I'm having a problem when connecting to an Oracle database, it takes a long time (about ~5 minutes) and it sends the below shown exception. Most of the time, after the first error, the next connections for the same process work correctly. It is a RHEL 6 machine, with two different network interfaces and ip addresses. NOTE: I am not using an url like: ""jdbc:oracle:thin:@xxxx:yyy, it is actually: ""jdbc:oracle:thin:@xxxx:yyyy:zzz. The SID is not missing, sorry for that :( This is roughly what I've isolated: bin/java -classpath ojdbc6_g.jar -Djavax.net.debug=all -Djava.util.logging.config.file=logging.properties Class.forName (""oracle.jdbc.OracleDriver"") DriverManager.getConnection(""jdbc:oracle:thin:@xxxx:yyyy"", ""aaaa"", ""bbbb"") Error StackTrace: java.sql.SQLRecoverableException: IO Error: Connection reset at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:533) at oracle.jdbc.driver.PhysicalConnection.(PhysicalConnection.java:557) at oracle.jdbc.driver.T4CConnection.(T4CConnection.java:233) at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:29) at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:556) at java.sql.DriverManager.getConnection(DriverManager.java:579) at java.sql.DriverManager.getConnection(DriverManager.java:221) at test.jdbc.Main(Test.java:120) Caused by: java.net.SocketException: Connection reset at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113) at java.net.SocketOutputStream.write(SocketOutputStream.java:153) at oracle.net.ns.DataPacket.send(DataPacket.java:248) at oracle.net.ns.NetOutputStream.flush(NetOutputStream.java:227) at oracle.net.ns.NetInputStream.getNextPacket(NetInputStream.java:309) at 
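On the `daemon` question above: in `/etc/rc.d/init.d/functions`, `--pidfile` only tells `daemon` (and later `status`/`killproc`) where to *look* for an existing PID; it never writes the file, which is why init scripts record the PID themselves. The quoted snippet also lost its backticks around the `ps` pipeline; a sketch of the intended logic, swapping in `pgrep` for the fragile `ps | grep | cut` chain:

```shell
daemon --user="$DAEMON_USER" --pidfile="$PIDFILE" "$DAEMON $DAEMON_ARGS &"
RETVAL=$?
# daemon() does not write $PIDFILE, so capture the PID ourselves;
# pgrep -f matches against the full command line
pid=$(pgrep -f "$DAEMON" | head -n1)
[ -n "$pid" ] && echo "$pid" > "$PIDFILE"
```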
oracle.net.ns.NetInputStream.read(NetInputStream.java:257) at oracle.net.ns.NetInputStream.read(NetInputStream.java:182) at oracle.net.ns.NetInputStream.read(NetInputStream.java:99) at oracle.jdbc.driver.T4CSocketInputStreamWrapper.readNextPacket(T4CSocketInputStreamWrapper.java:121) at oracle.jdbc.driver.T4CSocketInputStreamWrapper.read(T4CSocketInputStreamWrapper.java:77) at oracle.jdbc.driver.T4CMAREngine.unmarshalUB1(T4CMAREngine.java:1173) at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:309) at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:200) at oracle.jdbc.driver.T4CTTIoauthenticate.doOSESSKEY(T4CTTIoauthenticate.java:404) at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:430) ... 35 more There's a very verbose log of what happens over here: [URL] The line that says GET STUCK HERE represents the 5 minute waiting time","Java OracleDB connection taking too long the first time I'm having a problem when connecting to an Oracle database, it takes a long time (about ~5 minutes) and it sends the below shown exception. Most of the time, after the first error, the next connections for the same process work correctly. It is a RHEL 6 machine, with two different network interfaces and ip addresses. NOTE: I am not using an url like: ""jdbc:oracle:thin:@xxxx:yyy, it is actually: ""jdbc:oracle:thin:@xxxx:yyyy:zzz. 
The SID is not missing, sorry for that :( This is roughly what I've isolated: bin/java -classpath ojdbc6_g.jar -Djavax.net.debug=all -Djava.util.logging.config.file=logging.properties Class.forName (""oracle.jdbc.OracleDriver"") DriverManager.getConnection(""jdbc:oracle:thin:@xxxx:yyyy"", ""aaaa"", ""bbbb"") Error StackTrace: java.sql.SQLRecoverableException: IO Error: Connection reset at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:533) at oracle.jdbc.driver.PhysicalConnection.(PhysicalConnection.java:557) at oracle.jdbc.driver.T4CConnection.(T4CConnection.java:233) at oracle.jdbc.driver.T4CDriverExtension.getConnection(T4CDriverExtension.java:29) at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:556) at java.sql.DriverManager.getConnection(DriverManager.java:579) at java.sql.DriverManager.getConnection(DriverManager.java:221) at test.jdbc.Main(Test.java:120) Caused by: java.net.SocketException: Connection reset at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:113) at java.net.SocketOutputStream.write(SocketOutputStream.java:153) at oracle.net.ns.DataPacket.send(DataPacket.java:248) at oracle.net.ns.NetOutputStream.flush(NetOutputStream.java:227) at oracle.net.ns.NetInputStream.getNextPacket(NetInputStream.java:309) at oracle.net.ns.NetInputStream.read(NetInputStream.java:257) at oracle.net.ns.NetInputStream.read(NetInputStream.java:182) at oracle.net.ns.NetInputStream.read(NetInputStream.java:99) at oracle.jdbc.driver.T4CSocketInputStreamWrapper.readNextPacket(T4CSocketInputStreamWrapper.java:121) at oracle.jdbc.driver.T4CSocketInputStreamWrapper.read(T4CSocketInputStreamWrapper.java:77) at oracle.jdbc.driver.T4CMAREngine.unmarshalUB1(T4CMAREngine.java:1173) at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:309) at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:200) at oracle.jdbc.driver.T4CTTIoauthenticate.doOSESSKEY(T4CTTIoauthenticate.java:404) at oracle.jdbc.driver.T4CConnection.logon(T4CConnection.java:430) 
... 35 more There's a very verbose log of what happens over here: [URL] The line that says GET STUCK HERE represents the 5 minute waiting time","java, oracle11g, redhat",13,9486,2,https://stackoverflow.com/questions/14400595/java-oracledb-connection-taking-too-long-the-first-time 43265767,Difference between noarch rpm and a rpm,Can someone explain difference between noarch rpm and rpm. Is these two are dependents. I have Jenkins rpm and there are some noarch rpm too. what I can do with noarch rpm. Thanks for your help,Difference between noarch rpm and a rpm Can someone explain difference between noarch rpm and rpm. Is these two are dependents. I have Jenkins rpm and there are some noarch rpm too. what I can do with noarch rpm. Thanks for your help,"linux, centos, operating-system, redhat, rpm",13,22100,1,https://stackoverflow.com/questions/43265767/difference-between-noarch-rpm-and-a-rpm 24676687,top 'xterm': unknown terminal type,I have an error when run TOP command: >top 'xterm': unknown terminal type. > echo $TERM xterm > echo $DISPLAY DYSPLAY: Undefined variable. > cat /etc/redhat-release Red Hat Enterprise Linux Server release 6.3 (Santiago) > ls /usr/share/terminfo/ 1 2 3 4 5 6 7 8 9 a A b c d e E f g h i j k l L m M n N o p P q Q r s t u v w x X z > ls /usr/share/terminfo/x/xterm /usr/share/terminfo/x/xterm i have that problem also with Root. does TOP use xterm? How can i do?,top 'xterm': unknown terminal type I have an error when run TOP command: >top 'xterm': unknown terminal type. > echo $TERM xterm > echo $DISPLAY DYSPLAY: Undefined variable. > cat /etc/redhat-release Red Hat Enterprise Linux Server release 6.3 (Santiago) > ls /usr/share/terminfo/ 1 2 3 4 5 6 7 8 9 a A b c d e E f g h i j k l L m M n N o p P q Q r s t u v w x X z > ls /usr/share/terminfo/x/xterm /usr/share/terminfo/x/xterm i have that problem also with Root. does TOP use xterm? 
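A frequently reported cause of the multi-minute first JDBC connection in the Oracle question above is the driver blocking on `/dev/random` while gathering entropy for the `doOSESSKEY` step visible in the stack trace. Pointing the JVM at the non-blocking `/dev/urandom` is the usual first check (the `/dev/./urandom` spelling works around old JDK path canonicalization; `YourTestClass` is a placeholder):

```shell
java -Djava.security.egd=file:/dev/./urandom \
     -classpath ojdbc6_g.jar:. YourTestClass
```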
What can I do?,"linux, shell, command, redhat, terminfo",13,32915,3,https://stackoverflow.com/questions/24676687/top-xterm-unknown-terminal-type 51745010,"ldap_modify: Other (e.g., implementation specific) error (80)","I followed RHEL7: Configure a LDAP directory service for user connection to configure openldap on CentOS Linux release 7. First I create the /etc/openldap/changes.ldif file and paste in the content, replacing the password, of course, with the previously created password.
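Since `/usr/share/terminfo/x/xterm` exists in the `top` question above, the terminfo database itself is present; a stale `TERMINFO` variable or an unreadable entry is a common culprit, because when `TERMINFO` is set, ncurses searches only that directory. A diagnostic sketch:

```shell
echo "$TERMINFO"     # if set, ncurses looks ONLY here for terminal entries
unset TERMINFO       # fall back to the system terminfo database
infocmp xterm >/dev/null && echo "xterm terminfo entry readable"
top
```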
Then I get to send the new configuration to the slapd server using the command # ldapmodify -Y EXTERNAL -H ldapi:/// -f /etc/openldap/changes.ldif Once I do that I get the following error: # ldapmodify -Y EXTERNAL -H ldapi:/// -f /etc/openldap/changes.ldif SASL/EXTERNAL authentication started SASL username: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth SASL SSF: 0 modifying entry ""olcDatabase={2}hdb,cn=config"" modifying entry ""olcDatabase={2}hdb,cn=config"" modifying entry ""olcDatabase={2}hdb,cn=config"" modifying entry ""cn=config"" ldap_modify: Other (e.g., implementation specific) error (80) All the files are readable for the user slapd is running as. What's wrong there? I couldn't find anything useful to feed SEARCHENGINE with. It's been a while that I've been looking for a solution but at the moment all what I found is two people Re: Error 80 with ldapmodify ldap_modify: Other (e.g., implementation specific) error (80) Having the same problem and asking the same question but no answers.","ldap, redhat, centos7, rhel, slapd",13,28771,1,https://stackoverflow.com/questions/51745010/ldap-modify-other-e-g-implementation-specific-error-80 39464203,sed + how to append lines with indent,"I use the following sed command in order to append the lines: rotate 1 size 1k after the word missingok the little esthetic problem is that ""rotate 1"" isn’t alignment like the other lines # sed '/missingok/a rotate 1\n size 1k' /etc/logrotate.d/httpd /var/log/httpd/*log { missingok rotate 1 size 1k notifempty sharedscripts delaycompress postrotate /sbin/service httpd reload > /dev/null 2>/dev/null || true endscript } someone have advice how to indent the string ""rotate 1"" under missingok string ? 
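For the ldapmodify error 80 above, note that the failing entry is the last one (`cn=config`), which in that guide carries the TLS certificate settings; commonly reported causes are that `olcTLSCertificateFile` and `olcTLSCertificateKeyFile` must be changed in a single modify operation (so the pair stays consistent) and that the files must be readable by the slapd user. A checking sketch (file names and `tls.ldif` are hypothetical examples, not the guide's exact paths):

```shell
# The cert and key must be readable by the user slapd runs as (often 'ldap')
sudo ls -l /etc/openldap/certs/
sudo chown ldap:ldap /etc/openldap/certs/cert.pem /etc/openldap/certs/priv.pem
sudo chmod 600 /etc/openldap/certs/priv.pem
# Apply both TLS attributes together in one ldapmodify run
sudo ldapmodify -Y EXTERNAL -H ldapi:/// -f tls.ldif
```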
the original file /var/log/httpd/*log { missingok notifempty sharedscripts delaycompress postrotate /sbin/service httpd reload > /dev/null 2>/dev/null || true endscript }","sed + how to append lines with indent I use the following sed command in order to append the lines: rotate 1 size 1k after the word missingok the little esthetic problem is that ""rotate 1"" isn’t alignment like the other lines # sed '/missingok/a rotate 1\n size 1k' /etc/logrotate.d/httpd /var/log/httpd/*log { missingok rotate 1 size 1k notifempty sharedscripts delaycompress postrotate /sbin/service httpd reload > /dev/null 2>/dev/null || true endscript } someone have advice how to indent the string ""rotate 1"" under missingok string ? the original file /var/log/httpd/*log { missingok notifempty sharedscripts delaycompress postrotate /sbin/service httpd reload > /dev/null 2>/dev/null || true endscript }","linux, sed, redhat",13,9823,3,https://stackoverflow.com/questions/39464203/sed-how-to-append-lines-with-indent 65947327,"Ansible 'no_log' for specific values in debug output, not entire module","I am studying for the RedHat Certified Specialist in Ansible Automation (EX407) and I'm playing around with the no_log module parameter. I have a sample playbook structured as so; --- - hosts: webservers tasks: - name: Query vCenter vmware_guest: hostname: ""{{ vcenter['host'] }}"" username: ""{{ vcenter['username'] }}"" password: ""{{ vcenter['password'] }}"" name: ""{{ inventory_hostname }}"" validate_certs: no delegate_to: localhost no_log: yes ... When no_log is disabled, I get a lot of helpful debug information about my VM, but when no_log is disabled I obviously can't protect my playbooks vaulted data (in this case that is the vcenter['username'] and vcenter['password'] values). 
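For the sed indentation question above, rather than hard-coding spaces after `a` (where GNU sed strips leading whitespace), the `s` command can capture the file's own indentation and reuse it:

```shell
# \1 holds whatever indentation precedes 'missingok', so the appended
# lines inherit exactly the same indent (GNU sed: \n in the replacement)
sed 's/^\( *\)missingok/\1missingok\n\1rotate 1\n\1size 1k/' /etc/logrotate.d/httpd
```

Add `-i` to edit the file in place once the output looks right.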
Enabling no_log cripples my playbooks debug output to just; ""censored"": ""the output has been hidden due to the fact that 'no_log: true' was specified for this result"", I would like to know how it is possible to censor only some of the debug output. I know this is possible because vcenter['password'] is protected in it's output regardless of my no_log state. I see this in the verbose output when no_log is disabled; ""invocation"": { ""module_args"": { ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""username"": ""administrator@vsphere.local"" } } What are your thoughts?","Ansible 'no_log' for specific values in debug output, not entire module I am studying for the RedHat Certified Specialist in Ansible Automation (EX407) and I'm playing around with the no_log module parameter. I have a sample playbook structured as so; --- - hosts: webservers tasks: - name: Query vCenter vmware_guest: hostname: ""{{ vcenter['host'] }}"" username: ""{{ vcenter['username'] }}"" password: ""{{ vcenter['password'] }}"" name: ""{{ inventory_hostname }}"" validate_certs: no delegate_to: localhost no_log: yes ... When no_log is disabled, I get a lot of helpful debug information about my VM, but when no_log is disabled I obviously can't protect my playbooks vaulted data (in this case that is the vcenter['username'] and vcenter['password'] values). Enabling no_log cripples my playbooks debug output to just; ""censored"": ""the output has been hidden due to the fact that 'no_log: true' was specified for this result"", I would like to know how it is possible to censor only some of the debug output. I know this is possible because vcenter['password'] is protected in it's output regardless of my no_log state. 
I see this in the verbose output when no_log is disabled; ""invocation"": { ""module_args"": { ""password"": ""VALUE_SPECIFIED_IN_NO_LOG_PARAMETER"", ""username"": ""administrator@vsphere.local"" } } What are your thoughts?","automation, ansible, yaml, redhat, vmware",13,29563,1,https://stackoverflow.com/questions/65947327/ansible-no-log-for-specific-values-in-debug-output-not-entire-module 17337749,puppet log file in redhat and centos,"I am running puppet agent in CentOS and Redhat. I would like to see its log file but cannot find it. In these operating systems, I clearly specify logdir = /var/log/puppet in the puppet.conf, but upon checking this directory, it is empty. Note that I did similar thing for Ubuntu and SUSE and it worked well. The issue only happened in Redhat and CentOS. Any idea of where to look for the log file in these cases? Thanks, Henry","puppet log file in redhat and centos I am running puppet agent in CentOS and Redhat. I would like to see its log file but cannot find it. In these operating systems, I clearly specify logdir = /var/log/puppet in the puppet.conf, but upon checking this directory, it is empty. Note that I did similar thing for Ubuntu and SUSE and it worked well. The issue only happened in Redhat and CentOS. Any idea of where to look for the log file in these cases? 
Thanks, Henry","logging, centos, redhat, puppet",13,32403,2,https://stackoverflow.com/questions/17337749/puppet-log-file-in-redhat-and-centos 9741574,RedHat 6/Oracle Linux 6 is not allowing key authentication via ssh,Keys are properly deployed in ~/.ssh/authorized_keys Yet ssh keeps on prompting for a password.,RedHat 6/Oracle Linux 6 is not allowing key authentication via ssh Keys are properly deployed in ~/.ssh/authorized_keys Yet ssh keeps on prompting for a password.,"redhat, selinux, sshd, oracle-enterprise-linux",12,22097,4,https://stackoverflow.com/questions/9741574/redhat-6-oracle-linux-6-is-not-allowing-key-authentication-via-ssh 97142,Ruby on Rails: no such file to load -- openssl on RedHat Linux Enterprise,"I am trying to do 'rake db:migrate' and getting the error message 'no such file to load -- openssl'. Both 'openssl' and 'openssl-devel' packages are installed. Others on Debian or Ubuntu seem to be able to get rid of this by installing 'libopenssl-ruby', which is not available for RedHat. Has anybody run into this and have a solution for it?","Ruby on Rails: no such file to load -- openssl on RedHat Linux Enterprise I am trying to do 'rake db:migrate' and getting the error message 'no such file to load -- openssl'. Both 'openssl' and 'openssl-devel' packages are installed. Others on Debian or Ubuntu seem to be able to get rid of this by installing 'libopenssl-ruby', which is not available for RedHat. Has anybody run into this and have a solution for it?","ruby-on-rails, ruby, openssl, rake, redhat",12,14078,5,https://stackoverflow.com/questions/97142/ruby-on-rails-no-such-file-to-load-openssl-on-redhat-linux-enterprise 8854882,Why does service stop after RPM is updated,"I have a software package for which I created an RPM. 
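On the RHEL 6 key-authentication question a few rows up: sshd silently ignores authorized_keys when the directory permissions are too open or, on RHEL, when the SELinux context is wrong after the files were copied into place. A minimal sketch of the usual fix (paths are the conventional ones; `restorecon` exists only on SELinux systems, so it is guarded here):

```shell
# Tighten the permissions sshd insists on. mkdir/touch make the sketch safe
# to run anywhere; on the real box the files already exist.
mkdir -p ~/.ssh && touch ~/.ssh/authorized_keys
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
# On SELinux systems (RHEL/CentOS), also reset the file contexts:
if command -v restorecon >/dev/null; then restorecon -R ~/.ssh; fi
```

If keys still fail after this, `/var/log/secure` on the server usually names the exact reason sshd rejected them.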
I can't paste the entire RPM here for IP reasons, but here is the gist of the problem: %pre /sbin/pidof program if [ ""$?"" -eq ""0"" ] then /sbin/service program stop fi %post /sbin/chkconfig program on /sbin/service program start %preun /sbin/service program stop /sbin/chkconfig program off %postun rm -rf /program_folder Everytime I try to upgrade the package, it stops the program service, installs everything, starts the service, and then stops it again and deletes the folder...any ideas?","Why does service stop after RPM is updated I have a software package for which I created an RPM. I can't paste the entire RPM here for IP reasons, but here is the gist of the problem: %pre /sbin/pidof program if [ ""$?"" -eq ""0"" ] then /sbin/service program stop fi %post /sbin/chkconfig program on /sbin/service program start %preun /sbin/service program stop /sbin/chkconfig program off %postun rm -rf /program_folder Everytime I try to upgrade the package, it stops the program service, installs everything, starts the service, and then stops it again and deletes the folder...any ideas?","redhat, rpm",12,6739,1,https://stackoverflow.com/questions/8854882/why-does-service-stop-after-rpm-is-updated 23215710,error: could not find function install_github for R version 2.15.2,"I'm having multiple problems with R right now but I want to start asking one of the most fundamental questions. I want to install GitHub files into R, but for some reason the install_github function doesn't seem to exist. For example, when I type: install_github(""devtools"") I get error: could not find function install_github The install_packages function worked perfectly fine. How can I solve this problem? To add, I want to ask whether there is a way to upgrade R, since version 2.15.2 doesn't seem to be compatible for most of the packages I want to work with. I'm currently using Linux version 3.6.11-1 RedHat 4.7.2-2 fedora linux 17.0 x86-64. 
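On the RPM-upgrade question above: during an upgrade the *old* package's %preun/%postun also run, after the new version's %post, and rpm passes each scriptlet the number of package instances that will remain ($1 = 1 for an upgrade, 0 for a true erase). Guarding on that argument is the standard fix; in this sketch a shell function stands in for the scriptlet and echo stands in for the service/cleanup commands:

```shell
# rpm invokes scriptlets with $1 = instances remaining after the transaction:
# upgrade -> old %preun/%postun see $1=1; erase -> $1=0. Guard accordingly.
preun() {
  if [ "$1" -eq 0 ]; then
    echo "stop service"          # stands in for: /sbin/service program stop
  else
    echo "upgrade: keep service"
  fi
}
preun 0   # what rpm passes on a plain uninstall
preun 1   # what rpm passes during an upgrade
```

The same `[ "$1" -eq 0 ]` guard belongs around the `rm -rf /program_folder` in %postun, which is what deletes the folder on every upgrade in the question.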
I checked the CRAN website but they seemed to have the most unupdated versions of R (if that is even possible) that dates all the way back to '09. I would seriously love to update myself from this old version of R. Any advice on this too?","error: could not find function install_github for R version 2.15.2 I'm having multiple problems with R right now but I want to start asking one of the most fundamental questions. I want to install GitHub files into R, but for some reason the install_github function doesn't seem to exist. For example, when I type: install_github(""devtools"") I get error: could not find function install_github The install_packages function worked perfectly fine. How can I solve this problem? To add, I want to ask whether there is a way to upgrade R, since version 2.15.2 doesn't seem to be compatible for most of the packages I want to work with. I'm currently using Linux version 3.6.11-1 RedHat 4.7.2-2 fedora linux 17.0 x86-64. I checked the CRAN website but they seemed to have the most unupdated versions of R (if that is even possible) that dates all the way back to '09. I would seriously love to update myself from this old version of R. Any advice on this too?","linux, r, redhat, devtools",12,29836,2,https://stackoverflow.com/questions/23215710/error-could-not-find-function-install-github-for-r-version-2-15-2 79479879,Avoiding strcpy overflow destination warning,"With a structure such as the following typedef struct { size_t StringLength; char String[1]; } mySTRING; and use of this structure along these lines mySTRING * CreateString(char * Input) { size_t Len = strlen(Input); int Needed = sizeof(mySTRING) + Len; mySTRING * pString = malloc(Needed); : strcpy(pString->String, Input); } results, on Red Hat Linux cc compiler, in the following warning, which is fair enough. 
strings.c:59:3: warning: 'strcpy' writing 14 bytes into a region of size 1 overflows the destination [-Wstringop-overflow=] strcpy(pString->String, Input); I know that, in this instance of code, this warning is something I don't need to correct. How can I tell the compiler this without turning off these warnings which might usefully find something, somewhere else, in the future. What changes can I make to the code to show the compiler this one is OK.","Avoiding strcpy overflow destination warning With a structure such as the following typedef struct { size_t StringLength; char String[1]; } mySTRING; and use of this structure along these lines mySTRING * CreateString(char * Input) { size_t Len = strlen(Input); int Needed = sizeof(mySTRING) + Len; mySTRING * pString = malloc(Needed); : strcpy(pString->String, Input); } results, on Red Hat Linux cc compiler, in the following warning, which is fair enough. strings.c:59:3: warning: 'strcpy' writing 14 bytes into a region of size 1 overflows the destination [-Wstringop-overflow=] strcpy(pString->String, Input); I know that, in this instance of code, this warning is something I don't need to correct. How can I tell the compiler this without turning off these warnings which might usefully find something, somewhere else, in the future. What changes can I make to the code to show the compiler this one is OK.","c, linux, redhat, compiler-warnings, cc",12,408,1,https://stackoverflow.com/questions/79479879/avoiding-strcpy-overflow-destination-warning 23285339,What is JBPM? Why use it?,"I am java developer. I am developing a new application. In this application am going to integrate JBPM, spring and hibernate also. 
So please, answer my below questions, what is JBPM? Why use it? What is workflow engine? please give any example. Thanks for your answer.","java, jboss, frameworks, redhat, jbpm",12,16634,2,https://stackoverflow.com/questions/23285339/what-is-jbpm-why-use-it 61662403,microdnf update command installs new packages instead of just updating existing packages,"My Dockerfile uses base image registry.access.redhat.com/ubi8/ubi-minimal which has microdnf package manager. When I include following snippet in docker file to have latest updates on existing packages, RUN true \ && microdnf clean all \ && microdnf update --nodocs \ && microdnf clean all \ && true It's not just upgrades 4 existing packages but also install 33 new packages, Transaction Summary: Installing: 33 packages Reinstalling: 0 packages Upgrading: 4 packages Removing: 0 packages Downgrading: 0 packages The dnf documentation does not suggest that it should install new packages. Is it a bug in microdnf ? microdnf update also increases the new image size by ~75MB","microdnf update command installs new packages instead of just updating existing packages My Dockerfile uses base image registry.access.redhat.com/ubi8/ubi-minimal which has microdnf package manager. When I include following snippet in docker file to have latest updates on existing packages, RUN true \ && microdnf clean all \ && microdnf update --nodocs \ && microdnf clean all \ && true It's not just upgrades 4 existing packages but also install 33 new packages, Transaction Summary: Installing: 33 packages Reinstalling: 0 packages Upgrading: 4 packages Removing: 0 packages Downgrading: 0 packages The dnf documentation does not suggest that it should install new packages. Is it a bug in microdnf ? 
microdnf update also increases the new image size by ~75MB","dockerfile, redhat, dnf, ubi",12,29283,1,https://stackoverflow.com/questions/61662403/microdnf-update-command-installs-new-packages-instead-of-just-updating-existing 41810222,"pure virtual function called" on gcc 4.4 but not on newer version or clang 3.4,"I've got an MCVE which, on some of my machines crashes when compiled with g++ version 4.4.7 but does work with clang++ version 3.4.2 and g++ version 6.3. I'd like some help to know if it comes from undefined behavior or from an actual bug of this ancient version of gcc. Code #include class BaseType { public: BaseType() : _present( false ) {} virtual ~BaseType() {} virtual void clear() {} virtual void setString(const char* value, const char* fieldName) { _present = (*value != '\0'); } protected: virtual void setStrNoCheck(const char* value) = 0; protected: bool _present; }; // ---------------------------------------------------------------------------------- class TypeTextFix : public BaseType { public: virtual void clear() {} virtual void setString(const char* value, const char* fieldName) { clear(); BaseType::setString(value, fieldName); if( _present == false ) { return; // commenting this return fix the crash. Yes it does! } setStrNoCheck(value); } protected: virtual void setStrNoCheck(const char* value) {} }; // ---------------------------------------------------------------------------------- struct Wrapper { TypeTextFix _text; }; int main() { { Wrapper wrapped; wrapped._text.setString(""123456789012"", NULL); } // if I add a write to stdout here, it does not crash oO { Wrapper wrapped; wrapped._text.setString(""123456789012"", NULL); // without this line (or any one), the program runs just fine! } } Compile & run g++ -O1 -Wall -Werror thebug.cpp && ./a.out pure virtual method called terminate called without an active exception Aborted (core dumped) This is actually minimal, if one removes any feature of this code, it runs correctly. 
Analyse The code snippet works fine when compiled with -O0 , BUT it still works fine when compiled with -O0 +flag for every flag of -O1 as defined on GnuCC documentation . A core dump is generated from which one can extract the backtrace: (gdb) bt #0 0x0000003f93e32625 in raise () from /lib64/libc.so.6 #1 0x0000003f93e33e05 in abort () from /lib64/libc.so.6 #2 0x0000003f98ebea7d in __gnu_cxx::__verbose_terminate_handler() () from /usr/lib64/libstdc++.so.6 #3 0x0000003f98ebcbd6 in ?? () from /usr/lib64/libstdc++.so.6 #4 0x0000003f98ebcc03 in std::terminate() () from /usr/lib64/libstdc++.so.6 #5 0x0000003f98ebd55f in __cxa_pure_virtual () from /usr/lib64/libstdc++.so.6 #6 0x00000000004007b6 in main () Feel free to ask for tests or details in the comments. Asked: Is it the actual code? Yes! it is! byte for byte. I've checked and rechecked. What exact version of GnuCC du you use? $ g++ --version g++ (GCC) 4.4.7 20120313 (Red Hat 4.4.7-16) Copyright (C) 2010 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Can we see the generated assembly? Yes, here it is on pastebin.com",""pure virtual function called" on gcc 4.4 but not on newer version or clang 3.4 I've got an MCVE which, on some of my machines crashes when compiled with g++ version 4.4.7 but does work with clang++ version 3.4.2 and g++ version 6.3. I'd like some help to know if it comes from undefined behavior or from an actual bug of this ancient version of gcc. 
Code #include class BaseType { public: BaseType() : _present( false ) {} virtual ~BaseType() {} virtual void clear() {} virtual void setString(const char* value, const char* fieldName) { _present = (*value != '\0'); } protected: virtual void setStrNoCheck(const char* value) = 0; protected: bool _present; }; // ---------------------------------------------------------------------------------- class TypeTextFix : public BaseType { public: virtual void clear() {} virtual void setString(const char* value, const char* fieldName) { clear(); BaseType::setString(value, fieldName); if( _present == false ) { return; // commenting this return fix the crash. Yes it does! } setStrNoCheck(value); } protected: virtual void setStrNoCheck(const char* value) {} }; // ---------------------------------------------------------------------------------- struct Wrapper { TypeTextFix _text; }; int main() { { Wrapper wrapped; wrapped._text.setString(""123456789012"", NULL); } // if I add a write to stdout here, it does not crash oO { Wrapper wrapped; wrapped._text.setString(""123456789012"", NULL); // without this line (or any one), the program runs just fine! } } Compile & run g++ -O1 -Wall -Werror thebug.cpp && ./a.out pure virtual method called terminate called without an active exception Aborted (core dumped) This is actually minimal, if one removes any feature of this code, it runs correctly. Analyse The code snippet works fine when compiled with -O0 , BUT it still works fine when compiled with -O0 +flag for every flag of -O1 as defined on GnuCC documentation . A core dump is generated from which one can extract the backtrace: (gdb) bt #0 0x0000003f93e32625 in raise () from /lib64/libc.so.6 #1 0x0000003f93e33e05 in abort () from /lib64/libc.so.6 #2 0x0000003f98ebea7d in __gnu_cxx::__verbose_terminate_handler() () from /usr/lib64/libstdc++.so.6 #3 0x0000003f98ebcbd6 in ?? 
() from /usr/lib64/libstdc++.so.6 #4 0x0000003f98ebcc03 in std::terminate() () from /usr/lib64/libstdc++.so.6 #5 0x0000003f98ebd55f in __cxa_pure_virtual () from /usr/lib64/libstdc++.so.6 #6 0x00000000004007b6 in main () Feel free to ask for tests or details in the comments. Asked: Is it the actual code? Yes! it is! byte for byte. I've checked and rechecked. What exact version of GnuCC du you use? $ g++ --version g++ (GCC) 4.4.7 20120313 (Red Hat 4.4.7-16) Copyright (C) 2010 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Can we see the generated assembly? Yes, here it is on pastebin.com","c++, g++, redhat, undefined-behavior",12,1269,2,https://stackoverflow.com/questions/41810222/pure-virtual-function-called-on-gcc-4-4-but-not-on-newer-version-or-clang-3-4 41467908,CentOS 7 + PHP7 -- php not rendering in browser,"I have a clean install of apache/httpd and php7.1.0 running on CentOS 7. When I execute from the command line: php -v I get the expected response: PHP 7.1.0 (cli) (built: Dec 1 2016 08:13:15) ( NTS ) Copyright (c) 1997-2016 The PHP Group Zend Engine v3.1.0-dev, Copyright (c) 1998-2016 Zend Technologies But when I try to hit my phpinfo.php page, all I get is... literally outputted to the screen - can someone tell me what I'm missing, did I forget to enable a mod?","CentOS 7 + PHP7 -- php not rendering in browser I have a clean install of apache/httpd and php7.1.0 running on CentOS 7. When I execute from the command line: php -v I get the expected response: PHP 7.1.0 (cli) (built: Dec 1 2016 08:13:15) ( NTS ) Copyright (c) 1997-2016 The PHP Group Zend Engine v3.1.0-dev, Copyright (c) 1998-2016 Zend Technologies But when I try to hit my phpinfo.php page, all I get is... 
literally outputted to the screen - can someone tell me what I'm missing, did I forget to enable a mod?","php, apache, centos, redhat, php-7",12,40015,6,https://stackoverflow.com/questions/41467908/centos-7-php7-php-not-rendering-in-browser 8267437,Amazon Linux vs Red Hat Linux,I have developed a web service(using ruby/sinatra/sqs) which runs on Linux Red Hat. I am planning to move this on a EC2 instance. I see that Amazon provides a linux version of its own. Is there any reason why I should use Amazon Linux on EC2 instead of Red Hat?,Amazon Linux vs Red Hat Linux I have developed a web service(using ruby/sinatra/sqs) which runs on Linux Red Hat. I am planning to move this on a EC2 instance. I see that Amazon provides a linux version of its own. Is there any reason why I should use Amazon Linux on EC2 instead of Red Hat?,"linux, amazon-ec2, redhat",12,10436,1,https://stackoverflow.com/questions/8267437/amazon-linux-vs-red-hat-linux 55363823,Redhat/CentOS - `GLIBC_2.18' not found,"I was trying to run redis server (on a CentOS server) with specific module: redis-server --loadmodule ./redisql_v0.9.1_x86_64.so and getting error: Module ./redisql_v0.9.1_x86_64.so failed to load: /lib64/libc.so.6: version `GLIBC_2.18' not found (required by ./redisql_v0.9.1_x86_64.so) this is the linux version: uname Linux cat /etc/*release CentOS Linux release 7.6.1810 (Core) NAME=""CentOS Linux"" VERSION=""7 (Core)"" ID=""centos"" ID_LIKE=""rhel fedora"" VERSION_ID=""7"" PRETTY_NAME=""CentOS Linux 7 (Core)"" ANSI_COLOR=""0;31"" CPE_NAME=""cpe:/o:centos:centos:7"" HOME_URL=""[URL] BUG_REPORT_URL=""[URL] CENTOS_MANTISBT_PROJECT=""CentOS-7"" CENTOS_MANTISBT_PROJECT_VERSION=""7"" REDHAT_SUPPORT_PRODUCT=""centos"" REDHAT_SUPPORT_PRODUCT_VERSION=""7"" CentOS Linux release 7.6.1810 (Core) CentOS Linux release 7.6.1810 (Core) Also this is what is showing for /lib64/libc.so.6 : /lib64/libc.so.6 GNU C Library (GNU libc) stable release version 2.17, by Roland McGrath et al. 
Copyright (C) 2012 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Compiled by GNU CC version 4.8.5 20150623 (Red Hat 4.8.5-36). Compiled on a Linux 3.10.0 system on 2019-01-29. Available extensions: The C stubs add-on version 2.1.2. crypt add-on version 2.1 by Michael Glad and others GNU Libidn by Simon Josefsson Native POSIX Threads Library by Ulrich Drepper et al BIND-8.2.3-T5B RT using linux kernel aio libc ABIs: UNIQUE IFUNC For bug reporting instructions, please see: <[URL] Also: rpm -qa | grep glibc glibc-common-2.17-260.el7_6.3.x86_64 glibc-devel-2.17-260.el7_6.3.x86_64 glibc-2.17-260.el7_6.3.x86_64 glibc-headers-2.17-260.el7_6.3.x86_64 Tried as well: yum install glibc* -y Loaded plugins: fastestmirror, ovl Loading mirror speeds from cached hostfile * base: repos-va.psychz.net * extras: repos-va.psychz.net * updates: repos-va.psychz.net Package glibc-devel-2.17-260.el7_6.3.x86_64 already installed and latest version Package glibc-utils-2.17-260.el7_6.3.x86_64 already installed and latest version Package glibc-2.17-260.el7_6.3.x86_64 already installed and latest version Package glibc-headers-2.17-260.el7_6.3.x86_64 already installed and latest version Package glibc-static-2.17-260.el7_6.3.x86_64 already installed and latest version Package glibc-common-2.17-260.el7_6.3.x86_64 already installed and latest version Nothing to do What is the process of installing/setting GLIBC_2.18 on Centos/Redhat servers? 
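On the GLIBC_2.18 question: before rebuilding anything, it helps to compare what libc the system provides against what the module demands (the symbol versions are baked into the binary, so `yum install glibc*` on CentOS 7 can never supply 2.18). A sketch of the check; the `.so` name comes from the question and the objdump line is commented because that file only exists in that environment:

```shell
# What the running system's libc reports (output like "glibc 2.17"):
getconf GNU_LIBC_VERSION
# Which GLIBC symbol versions the module actually requires (run where the
# file from the question exists):
#   objdump -T ./redisql_v0.9.1_x86_64.so | grep GLIBC_2.18
```

If the module needs a newer libc than the distribution ships, the realistic options are a build of the module against the older glibc or a newer distribution/container, not an in-place glibc upgrade.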
Thanks..","Redhat/CentOS - `GLIBC_2.18' not found I was trying to run redis server (on a CentOS server) with specific module: redis-server --loadmodule ./redisql_v0.9.1_x86_64.so and getting error: Module ./redisql_v0.9.1_x86_64.so failed to load: /lib64/libc.so.6: version `GLIBC_2.18' not found (required by ./redisql_v0.9.1_x86_64.so) this is the linux version: uname Linux cat /etc/*release CentOS Linux release 7.6.1810 (Core) NAME=""CentOS Linux"" VERSION=""7 (Core)"" ID=""centos"" ID_LIKE=""rhel fedora"" VERSION_ID=""7"" PRETTY_NAME=""CentOS Linux 7 (Core)"" ANSI_COLOR=""0;31"" CPE_NAME=""cpe:/o:centos:centos:7"" HOME_URL=""[URL] BUG_REPORT_URL=""[URL] CENTOS_MANTISBT_PROJECT=""CentOS-7"" CENTOS_MANTISBT_PROJECT_VERSION=""7"" REDHAT_SUPPORT_PRODUCT=""centos"" REDHAT_SUPPORT_PRODUCT_VERSION=""7"" CentOS Linux release 7.6.1810 (Core) CentOS Linux release 7.6.1810 (Core) Also this is what is showing for /lib64/libc.so.6 : /lib64/libc.so.6 GNU C Library (GNU libc) stable release version 2.17, by Roland McGrath et al. Copyright (C) 2012 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Compiled by GNU CC version 4.8.5 20150623 (Red Hat 4.8.5-36). Compiled on a Linux 3.10.0 system on 2019-01-29. Available extensions: The C stubs add-on version 2.1.2. 
crypt add-on version 2.1 by Michael Glad and others GNU Libidn by Simon Josefsson Native POSIX Threads Library by Ulrich Drepper et al BIND-8.2.3-T5B RT using linux kernel aio libc ABIs: UNIQUE IFUNC For bug reporting instructions, please see: <[URL] Also: rpm -qa | grep glibc glibc-common-2.17-260.el7_6.3.x86_64 glibc-devel-2.17-260.el7_6.3.x86_64 glibc-2.17-260.el7_6.3.x86_64 glibc-headers-2.17-260.el7_6.3.x86_64 Tried as well: yum install glibc* -y Loaded plugins: fastestmirror, ovl Loading mirror speeds from cached hostfile * base: repos-va.psychz.net * extras: repos-va.psychz.net * updates: repos-va.psychz.net Package glibc-devel-2.17-260.el7_6.3.x86_64 already installed and latest version Package glibc-utils-2.17-260.el7_6.3.x86_64 already installed and latest version Package glibc-2.17-260.el7_6.3.x86_64 already installed and latest version Package glibc-headers-2.17-260.el7_6.3.x86_64 already installed and latest version Package glibc-static-2.17-260.el7_6.3.x86_64 already installed and latest version Package glibc-common-2.17-260.el7_6.3.x86_64 already installed and latest version Nothing to do What is the process of installing/setting GLIBC_2.18 on Centos/Redhat servers? Thanks..","linux, redis, centos, redhat, glibc",12,44285,2,https://stackoverflow.com/questions/55363823/redhat-centos-glibc-2-18-not-found 49369065,RHEL: This system is currently not set up to build kernel modules,"I am trying to install virtualbox5.2 on a RHEL 7 VM When I try to rebuild kernels modules I get the following error: [root@myserver~]# /usr/lib/virtualbox/vboxdrv.sh setup vboxdrv.sh: Stopping VirtualBox services. vboxdrv.sh: Building VirtualBox kernel modules. This system is currently not set up to build kernel modules. Please install the Linux kernel ""header"" files matching the current kernel for adding new hardware support to the system. 
The distribution packages containing the headers are probably: kernel-devel kernel-devel-3.10.0-693.11.1.el7.x86_64 I tried install kernet-devel and got success message Installed: kernel-devel.x86_64 0:3.10.0-693.21.1.el7 Complete! But still the setup fails. Any idea what is missing here?","RHEL: This system is currently not set up to build kernel modules I am trying to install virtualbox5.2 on a RHEL 7 VM When I try to rebuild kernels modules I get the following error: [root@myserver~]# /usr/lib/virtualbox/vboxdrv.sh setup vboxdrv.sh: Stopping VirtualBox services. vboxdrv.sh: Building VirtualBox kernel modules. This system is currently not set up to build kernel modules. Please install the Linux kernel ""header"" files matching the current kernel for adding new hardware support to the system. The distribution packages containing the headers are probably: kernel-devel kernel-devel-3.10.0-693.11.1.el7.x86_64 I tried install kernet-devel and got success message Installed: kernel-devel.x86_64 0:3.10.0-693.21.1.el7 Complete! But still the setup fails. Any idea what is missing here?","virtualbox, redhat, rhel7",11,61669,9,https://stackoverflow.com/questions/49369065/rhel-this-system-is-currently-not-set-up-to-build-kernel-modules 8254705,Redhat Linux - change directory color,"I am using Redhat Linux and the problem I am facing is that the ""blue"" colour of the directories is hardly visible on the black background. I found some posts on the web which asks to change some settings in the file /etc/profile.d/colorls.sh and /etc/profile.d/colorls.csh . However, this will change the colour settings for everyone who logs into the system. Could someone please let me know how I can change the colour settings that will affect only me?","Redhat Linux - change directory color I am using Redhat Linux and the problem I am facing is that the ""blue"" colour of the directories is hardly visible on the black background. 
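On the VirtualBox kernel-modules question just above: the usual culprit is that the installed kernel-devel (3.10.0-693.21.1 in the question) is newer than the running kernel (3.10.0-693.11.1), and the module build needs headers matching the running kernel exactly. A sketch of the check; the yum/rpm lines are commented since they need root on the RHEL box:

```shell
# The headers must match the *running* kernel version exactly:
uname -r
# On the RHEL box, compare and fix (needs root):
#   rpm -q kernel-devel
#   yum install "kernel-devel-$(uname -r)"   # or reboot into the newest kernel
```

After either installing the matching kernel-devel or rebooting into the kernel that matches the installed headers, `/usr/lib/virtualbox/vboxdrv.sh setup` can find what it needs.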
I found some posts on the web which asks to change some settings in the file /etc/profile.d/colorls.sh and /etc/profile.d/colorls.csh . However, this will change the colour settings for everyone who logs into the system. Could someone please let me know how I can change the colour settings that will affect only me?","linux, bash, shell, unix, redhat",11,31537,4,https://stackoverflow.com/questions/8254705/redhat-linux-change-directory-color 52417318,Why does the free() function not return memory to the operating system?,"When I use the top terminal program at Linux, I can't see the result of free. My expectation is: free map and list. The memory usage that I can see at the top(Linux function) or /proc/meminfo get smaller than past. sleep is start. program exit. But The usage of memory only gets smaller when the program ends. Would you explain the logic of free function? Below is my code. for(mapIter = bufMap->begin(); mapIter != bufMap -> end();mapIter++) { list *buffList = mapIter->second; list::iterator listIter; for(listIter = buffList->begin(); listIter != buffList->end();listIter++) { free(listIter->argu1); free(listIter->argu2); free(listIter->argu3); } delete buffList; } delete bufMap; printf(""Free Complete!\n""); sleep(10); printf(""endend\n""); Thanks you.","Why does the free() function not return memory to the operating system? When I use the top terminal program at Linux, I can't see the result of free. My expectation is: free map and list. The memory usage that I can see at the top(Linux function) or /proc/meminfo get smaller than past. sleep is start. program exit. But The usage of memory only gets smaller when the program ends. Would you explain the logic of free function? Below is my code. 
for(mapIter = bufMap->begin(); mapIter != bufMap -> end();mapIter++) { list *buffList = mapIter->second; list::iterator listIter; for(listIter = buffList->begin(); listIter != buffList->end();listIter++) { free(listIter->argu1); free(listIter->argu2); free(listIter->argu3); } delete buffList; } delete bufMap; printf(""Free Complete!\n""); sleep(10); printf(""endend\n""); Thanks you.","c++, linux, redhat",11,5560,2,https://stackoverflow.com/questions/52417318/why-does-the-free-function-not-return-memory-to-the-operating-system 27491467,How to read file through ssh/scp directly,I have a program written in C/C++ that reads two files and then generate some reports. The typical workflow is as follows: 1> scp user@server01:/temp/file1.txt ~/ then input my password for the prompty 2> my_program file1.txt localfile.txt Is there a way that I can let my program directly handle the remote file without explicitly copying the file to local first? I have tried the following command but it doesn't work for me. > my_program <(ssh user@server01:/temp/file1.txt) localfile.txt,How to read file through ssh/scp directly I have a program written in C/C++ that reads two files and then generate some reports. The typical workflow is as follows: 1> scp user@server01:/temp/file1.txt ~/ then input my password for the prompty 2> my_program file1.txt localfile.txt Is there a way that I can let my program directly handle the remote file without explicitly copying the file to local first? I have tried the following command but it doesn't work for me. > my_program <(ssh user@server01:/temp/file1.txt) localfile.txt,"linux, redhat",11,40856,2,https://stackoverflow.com/questions/27491467/how-to-read-file-through-ssh-scp-directly 44159793,Trusted Root Certificates in DotNet Core on Linux (RHEL 7.1),"I'm currently deploying a .net-core web-api to an docker container on rhel 7.1. 
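On the ssh/scp question above: the `<(...)` attempt fails only because `user@server01:/temp/file1.txt` is scp syntax, not something ssh can run; asking ssh to execute `cat` on the remote path makes process substitution work. Since neither `my_program` nor that server exists outside the question's environment, the sketch below uses local stand-ins (bash is required for `<(...)`):

```shell
# Real shape (host/paths are the question's, shown for reference):
#   my_program <(ssh user@server01 'cat /temp/file1.txt') localfile.txt
# Local stand-in: a function playing my_program, printf playing the ssh stream.
my_program() { cat "$1"; }
my_program <(printf 'remote line\n')
```

`<(command)` gives the program a readable pseudo-file fed by the command's stdout, so the program never knows the data came over ssh; the only catch is that such a stream is not seekable, which matters if the program rewinds its input.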
Everything works as expected, but from my application I need to call other services via https and those hosts use certificates signed by self-maintained root certificates. In this constellation I get ssl-errors while calling this services (ssl-not valid) and therefore I need to install this root-certificate in the docker-container or somehow use the root-certificate in the .net-core application. How can this be done? Is there a best practice to handle this situation? Will .net-core access the right keystore on the rhel-system?","Trusted Root Certificates in DotNet Core on Linux (RHEL 7.1) I'm currently deploying a .net-core web-api to an docker container on rhel 7.1. Everything works as expected, but from my application I need to call other services via https and those hosts use certificates signed by self-maintained root certificates. In this constellation I get ssl-errors while calling this services (ssl-not valid) and therefore I need to install this root-certificate in the docker-container or somehow use the root-certificate in the .net-core application. How can this be done? Is there a best practice to handle this situation? Will .net-core access the right keystore on the rhel-system?","ssl, ssl-certificate, .net-core, redhat, root-certificate",11,11423,1,https://stackoverflow.com/questions/44159793/trusted-root-certificates-in-dotnet-core-on-linux-rhel-7-1 11228078,How do I get libpam.so.0 (32 bit) on my 64bit RHEL6?,"I am trying to install DB2 Enterprise Server on my RHEL6 machine. Unfortunately, it seems that it needs the 32bit version of libpam.so.0 for some routines. The machine runs the 64 bit version which seems to have the lib installed... I assume it's the 64 version. Is there any way to get and install the 32 bit version to be used by the DB2 installer?","How do I get libpam.so.0 (32 bit) on my 64bit RHEL6? I am trying to install DB2 Enterprise Server on my RHEL6 machine. 
Unfortunately, it seems that it needs the 32bit version of libpam.so.0 for some routines. The machine runs the 64 bit version which seems to have the lib installed... I assume it's the 64 version. Is there any way to get and install the 32 bit version to be used by the DB2 installer?","linux, db2, redhat",11,72131,3,https://stackoverflow.com/questions/11228078/how-do-i-get-libpam-so-0-32-bit-on-my-64bit-rhel6 20792829,How to check recently installed rpms?,I am trying to find some recently installed rpms on my RedHat Linux system. Does RPM provide any way to do this? I have tried # rpm -qa But it only provides installed rpms. What are the options available for this?,How to check recently installed rpms? I am trying to find some recently installed rpms on my RedHat Linux system. Does RPM provide any way to do this? I have tried # rpm -qa But it only provides installed rpms. What are the options available for this?,"linux, centos, redhat, rpm",11,14893,2,https://stackoverflow.com/questions/20792829/how-to-check-recently-installed-rpms 21683138,Unable to install rgdal and rgeos R libraries on Red hat linux,"I have an error while compiling the rgdal and rgeos packages on our redhat linux machine. I tried to do some research but couldn't find a possible solution. Could you please help me with this as this is very important for me to solve. **ERROR WHILE COMPILING RGDAL in R 3.0** **strong text** * installing *source* package 'rgdal' ... ** package 'rgdal' successfully unpacked and MD5 sums checked configure: CC: gcc -std=gnu99 configure: CXX: g++ configure: rgdal: 0.8-10 checking for /usr/bin/svnversion... yes configure: svn revision: 496 configure: gdal-config: gdal-config checking gdal-config usability... ./configure: line 1397: gdal-config: command not found no Error: gdal-config not found The gdal-config script distributed with GDAL could not be found.
If you have not installed the GDAL libraries, you can download the source from [URL] If you have installed the GDAL libraries, then make sure that gdal-config is in your path. Try typing gdal-config at a shell prompt and see if it runs. If not, use: --configure-args='--with-gdal-config=/usr/local/bin/gdal-config' with appropriate values for your installation. ERROR: configuration failed for package 'rgdal' *****ERROR WHILE COMPILING RGEOS:***** **strong text** * installing *source* package 'rgeos' ... ** package 'rgeos' successfully unpacked and MD5 sums checked configure: CC: gcc -std=gnu99 configure: CXX: g++ configure: rgeos: 0.2-17 checking for /usr/bin/svnversion... yes configure: svn revision: 413M checking geos-config usability... ./configure: line 1385: geos-config: command not found no configure: error: geos-config not usable ERROR: configuration failed for package 'rgeos'","Unable to install rgdal and rgeos R libraries on Red hat linux I have an error while compiling the rgdal and rgeos packages on our redhat linux machine. I tried to do some research but couldn't find a possible solution. Could you please help me with this as this is very important for me to solve. **ERROR WHILE COMPILING RGDAL in R 3.0** **strong text** * installing *source* package 'rgdal' ... ** package 'rgdal' successfully unpacked and MD5 sums checked configure: CC: gcc -std=gnu99 configure: CXX: g++ configure: rgdal: 0.8-10 checking for /usr/bin/svnversion... yes configure: svn revision: 496 configure: gdal-config: gdal-config checking gdal-config usability... ./configure: line 1397: gdal-config: command not found no Error: gdal-config not found The gdal-config script distributed with GDAL could not be found. If you have not installed the GDAL libraries, you can download the source from [URL] If you have installed the GDAL libraries, then make sure that gdal-config is in your path. Try typing gdal-config at a shell prompt and see if it runs.
If not, use: --configure-args='--with-gdal-config=/usr/local/bin/gdal-config' with appropriate values for your installation. ERROR: configuration failed for package 'rgdal' *****ERROR WHILE COMPILING RGEOS:***** **strong text** * installing *source* package 'rgeos' ... ** package 'rgeos' successfully unpacked and MD5 sums checked configure: CC: gcc -std=gnu99 configure: CXX: g++ configure: rgeos: 0.2-17 checking for /usr/bin/svnversion... yes configure: svn revision: 413M checking geos-config usability... ./configure: line 1385: geos-config: command not found no configure: error: geos-config not usable ERROR: configuration failed for package 'rgeos'","r, redhat, geos, rgdal",11,8755,2,https://stackoverflow.com/questions/21683138/unable-to-install-rgdal-and-rgeos-r-libraries-on-red-hat-linux 12961336,I am unable to run a C++ program in Debian(Ubuntu) that works in Redhat(Centos),"TLDR: Having trouble compiling a C++ program that worked in Centos Redhat in Ubuntu Debian. Is there anything I should be aware of between these two that would make a C++ program compiled using the same compiler not work? Hello, I'm trying to compile and run Germline ([URL] It works fine in RedHat Centos, but because Centos isn't as supported as Ubuntu is for most things I switched. And now this program does not work. It's entirely possible it's using some kind of RedHat only functionality, but I'm using the same compiler (g++) to compile it in both environments. I've been pulling my hair out just trying to get this thing to work on Ubuntu as it is much nicer to work with, but as of now when I ""make all"" the project in ubuntu it will compile and the tests spin(Don't ever finish) forever. No matter what binaries I use (Compiled in Centos and copied, the failed test binaries I just mentioned etc), the program just always freezes. Kinda long, sorry. My main question is this: Are there any other C++ compiler alternatives I can try? Are there any Red-hat C++ libraries I might be missing?
Or major differences in their C++ implementations that might cause this?","I am unable to run a C++ program in Debian(Ubuntu) that works in Redhat(Centos) TLDR: Having trouble compiling a C++ program that worked in Centos Redhat in Ubuntu Debian. Is there anything I should be aware of between these two that would make a C++ program compiled using the same compiler not work? Hello, I'm trying to compile and run Germline ([URL] It works fine in RedHat Centos, but because Centos isn't as supported as Ubuntu is for most things I switched. And now this program does not work. It's entirely possible it's using some kind of RedHat only functionality, but I'm using the same compiler (g++) to compile it in both environments. I've been pulling my hair out just trying to get this thing to work on Ubuntu as it is much nicer to work with, but as of now when I ""make all"" the project in ubuntu it will compile and the tests spin(Don't ever finish) forever. No matter what binaries I use (Compiled in Centos and copied, the failed test binaries I just mentioned etc), the program just always freezes. Kinda long, sorry. My main question is this: Are there any other C++ compiler alternatives I can try? Are there any Red-hat C++ libraries I might be missing? Or major differences in their C++ implementations that might cause this?","c++, ubuntu, centos, debian, redhat",11,3089,5,https://stackoverflow.com/questions/12961336/i-am-unable-to-run-a-c-program-in-debianubuntu-that-works-in-redhatcentos 55457902,Keycloak Customization to run custom java in authentication flow,"Please let me know if this is not the right place to post, but I have been looking all over for information regarding this and can't seem to find a concise answer. I have been attempting to use keycloak to meet our application's user management requirements. While I have found keycloak to be very capable and quite effective, I have run into what may be a dead end for our usage.
Background: Traditionally, our application has used a very basic login framework that would verify the authentication. Then using a third party application, that we cannot change, identify the roles that user would have via a wsdl operation and insert into our application's database. For example, if we verify the user John Doe exists and authenticate his credentials, we call the wsdl in our java code to get what roles that user should have (super user, guest, regular user). Obviously this entire framework is pretty flawed and at the end of the day, this is why we've chosen to use keycloak. Problem Unfortunately, as I mentioned we cannot change the third party application, and we must get user role mappings from this wsdl operation. I know there is a way to create/modify keycloak's users and roles via java functions. However, in order to keep this architecture modular is there a way to configure the authentication flow to reach out to this WSDL on keycloak's side for role mapping? (i.e. not in the application code but maybe in a scriptlet in the authentication flow) What I am looking for is essentially how to configure the authentication flow to run something as simple as ""hello world"" in java after the credentials are verified but before access is granted. Not sure if the Authentication SPI could be used","Keycloak Customization to run custom java in authentication flow Please let me know if this is not the right place to post, but I have been looking all over for information regarding this and can't seem to find a concise answer. I have been attempting to use keycloak to meet our application's user management requirements. While I have found keycloak to be very capable and quite effective, I have run into what may be a dead end for our usage. Background: Traditionally, our application has used a very basic login framework that would verify the authentication.
Then using a third party application, that we cannot change, identify the roles that user would have via a wsdl operation and insert into our application's database. For example, if we verify the user John Doe exists and authenticate his credentials, we call the wsdl in our java code to get what roles that user should have (super user, guest, regular user). Obviously this entire framework is pretty flawed and at the end of the day, this is why we've chosen to use keycloak. Problem Unfortunately, as I mentioned we cannot change the third party application, and we must get user role mappings from this wsdl operation. I know there is a way to create/modify keycloak's users and roles via java functions. However, in order to keep this architecture modular is there a way to configure the authentication flow to reach out to this WSDL on keycloak's side for role mapping? (i.e. not in the application code but maybe in a scriptlet in the authentication flow) What I am looking for is essentially how to configure the authentication flow to run something as simple as ""hello world"" in java after the credentials are verified but before access is granted. Not sure if the Authentication SPI could be used","java, security, architecture, redhat, keycloak",11,8718,2,https://stackoverflow.com/questions/55457902/keycloak-customization-to-run-custom-java-in-authentication-flow 71089827,Is there any easy way to convert (CRD) CustomResourceDefinition to json schema?,"Developing CRDs for Kubernetes, using VScode as an IDE. Want to provide autocompletion and Intellisense in IDE. It needs a JSON schema to do so. I have a huge number of CRDs to support. I want to do it in an easy way to convert CRDs to JSON schema.","Is there any easy way to convert (CRD) CustomResourceDefinition to json schema? Developing CRDs for Kubernetes, using VScode as an IDE. Want to provide autocompletion and Intellisense in IDE. It needs a JSON schema to do so. I have a huge number of CRDs to support.
I want to do it in an easy way to convert CRDs to JSON schema.","kubernetes, visual-studio-code, redhat",11,6664,2,https://stackoverflow.com/questions/71089827/is-there-any-easy-way-to-convert-crd-customresourcedefinition-to-json-schema 20992356,GDB jumps to wrong lines in out of order fashion,"Application Setup : I've C++11 application consuming the following 3rd party libraries : boost 1.51.0 cppnetlib 0.9.4 jsoncpp 0.5.0 The application code relies on several in-house shared objects, all of them developed by my team (classical link time against those shared objects is carried out, no usage of dlopen etc.) I'm using GCC 4.6.2 and the issue appears when using GDB 7.4 and 7.6. OS - Red Hat Linux release 7.0 (Guinness) x86-64 The issue While hitting breakpoints within the shared objects code, and issuing gdb next command, sometimes GDB jumps backward to certain lines w/o any plausible reason (especially after exceptions are thrown, for those exceptions there suitable catch blocks) Similar issues in the web are answered in something along the lines 'turn off any GCC optimization) but my GCC CL clearly doesn't use any optimization and asked to have debug information, pls note the -O0 & -g switches : COLLECT_GCC_OPTIONS= '-D' '_DEBUG' '-O0' '-g' '-Wall' '-fmessage-length=0' '-v' '-fPIC' '-D' 'BOOST_ALL_DYN_LINK' '-D' 'BOOST_PARAMETER_MAX_ARITY=15' '-D' '_GLIBCXX_USE_NANOSLEEP' '-Wno-deprecated' '-std=c++0x' '-fvisibility=hidden' '-c' '-MMD' '-MP' '-MF' 'Debug_x64/AgentRegisterer.d' '-MT' 'Debug_x64/AgentRegisterer.d' '-MT' 'Debug_x64/AgentRegisterer.o' '-o' 'Debug_x64/AgentRegisterer.o' '-shared-libgcc' '-mtune=generic' '-march=x86-64' Please also note as per Linux DSO best known methods , we have hidden visibility of symbols, only classes we would like to expose are being exposed (maybe this is related ???) 
What should be the next steps in root causing this issue ?","GDB jumps to wrong lines in out of order fashion Application Setup : I've C++11 application consuming the following 3rd party libraries : boost 1.51.0 cppnetlib 0.9.4 jsoncpp 0.5.0 The application code relies on several in-house shared objects, all of them developed by my team (classical link time against those shared objects is carried out, no usage of dlopen etc.) I'm using GCC 4.6.2 and the issue appears when using GDB 7.4 and 7.6. OS - Red Hat Linux release 7.0 (Guinness) x86-64 The issue While hitting breakpoints within the shared objects code, and issuing gdb next command, sometimes GDB jumps backward to certain lines w/o any plausible reason (especially after exceptions are thrown, for those exceptions there suitable catch blocks) Similar issues in the web are answered in something along the lines 'turn off any GCC optimization) but my GCC CL clearly doesn't use any optimization and asked to have debug information, pls note the -O0 & -g switches : COLLECT_GCC_OPTIONS= '-D' '_DEBUG' '-O0' '-g' '-Wall' '-fmessage-length=0' '-v' '-fPIC' '-D' 'BOOST_ALL_DYN_LINK' '-D' 'BOOST_PARAMETER_MAX_ARITY=15' '-D' '_GLIBCXX_USE_NANOSLEEP' '-Wno-deprecated' '-std=c++0x' '-fvisibility=hidden' '-c' '-MMD' '-MP' '-MF' 'Debug_x64/AgentRegisterer.d' '-MT' 'Debug_x64/AgentRegisterer.d' '-MT' 'Debug_x64/AgentRegisterer.o' '-o' 'Debug_x64/AgentRegisterer.o' '-shared-libgcc' '-mtune=generic' '-march=x86-64' Please also note as per Linux DSO best known methods , we have hidden visibility of symbols, only classes we would like to expose are being exposed (maybe this is related ???) 
What should be the next steps in root causing this issue ?","c++, c++11, gdb, g++, redhat",11,4457,3,https://stackoverflow.com/questions/20992356/gdb-jumps-to-wrong-lines-in-out-of-order-fashion 54470463,Is there a specification for the YUM metadata?,"I'm trying to find a trusted point of truth for the following yum metadata files: primary.xml.gz filelists.xml.gz other.xml.gz repomd.gz groups.xml.gz I've been looking around the Internet, but I haven't found a definitive reference, or guide. Is there a concrete specification, or RFC for this, or is this open for interpretation and implementation? I've come across these useful links: Anatomy of YUM Repositories: A Look Under The Hood YUM Repository And Package Management: Complete Tutorial openSUSE: Standards RPM Metadata But I haven't managed to find an actual specification for this. Does anybody know if there is one, or where to find more details?","Is there a specification for the YUM metadata? I'm trying to find a trusted point of truth for the following yum metadata files: primary.xml.gz filelists.xml.gz other.xml.gz repomd.gz groups.xml.gz I've been looking around the Internet, but I haven't found a definitive reference, or guide. Is there a concrete specification, or RFC for this, or is this open for interpretation and implementation? I've come across these useful links: Anatomy of YUM Repositories: A Look Under The Hood YUM Repository And Package Management: Complete Tutorial openSUSE: Standards RPM Metadata But I haven't managed to find an actual specification for this. Does anybody know if there is one, or where to find more details?","redhat, rpm, yum",11,1222,0,https://stackoverflow.com/questions/54470463/is-there-a-specification-for-the-yum-metadata 36545206,How to install specific version of Docker on Centos?,I tried to install docker 1.8.2 on Centos7. The docs don't tell anything about versioning. Someone who can help me? 
I tried wget -qO- [URL] | sed 's/lxc-docker/lxc-docker-1.8.2/' | sh + sh -c 'sleep 3; yum -y -q install docker-engine' but it didn't work. EDIT: I performed: yum install -y [URL] That works but I miss options such as docker-storage-setup and docker-fetch,How to install specific version of Docker on Centos? I tried to install docker 1.8.2 on Centos7. The docs don't tell anything about versioning. Someone who can help me? I tried wget -qO- [URL] | sed 's/lxc-docker/lxc-docker-1.8.2/' | sh + sh -c 'sleep 3; yum -y -q install docker-engine' but it didn't work. EDIT: I performed: yum install -y [URL] That works but I miss options such as docker-storage-setup and docker-fetch,"docker, centos, redhat",10,45907,5,https://stackoverflow.com/questions/36545206/how-to-install-specific-version-of-docker-on-centos 54034302,Creating mailbox file: File exists,I added user through command adduser satya I deleted the same user by userdel satya When I tried adding again useradd satya I got the following error: Creating mailbox file: File exists,Creating mailbox file: File exists I added user through command adduser satya I deleted the same user by userdel satya When I tried adding again useradd satya I got the following error: Creating mailbox file: File exists,"linux, redhat",10,22646,2,https://stackoverflow.com/questions/54034302/creating-mailbox-file-file-exists 45569367,Upgrade RHEL from 7.3 to 7.4: ArrayIndexOutOfBoundsException in sun.font.CompositeStrike.getStrikeForSlot,"We just upgraded a server from RHEL v7.3 to v7.4.
This simple program works in RHEL v7.3 and fails in v7.4 public class TestJava { public static void main(String[] args) { Font font = new Font(""SansSerif"", Font.PLAIN, 12); FontRenderContext frc = new FontRenderContext(null, false, false); TextLayout layout = new TextLayout(""\ude00"", font, frc); layout.getCaretShapes(0); System.out.println(layout); } } The exception in RHEL 7.4 is: Exception in thread ""main"" java.lang.ArrayIndexOutOfBoundsException: 0 at sun.font.CompositeStrike.getStrikeForSlot(CompositeStrike.java:75) at sun.font.CompositeStrike.getFontMetrics(CompositeStrike.java:93) at sun.font.Font2D.getFontMetrics(Font2D.java:415) at java.awt.Font.defaultLineMetrics(Font.java:2176) at java.awt.Font.getLineMetrics(Font.java:2283) at java.awt.font.TextLayout.fastInit(TextLayout.java:598) at java.awt.font.TextLayout.(TextLayout.java:393) The result on RHEL v7.3 is: sun.font.StandardTextSource@7ba4f24f[start:0, len:1, cstart:0, clen:1, chars:""de00"", level:0, flags:0, font:java.awt.Font[family=SansSerif,name=SansSerif,style=plain,size=12], frc:java.awt.font.FontRenderContext@c14b833b, cm:sun.font.CoreMetrics@412ae196] The update of RHEL v7.4 includes an update of openjdk from 1.8.0.131 to 1.8.0.141 but this does not seem to be related to the version of openjdk, as the problem is the same with the IBM JDK coming with WebSphere v9.0 ( v1.8.0 SR4 FP6 ). With the same version of the IBM JDK on a RHEL v7.3 and RHEL v7.4 server, the program works in RH 7.3 and fails in RH 7.4 the same way as with openjdk Any idea what's going on?","Upgrade RHEL from 7.3 to 7.4: ArrayIndexOutOfBoundsException in sun.font.CompositeStrike.getStrikeForSlot We just upgraded a server from RHEL v7.3 to v7.4.
This simple program works in RHEL v7.3 and fails in v7.4 public class TestJava { public static void main(String[] args) { Font font = new Font(""SansSerif"", Font.PLAIN, 12); FontRenderContext frc = new FontRenderContext(null, false, false); TextLayout layout = new TextLayout(""\ude00"", font, frc); layout.getCaretShapes(0); System.out.println(layout); } } The exception in RHEL 7.4 is: Exception in thread ""main"" java.lang.ArrayIndexOutOfBoundsException: 0 at sun.font.CompositeStrike.getStrikeForSlot(CompositeStrike.java:75) at sun.font.CompositeStrike.getFontMetrics(CompositeStrike.java:93) at sun.font.Font2D.getFontMetrics(Font2D.java:415) at java.awt.Font.defaultLineMetrics(Font.java:2176) at java.awt.Font.getLineMetrics(Font.java:2283) at java.awt.font.TextLayout.fastInit(TextLayout.java:598) at java.awt.font.TextLayout.(TextLayout.java:393) The result on RHEL v7.3 is: sun.font.StandardTextSource@7ba4f24f[start:0, len:1, cstart:0, clen:1, chars:""de00"", level:0, flags:0, font:java.awt.Font[family=SansSerif,name=SansSerif,style=plain,size=12], frc:java.awt.font.FontRenderContext@c14b833b, cm:sun.font.CoreMetrics@412ae196] The update of RHEL v7.4 includes an update of openjdk from 1.8.0.131 to 1.8.0.141 but this does not seem to be related to the version of openjdk, as the problem is the same with the IBM JDK coming with WebSphere v9.0 ( v1.8.0 SR4 FP6 ). With the same version of the IBM JDK on a RHEL v7.3 and RHEL v7.4 server, the program works in RH 7.3 and fails in RH 7.4 the same way as with openjdk Any idea what's going on?","awt, redhat, java, ibm-jdk",10,22372,4,https://stackoverflow.com/questions/45569367/upgrade-rhel-from-7-3-to-7-4-arrayindexoutofboundsexception-in-sun-font-composi 22014397,C - Implicit declaration of the function "pthread_timedjoin_np",I am porting a windows library to linux. I need to use timed join to wait for the thread to join in a specific timeout.
When I compile the library on Linux I am getting the warning Implicit declaration of the function - pthread_timedjoin_np I have included pthread.h and have compiled with -lpthread link. I know that pthread_timedjoin_np is a non-standard GNU function. The function first appeared in glibc in version 2.3.3. and somewhere in BCD v6. I even checked the Man Page for Linux but got no help. How do I avoid this warning? Any help? Edit-1: My system is RedHat 5.,C - Implicit declaration of the function "pthread_timedjoin_np" I am porting a windows library to linux. I need to use timed join to wait for the thread to join in a specific timeout. When I compile the library on Linux I am getting the warning Implicit declaration of the function - pthread_timedjoin_np I have included pthread.h and have compiled with -lpthread link. I know that pthread_timedjoin_np is a non-standard GNU function. The function first appeared in glibc in version 2.3.3. and somewhere in BCD v6. I even checked the Man Page for Linux but got no help. How do I avoid this warning? Any help? Edit-1: My system is RedHat 5.,"c, linux, multithreading, redhat, porting",10,13396,1,https://stackoverflow.com/questions/22014397/c-implicit-declaration-of-the-function-pthread-timedjoin-np 18766930,Resolve GCC error when installing python-ldap on Redhat Enterprise Server,"Python-LDAP + Redhat = Gnashing of Teeth Recently, I spent a few hours tearing my hair (or what's left of it) out attempting to install python-ldap (via pip) onto a Redhat Enterprise server. 
Here's the error message that I would get (look familiar?): Modules/constants.c:365: error: ‘LDAP_CONTROL_RELAX’ undeclared (first use in this function) error: command 'gcc' failed with exit status 1 If only there was someone out there that could help me!","Resolve GCC error when installing python-ldap on Redhat Enterprise Server Python-LDAP + Redhat = Gnashing of Teeth Recently, I spent a few hours tearing my hair (or what's left of it) out attempting to install python-ldap (via pip) onto a Redhat Enterprise server. Here's the error message that I would get (look familiar?): Modules/constants.c:365: error: ‘LDAP_CONTROL_RELAX’ undeclared (first use in this function) error: command 'gcc' failed with exit status 1 If only there was someone out there that could help me!","python, redhat, python-ldap",10,9799,2,https://stackoverflow.com/questions/18766930/resolve-gcc-error-when-installing-python-ldap-on-redhat-enterprise-server 27862664,(13)Permission denied: Error retrieving pid file run/httpd.pid,"I have installed httpd-2.2.29 using commands: ./configure --prefix=/home/user/httpd make make install I configured httpd.conf and tried to start with apache: apachectl start . But got following error: (13)Permission denied: Error retrieving pid file run/httpd.pid Remove it before continuing if it is corrupted. I tried to find file httpd.pid, but there is no such file. Could someone help me resolve this issue?","(13)Permission denied: Error retrieving pid file run/httpd.pid I have installed httpd-2.2.29 using commands: ./configure --prefix=/home/user/httpd make make install I configured httpd.conf and tried to start with apache: apachectl start . But got following error: (13)Permission denied: Error retrieving pid file run/httpd.pid Remove it before continuing if it is corrupted. I tried to find file httpd.pid, but there is no such file.
Could someone help me resolve this issue?","apache, redhat",10,29108,4,https://stackoverflow.com/questions/27862664/13permission-denied-error-retrieving-pid-file-run-httpd-pid 540907,How can I tell if I'm running in a VMWARE virtual machine (from linux)?,"I have a VMWARE ESX server. I have Redhat VMs running on that server. I need a way of programmatically testing if I'm running in a VM. Ideally, I'd like to know how to do this from Perl.","How can I tell if I'm running in a VMWARE virtual machine (from linux)? I have a VMWARE ESX server. I have Redhat VMs running on that server. I need a way of programmatically testing if I'm running in a VM. Ideally, I'd like to know how to do this from Perl.","perl, vmware, redhat, esx",10,27031,5,https://stackoverflow.com/questions/540907/how-can-i-tell-if-im-running-in-a-vmware-virtual-machine-from-linux 25695346,How can I auto-deploy my git repo's submodules on push?,"I have a PHP Cartridge that is operating normally, except I can't find a straightforward way to get OpenShift to (recursively) push the files for my git submodules when/after it pushes my core repo files. This seems like it should be a super straightforward and common use-case. Am I overlooking something?
I could probably ssh into my server and pull them manually, but I'd like to automate this completely, so that if I update the submodule's reference in my repo these changes will be reflected when I deploy.","git, deployment, openshift, redhat",10,2623,2,https://stackoverflow.com/questions/25695346/how-can-i-auto-deploy-my-git-repos-submodules-on-push 32746419,When and Why run alternatives --install java jar javac javaws on installing jdk in linux,"To install java in linux (I used CentOS, RHEL is same too), I used this command rpm -Uvh /path/to/binary/jdk-7u55-linux-x64.rpm and verified java java -version Looking at a tutorial, it says to run following 4 commands, not sure why ## java ## alternatives --install /usr/bin/java java /usr/java/latest/jre/bin/java 200000 ## javaws ## alternatives --install /usr/bin/javaws javaws /usr/java/latest/jre/bin/javaws 200000 ## Install javac only alternatives --install /usr/bin/javac javac /usr/java/latest/bin/javac 200000 ## jar ## alternatives --install /usr/bin/jar jar /usr/java/latest/bin/jar 200000 I know if there are multiple versions of java installed, you can select version to use from alternatives --config java then why to run alternative --install separately for each executable. 
I've seen this question but doesn't get my answer","When and Why run alternatives --install java jar javac javaws on installing jdk in linux To install java in linux (I used CentOS, RHEL is same too), I used this command rpm -Uvh /path/to/binary/jdk-7u55-linux-x64.rpm and verified java java -version Looking at a tutorial, it says to run following 4 commands, not sure why ## java ## alternatives --install /usr/bin/java java /usr/java/latest/jre/bin/java 200000 ## javaws ## alternatives --install /usr/bin/javaws javaws /usr/java/latest/jre/bin/javaws 200000 ## Install javac only alternatives --install /usr/bin/javac javac /usr/java/latest/bin/javac 200000 ## jar ## alternatives --install /usr/bin/jar jar /usr/java/latest/bin/jar 200000 I know if there are multiple versions of java installed, you can select version to use from alternatives --config java then why to run alternative --install separately for each executable. I've seen this question but doesn't get my answer","java, linux, redhat",10,22886,6,https://stackoverflow.com/questions/32746419/when-and-why-run-alternatives-install-java-jar-javac-javaws-on-installing-jdk 25855331,Installing rabbitmq-server on RHEL,When trying to install rabbitmq-server on RHEL: [ec2-user@ip-172-31-34-1XX ~]$ sudo rpm -i rabbitmq-server-3.3.5-1.noarch.rpm error: Failed dependencies: erlang >= R13B-03 is needed by rabbitmq-server-3.3.5-1.noarch [ec2-user@ip-172-31-34-1XX ~]$ rpm -i rabbitmq-server-3.3.5-1.noarch.rpm error: Failed dependencies: erlang >= R13B-03 is needed by rabbitmq-server-3.3.5-1.noarch I'm unsure why trying to rpm install isn't recognizing my erlang install since running $ erl gives: [ec2-user@ip-172-31-34-1XX ~]$ which erl /usr/local/bin/erl [ec2-user@ip-172-31-34-1XX ~]$ sudo which erl /bin/erl,Installing rabbitmq-server on RHEL When trying to install rabbitmq-server on RHEL: [ec2-user@ip-172-31-34-1XX ~]$ sudo rpm -i rabbitmq-server-3.3.5-1.noarch.rpm error: Failed dependencies: erlang >= R13B-03 is needed by 
rabbitmq-server-3.3.5-1.noarch [ec2-user@ip-172-31-34-1XX ~]$ rpm -i rabbitmq-server-3.3.5-1.noarch.rpm error: Failed dependencies: erlang >= R13B-03 is needed by rabbitmq-server-3.3.5-1.noarch I'm unsure why trying to rpm install isn't recognizing my erlang install since running $ erl gives: [ec2-user@ip-172-31-34-1XX ~]$ which erl /usr/local/bin/erl [ec2-user@ip-172-31-34-1XX ~]$ sudo which erl /bin/erl,"erlang, rabbitmq, redhat, rhel",10,16715,2,https://stackoverflow.com/questions/25855331/installing-rabbitmq-server-on-rhel 18338045,Enabling "Software collections". RedHat developer toolset,"I just found out that RedHat provides this ""Developer toolset"" which allows me to install (and of course use) the most up-to-date gcc-4.7.2. I use it on Centos, but the process is the same. Once installed, you can start a new bash session with this toolset enabled by issuing: scl enable devtoolset-1.1 bash That works all right. Now, could I somehow add this to my bashrc since this actually starts a new bash session? Or should I better place it inside my makefiles to avoid starting a new bash session. Would there be a way to issue this within a makefile?","Enabling "Software collections". RedHat developer toolset I just found out that RedHat provides this ""Developer toolset"" which allows me to install (and of course use) the most up-to-date gcc-4.7.2. I use it on Centos, but the process is the same. Once installed, you can start a new bash session with this toolset enabled by issuing: scl enable devtoolset-1.1 bash That works all right. Now, could I somehow add this to my bashrc since this actually starts a new bash session? Or should I better place it inside my makefiles to avoid starting a new bash session. 
Would there be a way to issue this within a makefile?","makefile, centos, redhat, devtoolset, redhat-dts",10,5344,2,https://stackoverflow.com/questions/18338045/enabling-software-collections-redhat-developer-toolset 45326347,How to know that docker installed in redhat is community or enterprise edition?,"Some person has install docker in my Redhat system . I want to know whether it is community edition or enterprise edition . How can i do so? I know community edition is not for Redhat . May be some person would have created centos.repo in Redhat and installed docker ce . This is what docker version gives When i do ""rpm -qif /usr/bin/docker""","How to know that docker installed in redhat is community or enterprise edition? Some person has install docker in my Redhat system . I want to know whether it is community edition or enterprise edition . How can i do so? I know community edition is not for Redhat . May be some person would have created centos.repo in Redhat and installed docker ce . This is what docker version gives When i do ""rpm -qif /usr/bin/docker""","docker, redhat",10,11443,3,https://stackoverflow.com/questions/45326347/how-to-know-that-docker-installed-in-redhat-is-community-or-enterprise-edition 9317683,What would cause PHP variables to be rewritten by the server?,"I was given a VM at my company to install web software on. But I came across a rather bizarre issue where PHP variables would be overwritten (rewritten) by the server if they matched a specific pattern. What could rewrite PHP variables like this? The following is as an entire standalone script. Essentially any variable which contains a subdomain and matches on the domain name would be rewritten. This isn't something mod_rewrite would be able to touch, so it has to be something at the server level that is parsing out PHP and rewriting a string if it matches a RegEx.","What would cause PHP variables to be rewritten by the server? I was given a VM at my company to install web software on. 
But I came across a rather bizarre issue where PHP variables would be overwritten (rewritten) by the server if they matched a specific pattern. What could rewrite PHP variables like this? The following is as an entire standalone script. Essentially any variable which contains a subdomain and matches on the domain name would be rewritten. This isn't something mod_rewrite would be able to touch, so it has to be something at the server level that is parsing out PHP and rewriting a string if it matches a RegEx.","php, apache, url-rewriting, redhat",10,388,1,https://stackoverflow.com/questions/9317683/what-would-cause-php-variables-to-be-rewritten-by-the-server 70458779,RHEL8.5 shell "BASH_FUNC_which%%" environment variable causes K8S pods to fail,"Problem After moving to RHEL 8.5 from 8.4, started having the issue of K8S pods failure. spec.template.spec.containers[0].env[52].name: Invalid value: ""BASH_FUNC_which%%"": a valid environment variable name must consist of alphabetic characters, digits, '_', '-', or '.', and must not start with a digit (e.g. 'my.env-name', or 'MY_ENV.NAME', or 'MyEnvName1', regex used for validation is '[-._a-zA-Z][-._a-zA-Z0-9]*') The env command in the login shell shows BASH_FUNC_which%% defined as below. BASH_FUNC_which%%=() { ( alias; eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot ""$@"" } Suggeted that /etc/profile.d/which2.sh is the one that sets up the BASH_FUNC_which%% . 
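Background for the `BASH_FUNC_which%%` entry above: since the post-Shellshock changes, current bash versions encode any *exported* shell function `f` as an environment variable named `BASH_FUNC_f%%`, so sourcing `/etc/profile.d/which2.sh` (which runs `export -f which` via `export ${which_opt} which`) is what makes the variable appear in `env`. A quick demonstration with a throwaway function:

```shell
# bash encodes an exported function as BASH_FUNC_<name>%% in the environment
# of child processes; that name fails Kubernetes' env-var validation.
bash -c 'greet() { echo hi; }; export -f greet; env' | grep '^BASH_FUNC_greet'
```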
/etc/profile.d/which2.sh # shellcheck shell=sh # Initialization script for bash, sh, mksh and ksh which_declare=""declare -f"" which_opt=""-f"" which_shell=""$(cat /proc/$$/comm)"" if [ ""$which_shell"" = ""ksh"" ] || [ ""$which_shell"" = ""mksh"" ] || [ ""$which_shell"" = ""zsh"" ] ; then which_declare=""typeset -f"" which_opt="""" fi which () { (alias; eval ${which_declare}) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot ""$@"" } export which_declare export ${which_opt} which By removing it, the issue was fixed. Question Please help understand where exactly BASH_FUNC_which%% is setup in RHEL8.5 and what is the purpose of this BASH_FUNC_which%% , why is has been introduced in RHEL.","RHEL8.5 shell "BASH_FUNC_which%%" environment variable causes K8S pods to fail Problem After moving to RHEL 8.5 from 8.4, started having the issue of K8S pods failure. spec.template.spec.containers[0].env[52].name: Invalid value: ""BASH_FUNC_which%%"": a valid environment variable name must consist of alphabetic characters, digits, '_', '-', or '.', and must not start with a digit (e.g. 'my.env-name', or 'MY_ENV.NAME', or 'MyEnvName1', regex used for validation is '[-._a-zA-Z][-._a-zA-Z0-9]*') The env command in the login shell shows BASH_FUNC_which%% defined as below. BASH_FUNC_which%%=() { ( alias; eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot ""$@"" } Suggeted that /etc/profile.d/which2.sh is the one that sets up the BASH_FUNC_which%% . 
/etc/profile.d/which2.sh # shellcheck shell=sh # Initialization script for bash, sh, mksh and ksh which_declare=""declare -f"" which_opt=""-f"" which_shell=""$(cat /proc/$$/comm)"" if [ ""$which_shell"" = ""ksh"" ] || [ ""$which_shell"" = ""mksh"" ] || [ ""$which_shell"" = ""zsh"" ] ; then which_declare=""typeset -f"" which_opt="""" fi which () { (alias; eval ${which_declare}) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot ""$@"" } export which_declare export ${which_opt} which By removing it, the issue was fixed. Question Please help understand where exactly BASH_FUNC_which%% is setup in RHEL8.5 and what is the purpose of this BASH_FUNC_which%% , why is has been introduced in RHEL.","kubernetes, environment-variables, redhat",10,5970,1,https://stackoverflow.com/questions/70458779/rhel8-5-shell-bash-func-which-environment-variable-causes-k8s-pods-to-fail 33817481,Large PCIe DMA Linux x86-64,"I am working with a high speed serial card for high rate data transfers from an external source to a Linux box with a PCIe card. The PCIe card came with some 3rd party drivers that use dma_alloc_coherent to allocate the dma buffers to receive the data. Due to Linux limitations however, this approach limits data transfers to 4MB. I have been reading and trying multiple methods for allocating a large DMA buffer and haven't been able to get one to work. This system has 32GB of memory and is running Red Hat with a kernel version of 3.10 and I would like to make 4GB of that available for a contiguous DMA. I know the preferred method is scatter/gather, but this is not possible in my situation as there is a hardware chip that translated the serial protocol into a DMA beyond my control, where the only thing that I can control is adding an offset to the incoming addresses (ie, address zero as seen from the external system can be mapped to address 0x700000000 on the local bus). 
Since this is a one-off lab machine I think the fastest/easiest approach would be to use mem=28GB boot configuration parameter. I have this working fine, but the next step to access that memory from virtual space is where I am having problems. Here is my code condensed to the relevant components: In the kernel module: size_t len = 0x100000000ULL; // 4GB size_t phys = 0x700000000ULL; // 28GB size_t virt = ioremap_nocache( phys, len ); // address not usable via direct reference size_t bus = (size_t)virt_to_bus( (void*)virt ); // this should be the same as phys for x86-64, shouldn't it? // OLD WAY /*size_t len = 0x400000; // 4MB size_t bus; size_t virt = dma_alloc_coherent( devHandle, len, &bus, GFP_ATOMIC ); size_t phys = (size_t)virt_to_phys( (void*)virt );*/ In the application: // Attempt to make a usable virtual pointer u32 pSize = sysconf(_SC_PAGESIZE); void* mapAddr = mmap(0, len+(phys%pSize), PROT_READ|PROT_WRITE, MAP_SHARED, devHandle, phys-(phys%pSize)); virt = (size_t)mapAddr + (phys%pSize); // do DMA to 0x700000000 bus address printf(""Value %x\n"", *((u32*)virt)); // this is returning zero Another interesting thing is that before doing all of this, the physical address returned from dma_alloc_coherent is greater than the amount of RAM on the system(0x83d000000). I thought that in x86 the RAM will always be the lowest addresses and therefore I would expect an address less than 32GB. Any help would be appreciated.","Large PCIe DMA Linux x86-64 I am working with a high speed serial card for high rate data transfers from an external source to a Linux box with a PCIe card. The PCIe card came with some 3rd party drivers that use dma_alloc_coherent to allocate the dma buffers to receive the data. Due to Linux limitations however, this approach limits data transfers to 4MB. I have been reading and trying multiple methods for allocating a large DMA buffer and haven't been able to get one to work. 
This system has 32GB of memory and is running Red Hat with a kernel version of 3.10 and I would like to make 4GB of that available for a contiguous DMA. I know the preferred method is scatter/gather, but this is not possible in my situation as there is a hardware chip that translated the serial protocol into a DMA beyond my control, where the only thing that I can control is adding an offset to the incoming addresses (ie, address zero as seen from the external system can be mapped to address 0x700000000 on the local bus). Since this is a one-off lab machine I think the fastest/easiest approach would be to use mem=28GB boot configuration parameter. I have this working fine, but the next step to access that memory from virtual space is where I am having problems. Here is my code condensed to the relevant components: In the kernel module: size_t len = 0x100000000ULL; // 4GB size_t phys = 0x700000000ULL; // 28GB size_t virt = ioremap_nocache( phys, len ); // address not usable via direct reference size_t bus = (size_t)virt_to_bus( (void*)virt ); // this should be the same as phys for x86-64, shouldn't it? // OLD WAY /*size_t len = 0x400000; // 4MB size_t bus; size_t virt = dma_alloc_coherent( devHandle, len, &bus, GFP_ATOMIC ); size_t phys = (size_t)virt_to_phys( (void*)virt );*/ In the application: // Attempt to make a usable virtual pointer u32 pSize = sysconf(_SC_PAGESIZE); void* mapAddr = mmap(0, len+(phys%pSize), PROT_READ|PROT_WRITE, MAP_SHARED, devHandle, phys-(phys%pSize)); virt = (size_t)mapAddr + (phys%pSize); // do DMA to 0x700000000 bus address printf(""Value %x\n"", *((u32*)virt)); // this is returning zero Another interesting thing is that before doing all of this, the physical address returned from dma_alloc_coherent is greater than the amount of RAM on the system(0x83d000000). I thought that in x86 the RAM will always be the lowest addresses and therefore I would expect an address less than 32GB. 
Any help would be appreciated.","c++, linux, redhat, dma, pci-e",10,2645,1,https://stackoverflow.com/questions/33817481/large-pcie-dma-linux-x86-64 15719605,mysql_install_db giving error,"I have downloaded the mysql-5.1.38-linux-x86_64-glibc23.tar.gz from here and then i have executed it by using below command groupadd mysql useradd -g mysql mysql123 cp mysql-5.1.38-linux-x86_64-glibc23.tar.gz /home /mysql123/ su - mysql123 tar -zxvf mysql-5.1.38-linux-x86_64-glibc23.tar.gz mv mysql-5.1.38-linux-x86_64-glibc23 mysql mkdir tmp cd mysql/ mv suppport-files/my-medium.cnf my.cnf cp support-files/mysql.server bin/ and then i have edited the my.cnf and set the basedir and datadir to /home/mysql123/mysql and /home/mysql123/mysql/data and innodb_home_dir and logfile directory to datadir Now edited mysql.server and set the datadir and basedir in them properly and then initiated mysql_install_db as [mysql123@localhost mysql]$ ./scripts/mysql_install_db ./scripts/mysql_install_db: line 244: ./bin/my_print_defaults: cannot execute binary file Neither host '127.0.0.1' nor 'localhost' could be looked up with ./bin/resolveip Please configure the 'hostname' command to return a correct hostname. If you want to solve this at a later stage, restart this script with the --force option on seeing the error i thought it may be confused with basedir and executed the same as below [mysql123@localhost mysql]$ ./scripts/mysql_install_db -–user=mysql123 -–basedir=/home/mysql123/mysql ./scripts/mysql_install_db: line 244: ./bin/my_print_defaults: cannot execute binary file Neither host '127.0.0.1' nor 'localhost' could be looked up with ./bin/resolveip Please configure the 'hostname' command to return a correct hostname. 
If you want to solve this at a later stage, restart this script with the --force option i am not gettin what is going internally and showing this kind of message and i am sure that i have enough diskspace ( df -h ) and i have proper ownership ( chown mysq123:mysql /home/mysql123/ -R ) and proper permissions ( chmod 755 . ) and the lines in mysql_install_db are like below please any help to solve this problem is very useful ( and i have to follow the same installation process) i am using redhat 6","mysql_install_db giving error I have downloaded the mysql-5.1.38-linux-x86_64-glibc23.tar.gz from here and then i have executed it by using below command groupadd mysql useradd -g mysql mysql123 cp mysql-5.1.38-linux-x86_64-glibc23.tar.gz /home /mysql123/ su - mysql123 tar -zxvf mysql-5.1.38-linux-x86_64-glibc23.tar.gz mv mysql-5.1.38-linux-x86_64-glibc23 mysql mkdir tmp cd mysql/ mv suppport-files/my-medium.cnf my.cnf cp support-files/mysql.server bin/ and then i have edited the my.cnf and set the basedir and datadir to /home/mysql123/mysql and /home/mysql123/mysql/data and innodb_home_dir and logfile directory to datadir Now edited mysql.server and set the datadir and basedir in them properly and then initiated mysql_install_db as [mysql123@localhost mysql]$ ./scripts/mysql_install_db ./scripts/mysql_install_db: line 244: ./bin/my_print_defaults: cannot execute binary file Neither host '127.0.0.1' nor 'localhost' could be looked up with ./bin/resolveip Please configure the 'hostname' command to return a correct hostname. 
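Two separate issues show up in the mysql_install_db output above: the resolveip complaint usually means the machine's hostname has no address mapping, while `cannot execute binary file` for my_print_defaults typically indicates an architecture mismatch between the downloaded tarball and the host. A hedged check for the hostname side (the /etc/hosts line is something you would add as root, so it is only shown as a comment):

```shell
# Check whether the current hostname resolves the way resolveip needs;
# if not, the usual fix is a line like "127.0.0.1  <hostname>" in /etc/hosts.
h=$(hostname)
if getent hosts "$h" >/dev/null 2>&1; then
  echo "hostname resolves: ok"
else
  echo "hostname does not resolve; add '127.0.0.1 $h' to /etc/hosts (as root)"
fi
```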
If you want to solve this at a later stage, restart this script with the --force option on seeing the error i thought it may be confused with basedir and executed the same as below [mysql123@localhost mysql]$ ./scripts/mysql_install_db -–user=mysql123 -–basedir=/home/mysql123/mysql ./scripts/mysql_install_db: line 244: ./bin/my_print_defaults: cannot execute binary file Neither host '127.0.0.1' nor 'localhost' could be looked up with ./bin/resolveip Please configure the 'hostname' command to return a correct hostname. If you want to solve this at a later stage, restart this script with the --force option i am not gettin what is going internally and showing this kind of message and i am sure that i have enough diskspace ( df -h ) and i have proper ownership ( chown mysq123:mysql /home/mysql123/ -R ) and proper permissions ( chmod 755 . ) and the lines in mysql_install_db are like below please any help to solve this problem is very useful ( and i have to follow the same installation process) i am using redhat 6","mysql, database, installation, redhat, database-administration",10,16535,5,https://stackoverflow.com/questions/15719605/mysql-install-db-giving-error 15660887,Detect host operating system distro in chef-solo deploy bash script,When deploying a chef-solo setup you need to switch between using sudo or not eg: bash install.sh and sudo bash install.sh Depending on the distro on the host server. How can this be automated?,Detect host operating system distro in chef-solo deploy bash script When deploying a chef-solo setup you need to switch between using sudo or not eg: bash install.sh and sudo bash install.sh Depending on the distro on the host server. 
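For the distro-detection question: a common approach is to read the `ID` field from `/etc/os-release` (present on modern Ubuntu and RHEL systems) and branch on it; the sudo-vs-root mapping below is illustrative, not a fixed rule.

```shell
# Read the distro ID from /etc/os-release, falling back to "unknown";
# a deploy script can then decide whether to prefix commands with sudo.
distro=unknown
if [ -r /etc/os-release ]; then
  distro=$(. /etc/os-release; echo "$ID")
fi
case "$distro" in
  ubuntu|debian)       echo "detected $distro: bash install.sh with sudo" ;;
  rhel|centos|fedora)  echo "detected $distro: run as root or with sudo" ;;
  *)                   echo "detected $distro" ;;
esac
```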
How can this be automated?,"linux, bash, ubuntu, redhat, chef-solo",9,13717,2,https://stackoverflow.com/questions/15660887/detect-host-operating-system-distro-in-chef-solo-deploy-bash-script 39119472,Rename file in docker container,"I'm having a weird Error when i try to run a simple script on docker container on redhat machine, this is the Docker file From tomcat:7.0.70-jre7 ENV CLIENTNAME geocontact ADD tomcat-users.xml /usr/local/tomcat/conf/ ADD app.war /usr/local/tomcat/webapps/ COPY app.sh / ENTRYPOINT [""/app.sh""] and app.sh is the script that cause the problem ""only on redhat"" #!/bin/bash set -e mv /usr/local/tomcat/webapps/app.war /usr/local/tomcat/webapps/client1.war catalina.sh run and the error message : mv cannot move '/usr/local/tomcat/webapps/app.war to a subdirectory of itself, '/usr/local/tomcat/webapps/client1.war' a screenshot for the error and this only on redhat, i run the same image on ubuntu and centos with no problems.","Rename file in docker container I'm having a weird Error when i try to run a simple script on docker container on redhat machine, this is the Docker file From tomcat:7.0.70-jre7 ENV CLIENTNAME geocontact ADD tomcat-users.xml /usr/local/tomcat/conf/ ADD app.war /usr/local/tomcat/webapps/ COPY app.sh / ENTRYPOINT [""/app.sh""] and app.sh is the script that cause the problem ""only on redhat"" #!/bin/bash set -e mv /usr/local/tomcat/webapps/app.war /usr/local/tomcat/webapps/client1.war catalina.sh run and the error message : mv cannot move '/usr/local/tomcat/webapps/app.war to a subdirectory of itself, '/usr/local/tomcat/webapps/client1.war' a screenshot for the error and this only on redhat, i run the same image on ubuntu and centos with no problems.","linux, docker, redhat",9,55790,3,https://stackoverflow.com/questions/39119472/rename-file-in-docker-container 64381744,AWS ECR Login with podman,"Good morning/afternoon/night! Can you help me, please? I'm working with RHEL 8.2 and this version doesn't support Docker. 
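For the docker `mv` failure in the app.sh question above (commonly attributed to the overlay storage driver used on RHEL, where a rename across an image-layer boundary fails), replacing the rename with copy-then-delete achieves the same result. The question's real paths appear only as comments; the runnable part uses temporary files.

```shell
# In app.sh, instead of:
#   mv /usr/local/tomcat/webapps/app.war /usr/local/tomcat/webapps/client1.war
# use copy + delete, which sidesteps the overlay rename restriction:
#   cp /usr/local/tomcat/webapps/app.war /usr/local/tomcat/webapps/client1.war
#   rm /usr/local/tomcat/webapps/app.war
# Demonstration with throwaway files:
d=$(mktemp -d)
printf 'dummy' > "$d/app.war"
cp "$d/app.war" "$d/client1.war" && rm "$d/app.war"
ls "$d"
```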
I installed Podman and everything was ok until I used the following command: $(aws ecr get-login --no-include-email --region us-east-1) But, it doesn't work because it's from Docker (I thought it was from AWS CLI). The error is: # $(aws ecr get-login --no-include-email --region us-east-1) -bash: docker: command not found I've been searching for an answer and some people used a command like this: podman login -u AWS -p .... But I tried some flags and the image, but nothing is working! What is the equivalent command for podman? Thanks!","AWS ECR Login with podman Good morning/afternoon/night! Can you help me, please? I'm working with RHEL 8.2 and this version doesn't support Docker. I installed Podman and everything was ok until I used the following command: $(aws ecr get-login --no-include-email --region us-east-1) But, it doesn't work because it's from Docker (I thought it was from AWS CLI). The error is: # $(aws ecr get-login --no-include-email --region us-east-1) -bash: docker: command not found I've been searching for an answer and some people used a command like this: podman login -u AWS -p .... But I tried some flags and the image, but nothing is working! What is the equivalent command for podman? Thanks!","amazon-web-services, redhat, amazon-ecr, podman",9,15455,3,https://stackoverflow.com/questions/64381744/aws-ecr-login-with-podman 38926063,"How do you remove the deploymentConfig, image streams, etc using Openshift OC?","After creating a new app using oc new-app location/nameofapp , many things are created: a deploymentConfig, an imagestream, a service, etc. I know you can run oc delete
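Returning to the podman/ECR question above: `aws ecr get-login` was removed in AWS CLI v2, and the replacement is `aws ecr get-login-password` piped into `podman login --password-stdin`. The account ID and region below are placeholders, and the live commands are commented out since they need AWS credentials; the runnable part only demonstrates the stdin-piping shape.

```shell
# Live version (account ID / region are placeholders for your registry):
#   aws ecr get-login-password --region us-east-1 \
#     | podman login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
# The same pipe shape, demonstrated with a stand-in token consumer:
printf 'fake-token' | { IFS= read -r tok; echo "received ${#tok}-char token on stdin"; }
```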