
Use inttypes.h and stdint.h where appropriate (fixes 920) #116

Closed
wants to merge 1 commit into gcos4gnucobol-3.x from use_inttypes

Conversation

ddeclerck
Contributor

I also had to define cob_s64_t and cob_u64_t using stdint.h (when available) to silence spurious warnings: they were defined as long long int, but the PRId64 macro causes printf to expect long int on 64-bit Linux (long int and long long int are the same size on this platform, yet they are considered different types, hence the warnings).
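
To illustrate the kind of warning meant here, a minimal sketch, assuming 64-bit glibc Linux where int64_t is long int and PRId64 therefore expands to "ld" (not taken from the actual patch):

#include <inttypes.h>
#include <stdio.h>

int
main (void)
{
	/* cob_s64_t used to be defined as plain long long int */
	long long int	val = 42;

	/* with -Wall, GCC/Clang warn here: format "%ld" expects
	   "long int", but the argument has type "long long int" */
	printf ("%" PRId64 "\n", val);
	return 0;
}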

@ddeclerck
Contributor Author

ddeclerck commented Oct 17, 2023

Yet I'm not fully satisfied.

#define	COB_S64_C(x)		x ## LL
#define	COB_U64_C(x)		x ## ULL

Here, the suffix should depend on the actual type used to represent 64-bit integers (for instance long int on 64-bit Linux and long long int on Windows).

That needs more work.
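
For what it's worth, a minimal sketch of the direction this could take, assuming a configure-provided guard (COB_HAS_STDINT_H is a made-up name): C99's INT64_C/UINT64_C already expand to whatever suffix matches the platform's 64-bit type.

#ifdef COB_HAS_STDINT_H		/* hypothetical configure check */
#include <stdint.h>
#define	COB_S64_C(x)		INT64_C(x)
#define	COB_U64_C(x)		UINT64_C(x)
#else	/* fall back to the previous definitions */
#define	COB_S64_C(x)		x ## LL
#define	COB_U64_C(x)		x ## ULL
#endif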

@GitMensch GitMensch marked this pull request as draft October 17, 2023 15:44
libcob/common.h Outdated
#define COB_S64_C(x) x ## LL
#define COB_U64_C(x) x ## ULL

#elif defined(_WIN32) && !defined(__MINGW32__)
Collaborator

@GitMensch GitMensch Oct 17, 2023


We can drop the MinGW part here; even MinGW versions more than 10 years old ship a conforming stdint.h header (dated 2000-12-02) which has those types, and similarly below with inttypes.h.

As soon as you declare this as no longer a draft, I'll give the CI-generated tarball a try in that environment.

Contributor Author


Out of curiosity, do you see any reason why one would want to compile GnuCOBOL with any C compiler that is not at least C99-compliant? (besides people using outdated MSVC versions...)

Collaborator


Legacy MSVC was one of the reasons years ago, but since that finally improved some years back (around 2013/2015, I think) it isn't much of a reason any more.
The likely oldest environment would be GCCMVS - that's GCC 3.2.3 as a 31-bit compiler (but I never tried compiling any GnuCOBOL version with that compiler).

COBOL tackles quite a few legacy systems. And it is easier to first move only a proprietary compiler (which may have license issues) out to GnuCOBOL (so all your files stay where expected, scripts work as before, ...) than to move out everything.

My main direct comparison to ACUCOBOL, for example, is a SUSE 6.4 VM (which ships its compiler and runtime :-) and that is GCC, but not C99. [More a hobby machine, but there are likely people out there who have other ancient environments running their COBOL.]

We may require C99 with GC 4.0+. The more important "legacy" systems I'm aware of, like AIX and Solaris, don't do C99 by default, but you can tweak this with the appropriate options (then specified together with the compiler as CC during configure). Even there we have no clear date, which is also one of the reasons we have a CI run using an old std for GCC (which doesn't set limits on all parts, so that's a "help", not a "guarantee").

Contributor Author

@ddeclerck ddeclerck Oct 18, 2023


Well, that's interesting; I'll have to try that on the Hercules emulator ;)
Although compiling for such constrained targets would probably be better/easier via cross-compilation (cf. https://sourceforge.net/p/gnucobol/feature-requests/450/ ;).

I'm just a bit concerned about the long long type, which is not part of C89 but of C99 - even though it is available as a GCC extension to C89 (and possibly in many other compilers; for instance, it seems that Sun WorkShop Compiler C 4.2 supports long long too, while claiming to be an ANSI C compiler). Do we expect a 64-bit type to be available everywhere we want to compile GnuCOBOL, or should we provide an alternative when such a type is not available? (Although this is way beyond the scope of this PR.) More generally, we also have to be sure about the size of each integer type we use, as they may vary depending on the target.
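
As an aside on checking type sizes: a C89-compatible compile-time check needs no header support at all; the sketch below is only an illustration (the macro name and the exact conditions are made up, not part of the patch).

#include <limits.h>

/* compilation fails (negative array size) when the condition is false */
#define	COB_VERIFY(cond, tag)	typedef char cob_verify_ ## tag[(cond) ? 1 : -1]

COB_VERIFY (sizeof (int) * CHAR_BIT > 16, int_wider_than_16_bits);
COB_VERIFY (sizeof (long long) * CHAR_BIT >= 64, long_long_has_64_bits);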

Collaborator


As-is, GnuCOBOL expects int to be an integer type of more than 16 bits and expects an integer type of more than 32 bits to be available (both since at least 3.x, possibly also since 2.2).

long long is very portable (outside of Windows, where its own type may be used).
Your concern is that the newly used headers are available but miss long long as the definition for a 64-bit type, right?

In this case the new code could have something like #ifdef ADD_COB64, placing the previously used long long definition in.
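
Roughly something like the following (ADD_COB64 is the name from the sentence above; the typedef form of the surrounding code is assumed, not copied from common.h):

#ifdef ADD_COB64
/* headers present but without a usable 64-bit type:
   keep the previously used long long definitions */
typedef long long		cob_s64_t;
typedef unsigned long long	cob_u64_t;
#else
#include <stdint.h>
typedef int64_t			cob_s64_t;
typedef uint64_t		cob_u64_t;
#endif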

OT: I learned yesterday that Hercules won't work as a cross-compile target, as there is no full cross-compiling environment available (you can cross-compile, but this leads to ELF files, so you would only generate the C code, then copy it over and assemble it with the machine's native assembler).

Contributor Author

@ddeclerck ddeclerck Oct 18, 2023


Your concern is that the newly used headers are available but miss long long as the definition for a 64-bit type, right?

My concern is that stdint.h and inttypes.h always pick the smallest suitable type for 64-bit integers (int64_t), so it will be long on 64-bit Linux but long long on 32-bit Linux and on 32/64-bit Windows with MinGW (I don't know about MSVC).
GnuCOBOL, however, takes the approach of always using long long (ignoring MSVC), which is safe and avoids the trouble of having to choose the correct type. But even when sizeof(long) == sizeof(long long), the two are considered different types and may generate warnings if one is used in place of the other (this also affects the L and LL suffixes). That's what I'm trying to avoid.

stdint.h and inttypes.h got involved in the first place because I wanted to use the PRId64 macro to avoid having to discriminate between the standard lld and the MSVCRT-specific I64d format specifiers.

Maybe we can do this: use int64_t and PRId64 from stdint.h and inttypes.h where available (and in particular to deal with the MSVC mess), and just fall back to plain long long and lld otherwise (a safe fallback for all those C89 compilers that do support long long as an extension). This would break compilation on old versions of Visual C++ (< 2010 or something), although we could use _MSC_VER to add a fallback for these versions; not sure about MinGW (though the oldest versions I could find do have stdint.h and inttypes.h).
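
Roughly along these lines - a sketch only, where COB_HAS_INTTYPES_H stands for whatever configure check would be added and COB_PRId64 is just an illustrative name (int64_t, PRId64, __int64 and _MSC_VER are the real ones):

#if defined (COB_HAS_INTTYPES_H)	/* assumed configure check */
#include <stdint.h>
#include <inttypes.h>
typedef int64_t			cob_s64_t;
#define	COB_PRId64		PRId64		/* "ld" or "lld", whichever matches */
#elif defined (_MSC_VER) && _MSC_VER < 1600	/* old Visual C++ without stdint.h */
typedef __int64			cob_s64_t;
#define	COB_PRId64		"I64d"
#else	/* C89 compilers supporting long long as an extension */
typedef long long		cob_s64_t;
#define	COB_PRId64		"lld"
#endif

/* usage: printf ("%" COB_PRId64 "\n", some_cob_s64_t_value); */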

@GitMensch
Collaborator

Note, also @lefessan: as the original issue is also about the definitions used with the UCRT, it would be good to adjust the Windows CI in msys2-setup to use a build matrix with at least one additional UCRT environment.
I don't know how long those additional runs take; you may add all of the environments from the example at https://github.com/marketplace/actions/setup-msys2#build-matrix.

@GitMensch
Collaborator

GitMensch commented Oct 18, 2023 via email

@codecov-commenter

Codecov Report

Merging #116 (6c1a45a) into gcos4gnucobol-3.x (c0d64ad) will not change coverage.
The diff coverage is 100.00%.


@@                Coverage Diff                 @@
##           gcos4gnucobol-3.x     #116   +/-   ##
==================================================
  Coverage              65.74%   65.74%           
==================================================
  Files                     32       32           
  Lines                  59092    59092           
  Branches               15575    15575           
==================================================
  Hits                   38849    38849           
  Misses                 14262    14262           
  Partials                5981     5981           
Files Coverage Δ
cobc/codegen.c 75.99% <100.00%> (ø)


libcob/common.h Outdated
@ddeclerck
Contributor Author

Took longer than expected, but I'm now satisfied.
Compiles and runs the testsuite fine under Linux with GCC 11 and Clang 14 both with and without using the stdint/inttypes headers.
Also compiles fine under Windows/Cygwin with GCC 11 and MinGW 11. However, I had a few failures on the testsuite, though these also occur without my patch.
Although off-topic: my failures under Cygwin/GCC are tests 20-22 and 24 (not sure why), and under MinGW they are tests 8, 9, 20, 22, 24, 30, 40, 43, 47, 50, 762, 771, 797-799, 806-809, 845 and a few others. Most of these are likely due to improper configuration of some path variables, which I had trouble getting right: I had to manually fill COB_CONFIG_DIR, COB_COPY_DIR, COB_RUNTIME_DIR and C_INCLUDE_PATH with Windows-style paths, otherwise they would use Unix-style paths pointing into my Cygwin installation. Anyway, I'll check these issues separately.

@GitMensch GitMensch marked this pull request as ready for review October 19, 2023 15:39
@GitMensch
Collaborator

While still not finding the time to check-in some missing commits to svn... I'd like to try that tomorrow on old MinGW and old MSVC - can you please try with MSYS2 ucrt?

@ddeclerck
Contributor Author

While still not finding the time to check-in some missing commits to svn... I'd like to try that tomorrow on old MinGW and old MSVC - can you please try with MSYS2 ucrt?

Just finished testing: it almost works fine with MSYS2/UCRT (on Windows 10).
I have a failure on test "771: direct CALL in from C w/wo error; no exit".
Here is the log:
testsuite_0771.log
I'm investigating.

I also have 3 failed tests though, probably because of path-related issues, but those also fail without my patch:

  • 8: temporary path invalid
  • 808: stack and dump feature
  • 809: dump feature with NULL address

Here are the logs, for reference:
testsuite_0008.log
testsuite_0808.log
testsuite_0809.log

@GitMensch
Collaborator

Thank you for your tests and further inspecting this.

Test 8 is rooted in TEMP being bad and something (likely the C compiler invoked after cobc?) breaking because of that.

Tests 808+809 are known false positives (MSYS2 executes the program with an absolute file name); it may be possible to set some MSYS environment variable to change that.

@ddeclerck
Contributor Author

ddeclerck commented Oct 20, 2023

771 is giving me a hard time. In fact, it also fails without my patch. The worst part is that it is totally random: if I run test 771 in a loop, sometimes I get "ok", sometimes I get "FAILED"...

EDIT: The problem occurs specifically with STOP RUN (replacing it with an EXIT PROGRAM gives no error). Also, it does NOT occur if I define COB_WITHOUT_JMP. Investigating a bit, it seems this is because the COBOL module executing the STOP RUN statement is unloaded (with lt_dlclose) before cob_stop_run has finished its execution. This might be okay when calling exit, but when using longjmp, it probably messes up the stack frame. Since this seems to be used only with cob_call_with_exception_check, I'm tempted to suggest deferring the unloading of the currently running module until just before returning from cob_call_with_exception_check. Or, why not, adding a pair of calls to lt_dlopen and lt_dlclose in cob_call_with_exception_check, before the setjmp and before the return, so as to keep the module alive a bit longer.

But in any case, this is not related to this PR.

@GitMensch
Collaborator

Ah, an edit... your analysis on 771 sounds very reasonable.

I'm not sure about the fix for this - before doing the longjmp we should do all the cleanup...
Any further idea how to proceed?
Note: we cannot do a dlopen in cob_call_with_exception_check, as the program can be found in several places, including "already in memory from the last invocation". We possibly could do if (name) cob_resolve (name, 0, 0); as this ensures that the module is loaded; the later cob_call will then pick that up. This also increments the library counter before the setjmp, which may be useful.

I'll try to come back to the actual PR content soon.

Collaborator

@GitMensch GitMensch left a comment


Now also checked on old MSYS/MinGW - no new compile warnings, all expected tests pass (920 doesn't error).

So good to go!

@ddeclerck ddeclerck closed this Jan 22, 2024
@ddeclerck ddeclerck deleted the use_inttypes branch July 15, 2024 12:53