[IR][AsmParser] Revamp how floating-point literals work in LLVM IR. #121838

Open · wants to merge 6 commits into main

Conversation

jcranmer-intel (Contributor) commented Jan 6, 2025

This adds support for the following kinds of formats:

  • Hexadecimal literals like 0x1.fp13
  • Special values +inf/-inf, +qnan/-qnan
  • NaN values with payloads like +nan(0x1) or -snan(0x2)

Additionally, the floating-point hexadecimal format that records the bit pattern exactly no longer requires a type-indicating prefix such as 0xL or 0xK. That prefixed form is removed from the documentation, but it is still accepted by the parser as a legacy format.
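
As a quick illustration, here is a minimal .ll sketch using the new spellings, assuming the new literals are accepted anywhere a floating-point constant appears (the function and value names are hypothetical; each constant follows one of the formats listed above):

  define float @literal_examples(float %x) {
    %a = fadd float %x, 0x1.fp13   ; hexadecimal literal: 0x1.f * 2^13 = 15872.0
    %b = fadd float %a, +inf       ; positive infinity
    %c = fadd float %b, -qnan      ; quiet NaN with the sign bit set
    %d = fadd float %c, +nan(0x1)  ; NaN with an explicit payload
    ret float %d
  }

The prefix-free f0x form records the bit pattern directly, so half f0x3C00 is 1.0 and half f0x0000 is +0.0, as seen in the updated tests below.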

llvmbot (Member) commented Jan 6, 2025

@llvm/pr-subscribers-llvm-adt
@llvm/pr-subscribers-backend-directx
@llvm/pr-subscribers-llvm-transforms
@llvm/pr-subscribers-llvm-analysis
@llvm/pr-subscribers-llvm-globalisel
@llvm/pr-subscribers-backend-loongarch
@llvm/pr-subscribers-debuginfo
@llvm/pr-subscribers-backend-hexagon

Author: Joshua Cranmer (jcranmer-intel)

Changes

This adds support for the following kinds of formats:

  • Hexadecimal literals like 0x1.fp13
  • Special values +inf/-inf, +qnan/-qnan
  • NaN values with payloads like +nan(0x1)

Additionally, the floating-point hexadecimal format that records the bit pattern exactly no longer requires a type-indicating prefix such as 0xL or 0xK. That prefixed form is removed from the documentation, but it is still accepted by the parser as a legacy format.
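
For the bit-pattern format specifically, both spellings below denote the same x86_fp80 constant (1.0): the legacy 0xK form remains parseable, while the writer now emits the prefix-free f0x form. This pairing is taken from the n1396.c diff below; %conv stands in for some earlier fp80 value:

  %old = fmul x86_fp80 %conv, 0xK3FFF8000000000000000   ; legacy spelling, still parsed
  %new = fmul x86_fp80 %conv, f0x3FFF8000000000000000   ; new spelling, now printed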


Patch is 1.28 MiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/121838.diff

532 Files Affected:

  • (modified) clang/test/C/C11/n1396.c (+20-20)
  • (modified) clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c (+10-10)
  • (modified) clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c (+5-5)
  • (modified) clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c (+12-12)
  • (modified) clang/test/CodeGen/PowerPC/ppc64-complex-parms.c (+4-4)
  • (modified) clang/test/CodeGen/RISCV/riscv64-vararg.c (+3-3)
  • (modified) clang/test/CodeGen/SystemZ/atomic_is_lock_free.c (+1-1)
  • (modified) clang/test/CodeGen/X86/Float16-arithmetic.c (+1-1)
  • (modified) clang/test/CodeGen/X86/Float16-complex.c (+29-29)
  • (modified) clang/test/CodeGen/X86/avx512fp16-builtins.c (+10-10)
  • (modified) clang/test/CodeGen/X86/avx512vlfp16-builtins.c (+11-11)
  • (modified) clang/test/CodeGen/X86/long-double-config-size.c (+2-2)
  • (modified) clang/test/CodeGen/X86/x86-atomic-long_double.c (+20-20)
  • (modified) clang/test/CodeGen/X86/x86_64-longdouble.c (+4-4)
  • (modified) clang/test/CodeGen/atomic.c (+2-2)
  • (modified) clang/test/CodeGen/builtin-complex.c (+2-2)
  • (modified) clang/test/CodeGen/builtin_Float16.c (+4-4)
  • (modified) clang/test/CodeGen/builtins-elementwise-math.c (+1-1)
  • (modified) clang/test/CodeGen/builtins-nvptx.c (+8-8)
  • (modified) clang/test/CodeGen/builtins.c (+9-9)
  • (modified) clang/test/CodeGen/catch-undef-behavior.c (+2-2)
  • (modified) clang/test/CodeGen/const-init.c (+1-1)
  • (modified) clang/test/CodeGen/fp16-ops-strictfp.c (+7-7)
  • (modified) clang/test/CodeGen/fp16-ops.c (+3-3)
  • (modified) clang/test/CodeGen/isfpclass.c (+1-1)
  • (modified) clang/test/CodeGen/math-builtins-long.c (+8-8)
  • (modified) clang/test/CodeGen/mingw-long-double.c (+4-4)
  • (modified) clang/test/CodeGen/spir-half-type.cpp (+20-20)
  • (modified) clang/test/CodeGenCUDA/types.cu (+1-1)
  • (modified) clang/test/CodeGenCXX/auto-var-init.cpp (+7-7)
  • (modified) clang/test/CodeGenCXX/cxx11-user-defined-literal.cpp (+1-1)
  • (modified) clang/test/CodeGenCXX/float128-declarations.cpp (+24-24)
  • (modified) clang/test/CodeGenCXX/float16-declarations.cpp (+16-16)
  • (modified) clang/test/CodeGenCXX/ibm128-declarations.cpp (+1-1)
  • (modified) clang/test/CodeGenHLSL/builtins/rcp.hlsl (+4-4)
  • (modified) clang/test/CodeGenOpenCL/amdgpu-alignment.cl (+4-4)
  • (modified) clang/test/CodeGenOpenCL/half.cl (+4-4)
  • (modified) clang/test/Frontend/fixed_point_conversions_half.c (+9-9)
  • (modified) clang/test/Headers/__clang_hip_math_deprecated.hip (+2-2)
  • (modified) clang/test/OpenMP/atomic_capture_codegen.cpp (+1-1)
  • (modified) clang/test/OpenMP/atomic_update_codegen.cpp (+1-1)
  • (modified) llvm/docs/LangRef.rst (+37-30)
  • (modified) llvm/include/llvm/AsmParser/LLLexer.h (+1)
  • (modified) llvm/include/llvm/AsmParser/LLToken.h (+2)
  • (modified) llvm/lib/AsmParser/LLLexer.cpp (+159-37)
  • (modified) llvm/lib/AsmParser/LLParser.cpp (+32-2)
  • (modified) llvm/lib/CodeGen/MIRParser/MILexer.cpp (+18)
  • (modified) llvm/lib/IR/AsmWriter.cpp (+4-9)
  • (modified) llvm/lib/Support/APFloat.cpp (+1-1)
  • (modified) llvm/test/Analysis/CostModel/AArch64/arith-fp.ll (+3-3)
  • (modified) llvm/test/Analysis/CostModel/AArch64/insert-extract.ll (+4-4)
  • (modified) llvm/test/Analysis/CostModel/AArch64/reduce-fadd.ll (+48-48)
  • (modified) llvm/test/Analysis/CostModel/AMDGPU/fdiv.ll (+80-80)
  • (modified) llvm/test/Analysis/CostModel/ARM/divrem.ll (+40-40)
  • (modified) llvm/test/Analysis/CostModel/ARM/reduce-fp.ll (+48-48)
  • (modified) llvm/test/Analysis/CostModel/RISCV/phi-const.ll (+1-1)
  • (modified) llvm/test/Analysis/CostModel/RISCV/reduce-fadd.ll (+140-140)
  • (modified) llvm/test/Analysis/CostModel/RISCV/reduce-fmul.ll (+126-126)
  • (modified) llvm/test/Analysis/CostModel/RISCV/rvv-phi-const.ll (+3-3)
  • (modified) llvm/test/Analysis/Lint/scalable.ll (+1-1)
  • (modified) llvm/test/Assembler/bfloat.ll (+13-13)
  • (modified) llvm/test/Assembler/constant-splat.ll (+10-10)
  • (added) llvm/test/Assembler/float-literals.ll (+40)
  • (modified) llvm/test/Assembler/half-constprop.ll (+3-3)
  • (modified) llvm/test/Assembler/half-conv.ll (+1-1)
  • (modified) llvm/test/Assembler/invalid-fp80hex.ll (+1-1)
  • (modified) llvm/test/Assembler/short-hexpair.ll (+1-1)
  • (modified) llvm/test/Assembler/unnamed.ll (+1-1)
  • (modified) llvm/test/Bitcode/compatibility-3.8.ll (+2-2)
  • (modified) llvm/test/Bitcode/compatibility-3.9.ll (+2-2)
  • (modified) llvm/test/Bitcode/compatibility-4.0.ll (+2-2)
  • (modified) llvm/test/Bitcode/compatibility-5.0.ll (+2-2)
  • (modified) llvm/test/Bitcode/compatibility-6.0.ll (+2-2)
  • (modified) llvm/test/Bitcode/compatibility.ll (+2-2)
  • (modified) llvm/test/Bitcode/constant-splat.ll (+10-10)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/arm64-irtranslator.ll (+2-2)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/combine-fabs.mir (+4-4)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/combine-flog2.mir (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/combine-fminimum-fmaximum.mir (+8-8)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/combine-fminnum-fmaxnum.mir (+4-4)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/combine-fneg.mir (+4-4)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/combine-fptrunc.mir (+2-2)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/combine-fsqrt.mir (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/fp128-legalize-crash-pr35690.mir (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/legalize-fp128-fconstant.mir (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/legalize-fp16-fconstant.mir (+4-4)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/prelegalizer-combiner-select-to-fminmax.mir (+4-4)
  • (modified) llvm/test/CodeGen/AArch64/GlobalISel/select-fp16-fconstant.mir (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/arm64-aapcs.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/arm64-build-vector.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/arm64-fp-imm-size.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/arm64-fp-imm.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/arm64-fp128.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/bf16-imm.ll (+8-8)
  • (modified) llvm/test/CodeGen/AArch64/bf16-instructions.ll (+3-3)
  • (modified) llvm/test/CodeGen/AArch64/bf16-v4-instructions.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/bf16.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/f16-imm.ll (+8-8)
  • (modified) llvm/test/CodeGen/AArch64/f16-instructions.ll (+3-3)
  • (modified) llvm/test/CodeGen/AArch64/fcopysign-noneon.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/fp16-v4-instructions.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/fp16-vector-nvcast.ll (+6-6)
  • (modified) llvm/test/CodeGen/AArch64/fp16_intrinsic_lane.ll (+15-15)
  • (modified) llvm/test/CodeGen/AArch64/fp16_intrinsic_scalar_1op.ll (+5-5)
  • (modified) llvm/test/CodeGen/AArch64/half.ll (+2-2)
  • (modified) llvm/test/CodeGen/AArch64/isinf.ll (+2-2)
  • (modified) llvm/test/CodeGen/AArch64/mattr-all.ll (+1-1)
  • (modified) llvm/test/CodeGen/AArch64/sve-pred-selectop3.ll (+6-6)
  • (modified) llvm/test/CodeGen/AArch64/vecreduce-fadd-legalization-strict.ll (+2-2)
  • (modified) llvm/test/CodeGen/AArch64/vecreduce-fadd-legalization.ll (+2-2)
  • (modified) llvm/test/CodeGen/AArch64/vecreduce-fmul-legalization-strict.ll (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/amdgpu-prelegalizer-combiner-crash.mir (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/combine-fcanonicalize.mir (+12-12)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/combine-fdiv-sqrt-to-rsq.mir (+11-11)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/combine-foldable-fneg.mir (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/combine-fsub-fneg.mir (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/combine-rsq.mir (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/irtranslate-bf16.ll (+6-6)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/irtranslator-atomicrmw.ll (+4-4)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/irtranslator-call.ll (+1-1)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fconstant.mir (+1-1)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fcos.mir (+8-8)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fdiv.mir (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fmaxnum.mir (+4-4)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fminnum.mir (+4-4)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-fsin.mir (+8-8)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-intrinsic-round.mir (+36-36)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-sitofp.mir (+4-4)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/legalize-uitofp.mir (+4-4)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/llvm.amdgcn.wqm.demote.ll (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/regbankcombiner-clamp-fmed3-const.mir (+5-5)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/regbankcombiner-clamp-minmax-const.mir (+20-20)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/regbankcombiner-fmed3-minmax-const.mir (+20-20)
  • (modified) llvm/test/CodeGen/AMDGPU/GlobalISel/regbankselect-default.mir (+1-1)
  • (modified) llvm/test/CodeGen/AMDGPU/amdgcn.bitcast.ll (+1-1)
  • (modified) llvm/test/CodeGen/AMDGPU/amdgpu-codegenprepare-fold-binop-select.ll (+1-1)
  • (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-pow.ll (+3-3)
  • (modified) llvm/test/CodeGen/AMDGPU/amdgpu-simplify-libcall-rootn.ll (+4-4)
  • (modified) llvm/test/CodeGen/AMDGPU/br_cc.f16.ll (+4-4)
  • (modified) llvm/test/CodeGen/AMDGPU/build-vector-insert-elt-infloop.ll (+1-1)
  • (modified) llvm/test/CodeGen/AMDGPU/dagcombine-fmul-sel.ll (+4-4)
  • (modified) llvm/test/CodeGen/AMDGPU/extract-subvector-16bit.ll (+6-6)
  • (modified) llvm/test/CodeGen/AMDGPU/fcanonicalize.f16.ll (+18-18)
  • (modified) llvm/test/CodeGen/AMDGPU/flat-offset-bug.ll (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/fma.f16.ll (+10-10)
  • (modified) llvm/test/CodeGen/AMDGPU/fmul-to-ldexp.ll (+1-1)
  • (modified) llvm/test/CodeGen/AMDGPU/fneg-combines.f16.ll (+3-3)
  • (modified) llvm/test/CodeGen/AMDGPU/fneg-combines.ll (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/fneg-combines.new.ll (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/fold-imm-f16-f32.mir (+11-11)
  • (modified) llvm/test/CodeGen/AMDGPU/fold-int-pow2-with-fmul-or-fdiv.ll (+7-7)
  • (modified) llvm/test/CodeGen/AMDGPU/fp-classify.ll (+3-3)
  • (modified) llvm/test/CodeGen/AMDGPU/fract-match.ll (+30-30)
  • (modified) llvm/test/CodeGen/AMDGPU/imm16.ll (+8-8)
  • (modified) llvm/test/CodeGen/AMDGPU/immv216.ll (+8-8)
  • (modified) llvm/test/CodeGen/AMDGPU/inline-constraints.ll (+7-7)
  • (modified) llvm/test/CodeGen/AMDGPU/insert_vector_elt.v2bf16.ll (+3-3)
  • (modified) llvm/test/CodeGen/AMDGPU/insert_vector_elt.v2i16.ll (+3-3)
  • (modified) llvm/test/CodeGen/AMDGPU/llvm.amdgcn.wqm.demote.ll (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/mad-mix.ll (+4-4)
  • (modified) llvm/test/CodeGen/AMDGPU/mai-inline.ll (+1-1)
  • (modified) llvm/test/CodeGen/AMDGPU/mixed-vmem-types.ll (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/multi-divergent-exit-region.ll (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/pack.v2f16.ll (+5-5)
  • (modified) llvm/test/CodeGen/AMDGPU/pk_max_f16_literal.ll (+10-10)
  • (modified) llvm/test/CodeGen/AMDGPU/private-memory-atomics.ll (+1-1)
  • (modified) llvm/test/CodeGen/AMDGPU/promote-alloca-vector-to-vector.ll (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/select-fabs-fneg-extract.f16.ll (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/select-fabs-fneg-extract.v2f16.ll (+2-2)
  • (modified) llvm/test/CodeGen/AMDGPU/select.f16.ll (+8-8)
  • (modified) llvm/test/CodeGen/AMDGPU/simplify-libcalls.ll (+2-2)
  • (modified) llvm/test/CodeGen/ARM/arm-half-promote.ll (+1-1)
  • (modified) llvm/test/CodeGen/ARM/armv8.2a-fp16-vector-intrinsics.ll (+4-4)
  • (modified) llvm/test/CodeGen/ARM/bf16-imm.ll (+2-2)
  • (modified) llvm/test/CodeGen/ARM/const-load-align-thumb.mir (+2-2)
  • (modified) llvm/test/CodeGen/ARM/constant-island-SOImm-limit16.mir (+2-2)
  • (modified) llvm/test/CodeGen/ARM/fp16-bitcast.ll (+2-2)
  • (modified) llvm/test/CodeGen/ARM/fp16-instructions.ll (+25-25)
  • (modified) llvm/test/CodeGen/ARM/fp16-litpool-arm.mir (+3-3)
  • (modified) llvm/test/CodeGen/ARM/fp16-litpool-thumb.mir (+3-3)
  • (modified) llvm/test/CodeGen/ARM/fp16-litpool2-arm.mir (+3-3)
  • (modified) llvm/test/CodeGen/ARM/fp16-litpool3-arm.mir (+3-3)
  • (modified) llvm/test/CodeGen/ARM/fp16-no-condition.ll (+2-2)
  • (modified) llvm/test/CodeGen/ARM/fp16-v3.ll (+1-1)
  • (modified) llvm/test/CodeGen/ARM/pr47454.ll (+1-1)
  • (modified) llvm/test/CodeGen/ARM/store_half.ll (+1-1)
  • (modified) llvm/test/CodeGen/ARM/vecreduce-fadd-legalization-soft-float.ll (+2-2)
  • (modified) llvm/test/CodeGen/ARM/vecreduce-fadd-legalization-strict.ll (+2-2)
  • (modified) llvm/test/CodeGen/ARM/vecreduce-fmul-legalization-soft-float.ll (+2-2)
  • (modified) llvm/test/CodeGen/ARM/vecreduce-fmul-legalization-strict.ll (+2-2)
  • (modified) llvm/test/CodeGen/DirectX/all.ll (+1-1)
  • (modified) llvm/test/CodeGen/DirectX/any.ll (+1-1)
  • (modified) llvm/test/CodeGen/DirectX/atan2.ll (+8-8)
  • (modified) llvm/test/CodeGen/DirectX/degrees.ll (+1-1)
  • (modified) llvm/test/CodeGen/DirectX/exp.ll (+1-1)
  • (modified) llvm/test/CodeGen/DirectX/log.ll (+1-1)
  • (modified) llvm/test/CodeGen/DirectX/log10.ll (+1-1)
  • (modified) llvm/test/CodeGen/DirectX/radians.ll (+5-5)
  • (modified) llvm/test/CodeGen/DirectX/sign.ll (+2-2)
  • (modified) llvm/test/CodeGen/DirectX/step.ll (+4-4)
  • (modified) llvm/test/CodeGen/DirectX/vector_reduce_add.ll (+5-5)
  • (modified) llvm/test/CodeGen/Hexagon/autohvx/calling-conv.ll (+2-2)
  • (modified) llvm/test/CodeGen/Hexagon/autohvx/hfinsert.ll (+1-1)
  • (modified) llvm/test/CodeGen/Hexagon/autohvx/hfnosplat_cp.ll (+1-1)
  • (modified) llvm/test/CodeGen/Hexagon/autohvx/hfsplat.ll (+3-3)
  • (modified) llvm/test/CodeGen/Hexagon/autohvx/isel-mstore-fp16.ll (+1-1)
  • (modified) llvm/test/CodeGen/LoongArch/vararg.ll (+2-2)
  • (modified) llvm/test/CodeGen/MIR/Generic/bfloat-immediates.mir (+3-3)
  • (modified) llvm/test/CodeGen/MIR/NVPTX/floating-point-invalid-type-error.mir (+2-2)
  • (modified) llvm/test/CodeGen/Mips/msa/fexuprl.ll (+1-1)
  • (modified) llvm/test/CodeGen/NVPTX/bf16-instructions.ll (+1-1)
  • (modified) llvm/test/CodeGen/NVPTX/bf16.ll (+1-1)
  • (modified) llvm/test/CodeGen/NVPTX/half.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/2008-05-01-ppc_fp128.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/2008-07-15-Fabs.ll (+7-7)
  • (modified) llvm/test/CodeGen/PowerPC/2008-07-17-Fneg.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/2008-10-28-UnprocessedNode.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/2008-10-28-f128-i32.ll (+4-4)
  • (modified) llvm/test/CodeGen/PowerPC/2008-12-02-LegalizeTypeAssert.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/aix-complex.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/builtins-ppc-p9-f128.ll (+8-8)
  • (modified) llvm/test/CodeGen/PowerPC/bv-widen-undef.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/complex-return.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/constant-pool.ll (+8-8)
  • (modified) llvm/test/CodeGen/PowerPC/ctrloop-fp128.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/disable-ctr-ppcf128.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/f128-aggregates.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/f128-arith.ll (+4-4)
  • (modified) llvm/test/CodeGen/PowerPC/f128-compare.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/f128-conv.ll (+6-6)
  • (modified) llvm/test/CodeGen/PowerPC/f128-fma.ll (+4-4)
  • (modified) llvm/test/CodeGen/PowerPC/f128-passByValue.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/f128-truncateNconv.ll (+4-4)
  • (modified) llvm/test/CodeGen/PowerPC/float-asmprint.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/float-load-store-pair.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/fminnum.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/fp-classify.ll (+3-3)
  • (modified) llvm/test/CodeGen/PowerPC/fp128-bitcast-after-operation.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/global-address-non-got-indirect-access.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/handle-f16-storage-type.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/ppc32-align-long-double-sf.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/ppc32-constant-BE-ppcf128.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/ppc32-skip-regs.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/ppc_fp128-bcwriter.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/ppcf128-2.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/ppcf128-4.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/ppcf128-endian.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/ppcf128-freeze.mir (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/ppcf128sf.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/pr15632.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/pr16556-2.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/pr16573.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/pzero-fp-xored.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/resolvefi-basereg.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/rs-undef-use.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/scalar-min-max-p10.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/std-unal-fi.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/vector-reduce-fadd.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/GlobalISel/fp128.ll (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/fp-constant-f16.mir (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/calling-conv-half.ll (+12-12)
  • (modified) llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/calling-conv-ilp32-ilp32f-ilp32d-common.ll (+10-10)
  • (modified) llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/calling-conv-lp64-lp64f-lp64d-common.ll (+6-6)
  • (modified) llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/splat_vector.ll (+12-12)
  • (modified) llvm/test/CodeGen/RISCV/calling-conv-ilp32-ilp32f-common.ll (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/calling-conv-ilp32-ilp32f-ilp32d-common.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/calling-conv-ilp32e.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/fp128.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/half-zfa-fli.ll (+7-7)
  • (modified) llvm/test/CodeGen/RISCV/stack-store-check.ll (+7-7)
  • (modified) llvm/test/CodeGen/RISCV/tail-calls.ll (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/vararg.ll (+1-1)
  • (modified) llvm/test/CodeGen/SPARC/fp128-select.ll (+2-2)
  • (modified) llvm/test/CodeGen/SPARC/fp128.ll (+2-2)
  • (modified) llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_subgroup_rotate/subgroup-rotate.ll (+1-1)
  • (modified) llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_uniform_group_instructions/uniform-group-instructions.ll (+2-2)
  • (modified) llvm/test/CodeGen/SPIRV/half_extension.ll (+1-1)
  • (modified) llvm/test/CodeGen/SPIRV/hlsl-intrinsics/rcp.ll (+4-4)
  • (modified) llvm/test/CodeGen/SPIRV/instructions/integer-casts.ll (+4-4)
  • (modified) llvm/test/CodeGen/SPIRV/pointers/OpExtInst-OpenCL_std-ptr-types.ll (+2-2)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/spec_const.ll (+1-1)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/sub_group_ballot.ll (+2-2)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/sub_group_clustered_reduce.ll (+4-4)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/sub_group_extended_types.ll (+2-2)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/sub_group_non_uniform_arithmetic.ll (+12-12)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/sub_group_non_uniform_vote.ll (+1-1)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/sub_group_shuffle.ll (+2-2)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/sub_group_shuffle_relative.ll (+2-2)
  • (modified) llvm/test/CodeGen/SystemZ/args-01.ll (+2-2)
  • (modified) llvm/test/CodeGen/SystemZ/args-02.ll (+2-2)
  • (modified) llvm/test/CodeGen/SystemZ/args-03.ll (+2-2)
  • (modified) llvm/test/CodeGen/SystemZ/asm-10.ll (+1-1)
  • (modified) llvm/test/CodeGen/SystemZ/asm-17.ll (+1-1)
  • (modified) llvm/test/CodeGen/SystemZ/asm-19.ll (+2-2)
  • (modified) llvm/test/CodeGen/SystemZ/call-03.ll (+1-1)
  • (modified) llvm/test/CodeGen/SystemZ/call-zos-01.ll (+4-4)
  • (modified) llvm/test/CodeGen/SystemZ/call-zos-vararg.ll (+2-2)
  • (modified) llvm/test/CodeGen/SystemZ/fp-cmp-03.ll (+1-1)
  • (modified) llvm/test/CodeGen/SystemZ/fp-cmp-04.ll (+1-1)
diff --git a/clang/test/C/C11/n1396.c b/clang/test/C/C11/n1396.c
index 6f76cfe9594961..264c69c733cb68 100644
--- a/clang/test/C/C11/n1396.c
+++ b/clang/test/C/C11/n1396.c
@@ -31,7 +31,7 @@
 // CHECK-X64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to x86_fp80
-// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], 0xK3FFF8000000000000000
+// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], f0x3FFF8000000000000000
 // CHECK-X64-NEXT:    [[CONV1:%.*]] = fptrunc x86_fp80 [[MUL]] to float
 // CHECK-X64-NEXT:    ret float [[CONV1]]
 //
@@ -42,7 +42,7 @@
 // CHECK-AARCH64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-AARCH64-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-AARCH64-NEXT:    ret float [[CONV1]]
 //
@@ -64,7 +64,7 @@
 // CHECK-PPC32-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC32-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC32-NEXT:    ret float [[CONV1]]
 //
@@ -75,7 +75,7 @@
 // CHECK-PPC64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC64-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC64-NEXT:    ret float [[CONV1]]
 //
@@ -86,7 +86,7 @@
 // CHECK-SPARCV9-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-SPARCV9-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-SPARCV9-NEXT:    ret float [[CONV1]]
 //
@@ -102,7 +102,7 @@ float extended_float_func(float x) {
 // CHECK-X64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to x86_fp80
-// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], 0xK3FFF8000000000000000
+// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], f0x3FFF8000000000000000
 // CHECK-X64-NEXT:    [[CONV1:%.*]] = fptrunc x86_fp80 [[MUL]] to float
 // CHECK-X64-NEXT:    ret float [[CONV1]]
 //
@@ -113,7 +113,7 @@ float extended_float_func(float x) {
 // CHECK-AARCH64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-AARCH64-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-AARCH64-NEXT:    ret float [[CONV1]]
 //
@@ -135,7 +135,7 @@ float extended_float_func(float x) {
 // CHECK-PPC32-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC32-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC32-NEXT:    ret float [[CONV1]]
 //
@@ -146,7 +146,7 @@ float extended_float_func(float x) {
 // CHECK-PPC64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC64-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC64-NEXT:    ret float [[CONV1]]
 //
@@ -157,7 +157,7 @@ float extended_float_func(float x) {
 // CHECK-SPARCV9-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-SPARCV9-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-SPARCV9-NEXT:    ret float [[CONV1]]
 //
@@ -173,7 +173,7 @@ float extended_float_func_cast(float x) {
 // CHECK-X64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to x86_fp80
-// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], 0xK3FFF8000000000000000
+// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], f0x3FFF8000000000000000
 // CHECK-X64-NEXT:    [[CONV1:%.*]] = fptrunc x86_fp80 [[MUL]] to float
 // CHECK-X64-NEXT:    ret float [[CONV1]]
 //
@@ -184,7 +184,7 @@ float extended_float_func_cast(float x) {
 // CHECK-AARCH64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-AARCH64-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-AARCH64-NEXT:    ret float [[CONV1]]
 //
@@ -206,7 +206,7 @@ float extended_float_func_cast(float x) {
 // CHECK-PPC32-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC32-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC32-NEXT:    ret float [[CONV1]]
 //
@@ -217,7 +217,7 @@ float extended_float_func_cast(float x) {
 // CHECK-PPC64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC64-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC64-NEXT:    ret float [[CONV1]]
 //
@@ -228,7 +228,7 @@ float extended_float_func_cast(float x) {
 // CHECK-SPARCV9-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-SPARCV9-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-SPARCV9-NEXT:    ret float [[CONV1]]
 //
@@ -244,7 +244,7 @@ float extended_double_func(float x) {
 // CHECK-X64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to x86_fp80
-// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], 0xK3FFF8000000000000000
+// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], f0x3FFF8000000000000000
 // CHECK-X64-NEXT:    [[CONV1:%.*]] = fptrunc x86_fp80 [[MUL]] to float
 // CHECK-X64-NEXT:    ret float [[CONV1]]
 //
@@ -255,7 +255,7 @@ float extended_double_func(float x) {
 // CHECK-AARCH64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-AARCH64-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-AARCH64-NEXT:    ret float [[CONV1]]
 //
@@ -277,7 +277,7 @@ float extended_double_func(float x) {
 // CHECK-PPC32-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC32-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC32-NEXT:    ret float [[CONV1]]
 //
@@ -288,7 +288,7 @@ float extended_double_func(float x) {
 // CHECK-PPC64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC64-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC64-NEXT:    ret float [[CONV1]]
 //
@@ -299,7 +299,7 @@ float extended_double_func(float x) {
 // CHECK-SPARCV9-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-SPARCV9-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-SPARCV9-NEXT:    ret float [[CONV1]]
 //
diff --git a/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c b/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c
index 9109626cea9ca2..2c87ce32b8811b 100644
--- a/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c
+++ b/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c
@@ -12,8 +12,8 @@
 #include <arm_fp16.h>
 
 // COMMON-LABEL: test_vceqzh_f16
-// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp oeq half %a, 0xH0000
-// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmp.f16(half %a, half 0xH0000, metadata !"oeq", metadata !"fpexcept.strict")
+// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp oeq half %a, f0x0000
+// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmp.f16(half %a, half f0x0000, metadata !"oeq", metadata !"fpexcept.strict")
 // COMMONIR:       [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // COMMONIR:       ret i16 [[TMP2]]
 uint16_t test_vceqzh_f16(float16_t a) {
@@ -21,8 +21,8 @@ uint16_t test_vceqzh_f16(float16_t a) {
 }
 
 // COMMON-LABEL: test_vcgezh_f16
-// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp oge half %a, 0xH0000
-// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half 0xH0000, metadata !"oge", metadata !"fpexcept.strict")
+// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp oge half %a, f0x0000
+// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half f0x0000, metadata !"oge", metadata !"fpexcept.strict")
 // COMMONIR:       [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // COMMONIR:       ret i16 [[TMP2]]
 uint16_t test_vcgezh_f16(float16_t a) {
@@ -30,8 +30,8 @@ uint16_t test_vcgezh_f16(float16_t a) {
 }
 
 // COMMON-LABEL: test_vcgtzh_f16
-// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp ogt half %a, 0xH0000
-// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half 0xH0000, metadata !"ogt", metadata !"fpexcept.strict")
+// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp ogt half %a, f0x0000
+// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half f0x0000, metadata !"ogt", metadata !"fpexcept.strict")
 // COMMONIR:       [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // COMMONIR:       ret i16 [[TMP2]]
 uint16_t test_vcgtzh_f16(float16_t a) {
@@ -39,8 +39,8 @@ uint16_t test_vcgtzh_f16(float16_t a) {
 }
 
 // COMMON-LABEL: test_vclezh_f16
-// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp ole half %a, 0xH0000
-// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half 0xH0000, metadata !"ole", metadata !"fpexcept.strict")
+// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp ole half %a, f0x0000
+// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half f0x0000, metadata !"ole", metadata !"fpexcept.strict")
 // COMMONIR:       [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // COMMONIR:       ret i16 [[TMP2]]
 uint16_t test_vclezh_f16(float16_t a) {
@@ -48,8 +48,8 @@ uint16_t test_vclezh_f16(float16_t a) {
 }
 
 // COMMON-LABEL: test_vcltzh_f16
-// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp olt half %a, 0xH0000
-// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half 0xH0000, metadata !"olt", metadata !"fpexcept.strict")
+// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp olt half %a, f0x0000
+// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half f0x0000, metadata !"olt", metadata !"fpexcept.strict")
 // COMMONIR:       [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // COMMONIR:       ret i16 [[TMP2]]
 uint16_t test_vcltzh_f16(float16_t a) {
diff --git a/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c b/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c
index 90ee74e459ebd4..27d60de792b074 100644
--- a/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c
+++ b/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c
@@ -15,7 +15,7 @@ float16_t test_vabsh_f16(float16_t a) {
 }
 
 // CHECK-LABEL: test_vceqzh_f16
-// CHECK:  [[TMP1:%.*]] = fcmp oeq half %a, 0xH0000
+// CHECK:  [[TMP1:%.*]] = fcmp oeq half %a, f0x0000
 // CHECK:  [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // CHECK:  ret i16 [[TMP2]]
 uint16_t test_vceqzh_f16(float16_t a) {
@@ -23,7 +23,7 @@ uint16_t test_vceqzh_f16(float16_t a) {
 }
 
 // CHECK-LABEL: test_vcgezh_f16
-// CHECK:  [[TMP1:%.*]] = fcmp oge half %a, 0xH0000
+// CHECK:  [[TMP1:%.*]] = fcmp oge half %a, f0x0000
 // CHECK:  [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // CHECK:  ret i16 [[TMP2]]
 uint16_t test_vcgezh_f16(float16_t a) {
@@ -31,7 +31,7 @@ uint16_t test_vcgezh_f16(float16_t a) {
 }
 
 // CHECK-LABEL: test_vcgtzh_f16
-// CHECK:  [[TMP1:%.*]] = fcmp ogt half %a, 0xH0000
+// CHECK:  [[TMP1:%.*]] = fcmp ogt half %a, f0x0000
 // CHECK:  [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // CHECK:  ret i16 [[TMP2]]
 uint16_t test_vcgtzh_f16(float16_t a) {
@@ -39,7 +39,7 @@ uint16_t test_vcgtzh_f16(float16_t a) {
 }
 
 // CHECK-LABEL: test_vclezh_f16
-// CHECK:  [[TMP1:%.*]] = fcmp ole half %a, 0xH0000
+// CHECK:  [[TMP1:%.*]] = fcmp ole half %a, f0x0000
 // CHECK:  [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // CHECK:  ret i16 [[TMP2]]
 uint16_t test_vclezh_f16(float16_t a) {
@@ -47,7 +47,7 @@ uint16_t test_vclezh_f16(float16_t a) {
 }
 
 // CHECK-LABEL: test_vcltzh_f16
-// CHECK:  [[TMP1:%.*]] = fcmp olt half %a, 0xH0000
+// CHECK:  [[TMP1:%.*]] = fcmp olt half %a, f0x0000
 // CHECK:  [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // CHECK:  ret i16 [[TMP2]]
 uint16_t test_vcltzh_f16(float16_t a) {
diff --git a/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c b/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c
index a8fb989b64de50..b6bbff0c742f89 100644
--- a/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c
+++ b/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c
@@ -191,7 +191,7 @@ double test_double_pre_inc()
 // SAFE-NEXT:  [[ENTRY:.*:]]
 // SAFE-NEXT:    [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
 // SAFE-NEXT:    [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// SAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half 0xH3C00 seq_cst, align 2
+// SAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half f0x3C00 seq_cst, align 2
 // SAFE-NEXT:    ret half [[TMP0]]
 //
 // UNSAFE-LABEL: define dso_local half @test__Float16_post_inc(
@@ -199,7 +199,7 @@ double test_double_pre_inc()
 // UNSAFE-NEXT:  [[ENTRY:.*:]]
 // UNSAFE-NEXT:    [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
 // UNSAFE-NEXT:    [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// UNSAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half 0xH3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
+// UNSAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half f0x3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
 // UNSAFE-NEXT:    ret half [[TMP0]]
 //
 _Float16 test__Float16_post_inc()
@@ -213,7 +213,7 @@ _Float16 test__Float16_post_inc()
 // SAFE-NEXT:  [[ENTRY:.*:]]
 // SAFE-NEXT:    [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
 // SAFE-NEXT:    [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// SAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half 0xH3C00 seq_cst, align 2
+// SAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half f0x3C00 seq_cst, align 2
 // SAFE-NEXT:    ret half [[TMP0]]
 //
 // UNSAFE-LABEL: define dso_local half @test__Float16_post_dc(
@@ -221,7 +221,7 @@ _Float16 test__Float16_post_inc()
 // UNSAFE-NEXT:  [[ENTRY:.*:]]
 // UNSAFE-NEXT:    [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
 // UNSAFE-NEXT:    [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// UNSAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half 0xH3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
+// UNSAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half f0x3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
 // UNSAFE-NEXT:    ret half [[TMP0]]
 //
 _Float16 test__Float16_post_dc()
@@ -235,8 +235,8 @@ _Float16 test__Float16_post_dc()
 // SAFE-NEXT:  [[ENTRY:.*:]]
 // SAFE-NEXT:    [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
 // SAFE-NEXT:    [[RETVAL_ASCAST...
[truncated]

llvmbot (Member) commented Jan 6, 2025

@llvm/pr-subscribers-clang

  • (modified) llvm/test/CodeGen/PowerPC/handle-f16-storage-type.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/ppc32-align-long-double-sf.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/ppc32-constant-BE-ppcf128.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/ppc32-skip-regs.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/ppc_fp128-bcwriter.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/ppcf128-2.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/ppcf128-4.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/ppcf128-endian.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/ppcf128-freeze.mir (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/ppcf128sf.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/pr15632.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/pr16556-2.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/pr16573.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/pzero-fp-xored.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/resolvefi-basereg.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/rs-undef-use.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/scalar-min-max-p10.ll (+2-2)
  • (modified) llvm/test/CodeGen/PowerPC/std-unal-fi.ll (+1-1)
  • (modified) llvm/test/CodeGen/PowerPC/vector-reduce-fadd.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/GlobalISel/fp128.ll (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/GlobalISel/instruction-select/fp-constant-f16.mir (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/calling-conv-half.ll (+12-12)
  • (modified) llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/calling-conv-ilp32-ilp32f-ilp32d-common.ll (+10-10)
  • (modified) llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/calling-conv-lp64-lp64f-lp64d-common.ll (+6-6)
  • (modified) llvm/test/CodeGen/RISCV/GlobalISel/irtranslator/splat_vector.ll (+12-12)
  • (modified) llvm/test/CodeGen/RISCV/calling-conv-ilp32-ilp32f-common.ll (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/calling-conv-ilp32-ilp32f-ilp32d-common.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/calling-conv-ilp32e.ll (+4-4)
  • (modified) llvm/test/CodeGen/RISCV/fp128.ll (+2-2)
  • (modified) llvm/test/CodeGen/RISCV/half-zfa-fli.ll (+7-7)
  • (modified) llvm/test/CodeGen/RISCV/stack-store-check.ll (+7-7)
  • (modified) llvm/test/CodeGen/RISCV/tail-calls.ll (+1-1)
  • (modified) llvm/test/CodeGen/RISCV/vararg.ll (+1-1)
  • (modified) llvm/test/CodeGen/SPARC/fp128-select.ll (+2-2)
  • (modified) llvm/test/CodeGen/SPARC/fp128.ll (+2-2)
  • (modified) llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_subgroup_rotate/subgroup-rotate.ll (+1-1)
  • (modified) llvm/test/CodeGen/SPIRV/extensions/SPV_KHR_uniform_group_instructions/uniform-group-instructions.ll (+2-2)
  • (modified) llvm/test/CodeGen/SPIRV/half_extension.ll (+1-1)
  • (modified) llvm/test/CodeGen/SPIRV/hlsl-intrinsics/rcp.ll (+4-4)
  • (modified) llvm/test/CodeGen/SPIRV/instructions/integer-casts.ll (+4-4)
  • (modified) llvm/test/CodeGen/SPIRV/pointers/OpExtInst-OpenCL_std-ptr-types.ll (+2-2)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/spec_const.ll (+1-1)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/sub_group_ballot.ll (+2-2)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/sub_group_clustered_reduce.ll (+4-4)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/sub_group_extended_types.ll (+2-2)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/sub_group_non_uniform_arithmetic.ll (+12-12)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/sub_group_non_uniform_vote.ll (+1-1)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/sub_group_shuffle.ll (+2-2)
  • (modified) llvm/test/CodeGen/SPIRV/transcoding/sub_group_shuffle_relative.ll (+2-2)
  • (modified) llvm/test/CodeGen/SystemZ/args-01.ll (+2-2)
  • (modified) llvm/test/CodeGen/SystemZ/args-02.ll (+2-2)
  • (modified) llvm/test/CodeGen/SystemZ/args-03.ll (+2-2)
  • (modified) llvm/test/CodeGen/SystemZ/asm-10.ll (+1-1)
  • (modified) llvm/test/CodeGen/SystemZ/asm-17.ll (+1-1)
  • (modified) llvm/test/CodeGen/SystemZ/asm-19.ll (+2-2)
  • (modified) llvm/test/CodeGen/SystemZ/call-03.ll (+1-1)
  • (modified) llvm/test/CodeGen/SystemZ/call-zos-01.ll (+4-4)
  • (modified) llvm/test/CodeGen/SystemZ/call-zos-vararg.ll (+2-2)
  • (modified) llvm/test/CodeGen/SystemZ/fp-cmp-03.ll (+1-1)
  • (modified) llvm/test/CodeGen/SystemZ/fp-cmp-04.ll (+1-1)
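
A quick key to the hunks that follow (a summary inferred from the hunks themselves, not text from the patch): the rewrites are mechanical, replacing each legacy per-type prefix with the uniform f0x bit-pattern form.

half       0xH3C00                              ->  f0x3C00
x86_fp80   0xK3FFF8000000000000000              ->  f0x3FFF8000000000000000
fp128      0xL00000000000000003FFF000000000000  ->  f0x3FFF0000000000000000000000000000
ppc_fp128  0xM3FF00000000000000000000000000000  ->  f0x00000000000000003FF0000000000000

For half and x86_fp80 only the prefix changes. For fp128, the legacy 0xL form listed the low 64 bits of the pattern first, while f0x spells the whole pattern most-significant digit first; ppc_fp128 likewise swaps its two 64-bit halves.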
diff --git a/clang/test/C/C11/n1396.c b/clang/test/C/C11/n1396.c
index 6f76cfe9594961..264c69c733cb68 100644
--- a/clang/test/C/C11/n1396.c
+++ b/clang/test/C/C11/n1396.c
@@ -31,7 +31,7 @@
 // CHECK-X64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to x86_fp80
-// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], 0xK3FFF8000000000000000
+// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], f0x3FFF8000000000000000
 // CHECK-X64-NEXT:    [[CONV1:%.*]] = fptrunc x86_fp80 [[MUL]] to float
 // CHECK-X64-NEXT:    ret float [[CONV1]]
 //
@@ -42,7 +42,7 @@
 // CHECK-AARCH64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-AARCH64-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-AARCH64-NEXT:    ret float [[CONV1]]
 //
@@ -64,7 +64,7 @@
 // CHECK-PPC32-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC32-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC32-NEXT:    ret float [[CONV1]]
 //
@@ -75,7 +75,7 @@
 // CHECK-PPC64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC64-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC64-NEXT:    ret float [[CONV1]]
 //
@@ -86,7 +86,7 @@
 // CHECK-SPARCV9-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-SPARCV9-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-SPARCV9-NEXT:    ret float [[CONV1]]
 //
@@ -102,7 +102,7 @@ float extended_float_func(float x) {
 // CHECK-X64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to x86_fp80
-// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], 0xK3FFF8000000000000000
+// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], f0x3FFF8000000000000000
 // CHECK-X64-NEXT:    [[CONV1:%.*]] = fptrunc x86_fp80 [[MUL]] to float
 // CHECK-X64-NEXT:    ret float [[CONV1]]
 //
@@ -113,7 +113,7 @@ float extended_float_func(float x) {
 // CHECK-AARCH64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-AARCH64-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-AARCH64-NEXT:    ret float [[CONV1]]
 //
@@ -135,7 +135,7 @@ float extended_float_func(float x) {
 // CHECK-PPC32-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC32-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC32-NEXT:    ret float [[CONV1]]
 //
@@ -146,7 +146,7 @@ float extended_float_func(float x) {
 // CHECK-PPC64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC64-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC64-NEXT:    ret float [[CONV1]]
 //
@@ -157,7 +157,7 @@ float extended_float_func(float x) {
 // CHECK-SPARCV9-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-SPARCV9-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-SPARCV9-NEXT:    ret float [[CONV1]]
 //
@@ -173,7 +173,7 @@ float extended_float_func_cast(float x) {
 // CHECK-X64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to x86_fp80
-// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], 0xK3FFF8000000000000000
+// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], f0x3FFF8000000000000000
 // CHECK-X64-NEXT:    [[CONV1:%.*]] = fptrunc x86_fp80 [[MUL]] to float
 // CHECK-X64-NEXT:    ret float [[CONV1]]
 //
@@ -184,7 +184,7 @@ float extended_float_func_cast(float x) {
 // CHECK-AARCH64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-AARCH64-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-AARCH64-NEXT:    ret float [[CONV1]]
 //
@@ -206,7 +206,7 @@ float extended_float_func_cast(float x) {
 // CHECK-PPC32-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC32-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC32-NEXT:    ret float [[CONV1]]
 //
@@ -217,7 +217,7 @@ float extended_float_func_cast(float x) {
 // CHECK-PPC64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC64-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC64-NEXT:    ret float [[CONV1]]
 //
@@ -228,7 +228,7 @@ float extended_float_func_cast(float x) {
 // CHECK-SPARCV9-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-SPARCV9-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-SPARCV9-NEXT:    ret float [[CONV1]]
 //
@@ -244,7 +244,7 @@ float extended_double_func(float x) {
 // CHECK-X64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-X64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to x86_fp80
-// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], 0xK3FFF8000000000000000
+// CHECK-X64-NEXT:    [[MUL:%.*]] = fmul x86_fp80 [[CONV]], f0x3FFF8000000000000000
 // CHECK-X64-NEXT:    [[CONV1:%.*]] = fptrunc x86_fp80 [[MUL]] to float
 // CHECK-X64-NEXT:    ret float [[CONV1]]
 //
@@ -255,7 +255,7 @@ float extended_double_func(float x) {
 // CHECK-AARCH64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-AARCH64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-AARCH64-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-AARCH64-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-AARCH64-NEXT:    ret float [[CONV1]]
 //
@@ -277,7 +277,7 @@ float extended_double_func(float x) {
 // CHECK-PPC32-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC32-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC32-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC32-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC32-NEXT:    ret float [[CONV1]]
 //
@@ -288,7 +288,7 @@ float extended_double_func(float x) {
 // CHECK-PPC64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC64-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC64-NEXT:    ret float [[CONV1]]
 //
@@ -299,7 +299,7 @@ float extended_double_func(float x) {
 // CHECK-SPARCV9-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-SPARCV9-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-SPARCV9-NEXT:    ret float [[CONV1]]
 //
diff --git a/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c b/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c
index 9109626cea9ca2..2c87ce32b8811b 100644
--- a/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c
+++ b/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c
@@ -12,8 +12,8 @@
 #include <arm_fp16.h>
 
 // COMMON-LABEL: test_vceqzh_f16
-// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp oeq half %a, 0xH0000
-// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmp.f16(half %a, half 0xH0000, metadata !"oeq", metadata !"fpexcept.strict")
+// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp oeq half %a, f0x0000
+// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmp.f16(half %a, half f0x0000, metadata !"oeq", metadata !"fpexcept.strict")
 // COMMONIR:       [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // COMMONIR:       ret i16 [[TMP2]]
 uint16_t test_vceqzh_f16(float16_t a) {
@@ -21,8 +21,8 @@ uint16_t test_vceqzh_f16(float16_t a) {
 }
 
 // COMMON-LABEL: test_vcgezh_f16
-// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp oge half %a, 0xH0000
-// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half 0xH0000, metadata !"oge", metadata !"fpexcept.strict")
+// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp oge half %a, f0x0000
+// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half f0x0000, metadata !"oge", metadata !"fpexcept.strict")
 // COMMONIR:       [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // COMMONIR:       ret i16 [[TMP2]]
 uint16_t test_vcgezh_f16(float16_t a) {
@@ -30,8 +30,8 @@ uint16_t test_vcgezh_f16(float16_t a) {
 }
 
 // COMMON-LABEL: test_vcgtzh_f16
-// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp ogt half %a, 0xH0000
-// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half 0xH0000, metadata !"ogt", metadata !"fpexcept.strict")
+// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp ogt half %a, f0x0000
+// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half f0x0000, metadata !"ogt", metadata !"fpexcept.strict")
 // COMMONIR:       [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // COMMONIR:       ret i16 [[TMP2]]
 uint16_t test_vcgtzh_f16(float16_t a) {
@@ -39,8 +39,8 @@ uint16_t test_vcgtzh_f16(float16_t a) {
 }
 
 // COMMON-LABEL: test_vclezh_f16
-// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp ole half %a, 0xH0000
-// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half 0xH0000, metadata !"ole", metadata !"fpexcept.strict")
+// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp ole half %a, f0x0000
+// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half f0x0000, metadata !"ole", metadata !"fpexcept.strict")
 // COMMONIR:       [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // COMMONIR:       ret i16 [[TMP2]]
 uint16_t test_vclezh_f16(float16_t a) {
@@ -48,8 +48,8 @@ uint16_t test_vclezh_f16(float16_t a) {
 }
 
 // COMMON-LABEL: test_vcltzh_f16
-// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp olt half %a, 0xH0000
-// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half 0xH0000, metadata !"olt", metadata !"fpexcept.strict")
+// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp olt half %a, f0x0000
+// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half f0x0000, metadata !"olt", metadata !"fpexcept.strict")
 // COMMONIR:       [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // COMMONIR:       ret i16 [[TMP2]]
 uint16_t test_vcltzh_f16(float16_t a) {
diff --git a/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c b/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c
index 90ee74e459ebd4..27d60de792b074 100644
--- a/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c
+++ b/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c
@@ -15,7 +15,7 @@ float16_t test_vabsh_f16(float16_t a) {
 }
 
 // CHECK-LABEL: test_vceqzh_f16
-// CHECK:  [[TMP1:%.*]] = fcmp oeq half %a, 0xH0000
+// CHECK:  [[TMP1:%.*]] = fcmp oeq half %a, f0x0000
 // CHECK:  [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // CHECK:  ret i16 [[TMP2]]
 uint16_t test_vceqzh_f16(float16_t a) {
@@ -23,7 +23,7 @@ uint16_t test_vceqzh_f16(float16_t a) {
 }
 
 // CHECK-LABEL: test_vcgezh_f16
-// CHECK:  [[TMP1:%.*]] = fcmp oge half %a, 0xH0000
+// CHECK:  [[TMP1:%.*]] = fcmp oge half %a, f0x0000
 // CHECK:  [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // CHECK:  ret i16 [[TMP2]]
 uint16_t test_vcgezh_f16(float16_t a) {
@@ -31,7 +31,7 @@ uint16_t test_vcgezh_f16(float16_t a) {
 }
 
 // CHECK-LABEL: test_vcgtzh_f16
-// CHECK:  [[TMP1:%.*]] = fcmp ogt half %a, 0xH0000
+// CHECK:  [[TMP1:%.*]] = fcmp ogt half %a, f0x0000
 // CHECK:  [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // CHECK:  ret i16 [[TMP2]]
 uint16_t test_vcgtzh_f16(float16_t a) {
@@ -39,7 +39,7 @@ uint16_t test_vcgtzh_f16(float16_t a) {
 }
 
 // CHECK-LABEL: test_vclezh_f16
-// CHECK:  [[TMP1:%.*]] = fcmp ole half %a, 0xH0000
+// CHECK:  [[TMP1:%.*]] = fcmp ole half %a, f0x0000
 // CHECK:  [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // CHECK:  ret i16 [[TMP2]]
 uint16_t test_vclezh_f16(float16_t a) {
@@ -47,7 +47,7 @@ uint16_t test_vclezh_f16(float16_t a) {
 }
 
 // CHECK-LABEL: test_vcltzh_f16
-// CHECK:  [[TMP1:%.*]] = fcmp olt half %a, 0xH0000
+// CHECK:  [[TMP1:%.*]] = fcmp olt half %a, f0x0000
 // CHECK:  [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // CHECK:  ret i16 [[TMP2]]
 uint16_t test_vcltzh_f16(float16_t a) {
diff --git a/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c b/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c
index a8fb989b64de50..b6bbff0c742f89 100644
--- a/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c
+++ b/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c
@@ -191,7 +191,7 @@ double test_double_pre_inc()
 // SAFE-NEXT:  [[ENTRY:.*:]]
 // SAFE-NEXT:    [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
 // SAFE-NEXT:    [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// SAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half 0xH3C00 seq_cst, align 2
+// SAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half f0x3C00 seq_cst, align 2
 // SAFE-NEXT:    ret half [[TMP0]]
 //
 // UNSAFE-LABEL: define dso_local half @test__Float16_post_inc(
@@ -199,7 +199,7 @@ double test_double_pre_inc()
 // UNSAFE-NEXT:  [[ENTRY:.*:]]
 // UNSAFE-NEXT:    [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
 // UNSAFE-NEXT:    [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// UNSAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half 0xH3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
+// UNSAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half f0x3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
 // UNSAFE-NEXT:    ret half [[TMP0]]
 //
 _Float16 test__Float16_post_inc()
@@ -213,7 +213,7 @@ _Float16 test__Float16_post_inc()
 // SAFE-NEXT:  [[ENTRY:.*:]]
 // SAFE-NEXT:    [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
 // SAFE-NEXT:    [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// SAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half 0xH3C00 seq_cst, align 2
+// SAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half f0x3C00 seq_cst, align 2
 // SAFE-NEXT:    ret half [[TMP0]]
 //
 // UNSAFE-LABEL: define dso_local half @test__Float16_post_dc(
@@ -221,7 +221,7 @@ _Float16 test__Float16_post_inc()
 // UNSAFE-NEXT:  [[ENTRY:.*:]]
 // UNSAFE-NEXT:    [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
 // UNSAFE-NEXT:    [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// UNSAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half 0xH3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
+// UNSAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half f0x3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
 // UNSAFE-NEXT:    ret half [[TMP0]]
 //
 _Float16 test__Float16_post_dc()
@@ -235,8 +235,8 @@ _Float16 test__Float16_post_dc()
 // SAFE-NEXT:  [[ENTRY:.*:]]
 // SAFE-NEXT:    [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
 // SAFE-NEXT:    [[RETVAL_ASCAST...
[truncated]
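
To make the new literal forms concrete, here is a minimal sketch of IR the patched parser is expected to accept, going by the summary above; the function and value names are illustrative, not taken from the patch:

define double @specials(double %x) {
  ; C-style hexadecimal literal: 0x1.fp13 = 1.9375 * 2^13 = 15872.0
  %a = fadd double %x, 0x1.fp13
  ; signed special value
  %b = fadd double %a, +inf
  ; NaN with an explicit payload
  %c = fadd double %b, +nan(0x1)
  ret double %c
}

define half @bitpattern(half %h) {
  ; f0x takes the raw bit pattern directly, with no per-type prefix letter;
  ; f0x3C00 is half 1.0, matching the hunks above
  %r = fmul half %h, f0x3C00
  ret half %r
}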

@llvmbot
Member

llvmbot commented Jan 6, 2025

@llvm/pr-subscribers-backend-amdgpu

Author: Joshua Cranmer (jcranmer-intel)

 // CHECK-PPC32-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC32-NEXT:    ret float [[CONV1]]
 //
@@ -288,7 +288,7 @@ float extended_double_func(float x) {
 // CHECK-PPC64-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-PPC64-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to ppc_fp128
-// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], 0xM3FF00000000000000000000000000000
+// CHECK-PPC64-NEXT:    [[MUL:%.*]] = fmul ppc_fp128 [[CONV]], f0x00000000000000003FF0000000000000
 // CHECK-PPC64-NEXT:    [[CONV1:%.*]] = fptrunc ppc_fp128 [[MUL]] to float
 // CHECK-PPC64-NEXT:    ret float [[CONV1]]
 //
@@ -299,7 +299,7 @@ float extended_double_func(float x) {
 // CHECK-SPARCV9-NEXT:    store float [[X]], ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[TMP0:%.*]] = load float, ptr [[X_ADDR]], align 4
 // CHECK-SPARCV9-NEXT:    [[CONV:%.*]] = fpext float [[TMP0]] to fp128
-// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], 0xL00000000000000003FFF000000000000
+// CHECK-SPARCV9-NEXT:    [[MUL:%.*]] = fmul fp128 [[CONV]], f0x3FFF0000000000000000000000000000
 // CHECK-SPARCV9-NEXT:    [[CONV1:%.*]] = fptrunc fp128 [[MUL]] to float
 // CHECK-SPARCV9-NEXT:    ret float [[CONV1]]
 //
diff --git a/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c b/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c
index 9109626cea9ca2..2c87ce32b8811b 100644
--- a/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c
+++ b/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics-constrained.c
@@ -12,8 +12,8 @@
 #include <arm_fp16.h>
 
 // COMMON-LABEL: test_vceqzh_f16
-// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp oeq half %a, 0xH0000
-// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmp.f16(half %a, half 0xH0000, metadata !"oeq", metadata !"fpexcept.strict")
+// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp oeq half %a, f0x0000
+// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmp.f16(half %a, half f0x0000, metadata !"oeq", metadata !"fpexcept.strict")
 // COMMONIR:       [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // COMMONIR:       ret i16 [[TMP2]]
 uint16_t test_vceqzh_f16(float16_t a) {
@@ -21,8 +21,8 @@ uint16_t test_vceqzh_f16(float16_t a) {
 }
 
 // COMMON-LABEL: test_vcgezh_f16
-// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp oge half %a, 0xH0000
-// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half 0xH0000, metadata !"oge", metadata !"fpexcept.strict")
+// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp oge half %a, f0x0000
+// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half f0x0000, metadata !"oge", metadata !"fpexcept.strict")
 // COMMONIR:       [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // COMMONIR:       ret i16 [[TMP2]]
 uint16_t test_vcgezh_f16(float16_t a) {
@@ -30,8 +30,8 @@ uint16_t test_vcgezh_f16(float16_t a) {
 }
 
 // COMMON-LABEL: test_vcgtzh_f16
-// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp ogt half %a, 0xH0000
-// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half 0xH0000, metadata !"ogt", metadata !"fpexcept.strict")
+// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp ogt half %a, f0x0000
+// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half f0x0000, metadata !"ogt", metadata !"fpexcept.strict")
 // COMMONIR:       [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // COMMONIR:       ret i16 [[TMP2]]
 uint16_t test_vcgtzh_f16(float16_t a) {
@@ -39,8 +39,8 @@ uint16_t test_vcgtzh_f16(float16_t a) {
 }
 
 // COMMON-LABEL: test_vclezh_f16
-// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp ole half %a, 0xH0000
-// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half 0xH0000, metadata !"ole", metadata !"fpexcept.strict")
+// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp ole half %a, f0x0000
+// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half f0x0000, metadata !"ole", metadata !"fpexcept.strict")
 // COMMONIR:       [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // COMMONIR:       ret i16 [[TMP2]]
 uint16_t test_vclezh_f16(float16_t a) {
@@ -48,8 +48,8 @@ uint16_t test_vclezh_f16(float16_t a) {
 }
 
 // COMMON-LABEL: test_vcltzh_f16
-// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp olt half %a, 0xH0000
-// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half 0xH0000, metadata !"olt", metadata !"fpexcept.strict")
+// UNCONSTRAINED:  [[TMP1:%.*]] = fcmp olt half %a, f0x0000
+// CONSTRAINED:    [[TMP1:%.*]] = call i1 @llvm.experimental.constrained.fcmps.f16(half %a, half f0x0000, metadata !"olt", metadata !"fpexcept.strict")
 // COMMONIR:       [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // COMMONIR:       ret i16 [[TMP2]]
 uint16_t test_vcltzh_f16(float16_t a) {
diff --git a/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c b/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c
index 90ee74e459ebd4..27d60de792b074 100644
--- a/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c
+++ b/clang/test/CodeGen/AArch64/v8.2a-fp16-intrinsics.c
@@ -15,7 +15,7 @@ float16_t test_vabsh_f16(float16_t a) {
 }
 
 // CHECK-LABEL: test_vceqzh_f16
-// CHECK:  [[TMP1:%.*]] = fcmp oeq half %a, 0xH0000
+// CHECK:  [[TMP1:%.*]] = fcmp oeq half %a, f0x0000
 // CHECK:  [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // CHECK:  ret i16 [[TMP2]]
 uint16_t test_vceqzh_f16(float16_t a) {
@@ -23,7 +23,7 @@ uint16_t test_vceqzh_f16(float16_t a) {
 }
 
 // CHECK-LABEL: test_vcgezh_f16
-// CHECK:  [[TMP1:%.*]] = fcmp oge half %a, 0xH0000
+// CHECK:  [[TMP1:%.*]] = fcmp oge half %a, f0x0000
 // CHECK:  [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // CHECK:  ret i16 [[TMP2]]
 uint16_t test_vcgezh_f16(float16_t a) {
@@ -31,7 +31,7 @@ uint16_t test_vcgezh_f16(float16_t a) {
 }
 
 // CHECK-LABEL: test_vcgtzh_f16
-// CHECK:  [[TMP1:%.*]] = fcmp ogt half %a, 0xH0000
+// CHECK:  [[TMP1:%.*]] = fcmp ogt half %a, f0x0000
 // CHECK:  [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // CHECK:  ret i16 [[TMP2]]
 uint16_t test_vcgtzh_f16(float16_t a) {
@@ -39,7 +39,7 @@ uint16_t test_vcgtzh_f16(float16_t a) {
 }
 
 // CHECK-LABEL: test_vclezh_f16
-// CHECK:  [[TMP1:%.*]] = fcmp ole half %a, 0xH0000
+// CHECK:  [[TMP1:%.*]] = fcmp ole half %a, f0x0000
 // CHECK:  [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // CHECK:  ret i16 [[TMP2]]
 uint16_t test_vclezh_f16(float16_t a) {
@@ -47,7 +47,7 @@ uint16_t test_vclezh_f16(float16_t a) {
 }
 
 // CHECK-LABEL: test_vcltzh_f16
-// CHECK:  [[TMP1:%.*]] = fcmp olt half %a, 0xH0000
+// CHECK:  [[TMP1:%.*]] = fcmp olt half %a, f0x0000
 // CHECK:  [[TMP2:%.*]] = sext i1 [[TMP1]] to i16
 // CHECK:  ret i16 [[TMP2]]
 uint16_t test_vcltzh_f16(float16_t a) {
diff --git a/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c b/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c
index a8fb989b64de50..b6bbff0c742f89 100644
--- a/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c
+++ b/clang/test/CodeGen/AMDGPU/amdgpu-atomic-float.c
@@ -191,7 +191,7 @@ double test_double_pre_inc()
 // SAFE-NEXT:  [[ENTRY:.*:]]
 // SAFE-NEXT:    [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
 // SAFE-NEXT:    [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// SAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half 0xH3C00 seq_cst, align 2
+// SAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half f0x3C00 seq_cst, align 2
 // SAFE-NEXT:    ret half [[TMP0]]
 //
 // UNSAFE-LABEL: define dso_local half @test__Float16_post_inc(
@@ -199,7 +199,7 @@ double test_double_pre_inc()
 // UNSAFE-NEXT:  [[ENTRY:.*:]]
 // UNSAFE-NEXT:    [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
 // UNSAFE-NEXT:    [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// UNSAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half 0xH3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
+// UNSAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fadd ptr addrspacecast (ptr addrspace(1) @test__Float16_post_inc.n to ptr), half f0x3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
 // UNSAFE-NEXT:    ret half [[TMP0]]
 //
 _Float16 test__Float16_post_inc()
@@ -213,7 +213,7 @@ _Float16 test__Float16_post_inc()
 // SAFE-NEXT:  [[ENTRY:.*:]]
 // SAFE-NEXT:    [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
 // SAFE-NEXT:    [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// SAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half 0xH3C00 seq_cst, align 2
+// SAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half f0x3C00 seq_cst, align 2
 // SAFE-NEXT:    ret half [[TMP0]]
 //
 // UNSAFE-LABEL: define dso_local half @test__Float16_post_dc(
@@ -221,7 +221,7 @@ _Float16 test__Float16_post_inc()
 // UNSAFE-NEXT:  [[ENTRY:.*:]]
 // UNSAFE-NEXT:    [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
 // UNSAFE-NEXT:    [[RETVAL_ASCAST:%.*]] = addrspacecast ptr addrspace(5) [[RETVAL]] to ptr
-// UNSAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half 0xH3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
+// UNSAFE-NEXT:    [[TMP0:%.*]] = atomicrmw fsub ptr addrspacecast (ptr addrspace(1) @test__Float16_post_dc.n to ptr), half f0x3C00 seq_cst, align 2, !amdgpu.no.fine.grained.memory [[META3]]
 // UNSAFE-NEXT:    ret half [[TMP0]]
 //
 _Float16 test__Float16_post_dc()
@@ -235,8 +235,8 @@ _Float16 test__Float16_post_dc()
 // SAFE-NEXT:  [[ENTRY:.*:]]
 // SAFE-NEXT:    [[RETVAL:%.*]] = alloca half, align 2, addrspace(5)
 // SAFE-NEXT:    [[RETVAL_ASCAST...
[truncated]
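
For readers skimming the truncated patch: the mechanical change in all of these tests is the move from per-type prefix letters to the unified f0x bit-pattern spelling. A minimal sketch (the f0x form and the bit patterns are taken from the diff above; the function itself is illustrative):

; Legacy bit-pattern literals encode the type in a prefix letter:
;   half       0xH3C00
;   x86_fp80   0xK3FFF8000000000000000
;   fp128      0xL00000000000000003FFF000000000000
;   ppc_fp128  0xM3FF00000000000000000000000000000
; The new form spells the raw bits uniformly and takes the type from
; context; note that for fp128/ppc_fp128 the legacy forms also used a
; different word order, as visible in the diff:
define half @one() {
  ret half f0x3C00   ; bit pattern 0x3C00, i.e. 1.0 as an IEEE half
}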

@llvmbot
Member

llvmbot commented Jan 6, 2025

@llvm/pr-subscribers-backend-aarch64



github-actions bot commented Jan 6, 2025

✅ With the latest revision this PR passed the C/C++ code formatter.

@jcranmer-intel jcranmer-intel changed the title [AsmParser] Revamp how floating-point literals in LLVM IR. [IR][AsmParser] Revamp how floating-point literals in LLVM IR. Jan 7, 2025
   TokStart[1] == '0' && TokStart[2] == 'x' &&
   isxdigit(static_cast<unsigned char>(TokStart[3]))) {
-  int len = CurPtr-TokStart-3;
+  bool IsFloatConst = TokStart[0] == 'f';
+  int len = CurPtr - TokStart - 3;
Member

I know it's from the old code, but since we're changing it anyway, could you make it Len? Also, why int rather than unsigned or size_t?
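
One way to address both points, sketched (size_t is an assumption here; the actual follow-up may pick a different type):

bool IsFloatConst = TokStart[0] == 'f';
size_t Len = CurPtr - TokStart - 3;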

  }
  case lltok::FloatHexLiteral: {
    assert(ExpectedTy && "Need type to parse float values");
    auto &Semantics = ExpectedTy->getFltSemantics();
Member

nit: const auto &

Contributor

@nikic nikic left a comment

As you still support the legacy format, could you please restrict this PR to only the parser changes, and leave the printer changes (and the mass test update they require) to a followup?

@jcranmer-intel
Contributor Author

As you still support the legacy format, could you please restrict this PR to only the parser changes, and leave the printer changes (and the mass test update they require) to a followup?

Sure, I can do that. I made them two separate commits partly for that reason.

@jayfoad
Contributor

jayfoad commented Jan 7, 2025

[IR][AsmParser] Revamp how floating-point literals in LLVM IR.

"how floating-point literals" doesn't read right to me - is there a word missing?

@jcranmer-intel jcranmer-intel changed the title [IR][AsmParser] Revamp how floating-point literals in LLVM IR. [IR][AsmParser] Revamp how floating-point literals work in LLVM IR. Jan 8, 2025
@lei137
Contributor

lei137 commented Jan 9, 2025

My build on Linux PPC failed with:

~/llvm-project/clang/lib/CodeGen/CodeGenFunction.cpp:2089:11: error: enumeration value 'SpellingNotCalculated' not handled in switch [-Werror,-Wswitch]
 2089 |   switch (HLSLControlFlowAttr) {
      |           ^~~~~~~~~~~~~~~~~~~
1 error generated.

| | required, as is one or more leading digits before |
| | the decimal point. |
+---------------+---------------------------------------------------+
| ``-0x1.fp13`` | Common hexadecimal literal. Signs are optional. |
Contributor

Should we switch the default syntax to hex float? Not as part of this PR, it would be more disruptive

Contributor Author

This was discussed on the Discourse RFC for this change, but there wasn't a clear consensus on doing so.

I did start working on a follow-up PR to extend the smart output to more cases, but while going through and manually fixing the many broken tests that can't be automatically updated, I found that the current logic leaves a lot to be desired.

This adds support for the following kinds of formats:
* Hexadecimal literals like 0x1.fp13
* Special values +inf/-inf, +qnan/-qnan
* NaN values with payloads like +nan(0x1)

Additionally, the floating-point hexadecimal format that records the
bitpattern exactly no longer requires the 0xL or 0xK or similar code for
the floating-point type. This format is removed from the documentation,
but is still supported as a legacy format in the parser.
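
For illustration, a sketch of the literal forms being added (global names and values are made up; the f0x exact-bits spelling is the one visible in the test diffs above):

@a = global double 0x1.fp13     ; hexadecimal literal, 1.9375 * 2^13
@b = global float  +inf         ; positive infinity
@c = global double -qnan        ; negative preferred quiet NaN
@d = global half   +nan(0x1)    ; quiet NaN with payload 1
@e = global half   f0x3C00      ; exact bit pattern, 1.0 in IEEE half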
@jcranmer-intel
Contributor Author

While working on other changes, I noticed that APFloat::convertFromString wasn't working the way I thought it did: it can only produce an sNaN value if the string begins with snan. So I changed the IR syntax from the original proposal to distinguish between qNaN and sNaN using nan/snan rather than the high bit of the payload.
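
A small sketch of that behavior against the public APFloat API (a minimal example, assuming the usual headers; the accepted string forms are defined by convertFromString):

#include "llvm/ADT/APFloat.h"
#include "llvm/Support/Error.h"
#include <cassert>
using namespace llvm;

APFloat Q(APFloat::IEEEsingle());
cantFail(Q.convertFromString("nan(0x1)", APFloat::rmNearestTiesToEven));
assert(!Q.isSignaling());   // nan(...) only ever yields a quiet NaN

APFloat S(APFloat::IEEEsingle());
cantFail(S.convertFromString("snan(0x1)", APFloat::rmNearestTiesToEven));
assert(S.isSignaling());    // only a leading "snan" yields a signaling NaN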

Contributor

@jayfoad jayfoad left a comment

Additionally, the floating-point hexadecimal format that records the bitpattern exactly no longer requires the 0xL or 0xK or similar code for the floating-point type. This format is removed from the documentation, but is still supported as a legacy format in the parser.

Personally I'd prefer that anything that's supported is documented, even if it is documented as deprecated.

Comment on lines +4602 to +4603
using the default rounding mode (round to nearest, half to even). String
conversions that underflow to 0 or overflow to infinity are not permitted.
Contributor

Allowing rounding seems nice. Any particular reason not to allow overflow/underflow? Just being conservative?

Contributor Author

My main motivation was to keep the parsing code as strict as possible, so that if you saw a constant in the code, you could be certain you knew what it was. Despite the existing documentation, we already allow inexact conversions from decimal strings to double (we check for exactness on conversion of the resulting double to the actual type, though).

There's an argument to be made for allowing 0.1 as a constant, even if we didn't already allow it; I don't see a strong argument for allowing 1e99999 or 1e-99999 when we already have easy syntax for infinity.
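
Concretely, under the rules as described (a sketch, not output from the patch):

@ok = global double 0.1        ; accepted: rounded to the nearest double
; @no = global double 1e99999  ; rejected: would overflow to infinity
@up = global double +inf       ; the explicit spelling to use instead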

Contributor

I always thought the parser behavior was ensuring the constant was exact, with no rounding. If it wasn't doing that, I'd consider it a bug, and we shouldn't be more lax.

@@ -4608,31 +4610,40 @@ Simple Constants
The identifier '``none``' is recognized as an empty token constant
and must be of :ref:`token type <t_token>`.

The one non-intuitive notation for constants is the hexadecimal form of
floating-point constants. For example, the form
'``double 0x432ff973cafa8000``' is equivalent to (but harder to read
Contributor

Are you removing this old double 0x432ff973cafa8000 syntax? So is this change not backwards compatible?

Contributor Author

The original parsing code is still being kept, which is why this patch can go in without a few hundred test changes.

I removed the documentation partly because the format is deprecated, partly because the documentation is flat-out wrong, and partly because describing the correct behavior is annoying (e.g., 0xM and 0xL don't work the way the documentation suggests).
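
For reference, the legacy prefixes encode the type in the letter: 0xH (half), 0xR (bfloat), 0xK (x86_fp80), 0xL (fp128), and 0xM (ppc_fp128). A minimal example, using the spelling that also appears in the diffs above:

@h = global half 0xH3C00    ; legacy exact-bits form of 1.0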

| | as hexadecimal (not including the quiet bit as |
| | part of the payload). The sign is required. |
+----------------+---------------------------------------------------+
| ``+snan(0x1)`` | sNaN value with a particular payload, specified |
Contributor

Why do nan and snan take an explicit payload but qnan does not?

Contributor Author

The original proposal was qnan for the preferred qNaN, and nan(...) for every other NaN value. I discovered last night that APFloat::convertFromString didn't allow nan(...) to produce an sNaN value, and after staring at the IEEE 754 and C23 specifications for a bit to see what they want for string-to-NaN conversions, I concluded that it was better to explicitly call out sNaNs with an snan(...) string than to have nan(...) produce them.

There's not much keeping qnan from having a payload parameter, except that APFloat::convertFromString doesn't support it. That's changeable, but I noticed that the IEEE 754 specification never uses qnan for a qNaN string, so it doesn't feel entirely right to me to change APFloat::convertFromString to allow it.

FWIW, I also expect that virtually every NaN in practice ends up being +qnan or -qnan anyway.
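
To make the distinction concrete (a sketch of the syntax as discussed in this thread):

@q = global float +qnan       ; preferred quiet NaN; no payload parameter
@n = global float -nan(0x2)   ; quiet NaN with payload 2
@s = global float +snan(0x2)  ; signaling NaN with payload 2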

Contributor

Should we spell out that this assumes the 2008 sNaN quiet-bit pattern?

@jcranmer-intel
Contributor Author

Review ping.

const auto &Semantics = ExpectedTy->getFltSemantics();
const APInt &Bits = Lex.getAPSIntVal();
if (APFloat::getSizeInBits(Semantics) != Bits.getBitWidth())
return error(ID.Loc, "float hex literal has incorrect number of bits");
Contributor

Add a test for this case? The one place the message appears seems to be an accidental change in a MIR test.
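
A hypothetical negative test for that diagnostic (file contents made up):

; RUN: not llvm-as < %s 2>&1 | FileCheck %s
; CHECK: float hex literal has incorrect number of bits
@x = global half f0x3C000000    ; 32 bits supplied for a 16-bit type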

// literals. Underflow is thrown when the result is denormal, so to allow
// denormals, only reject underflowing literals that resulted in a zero.
if (*Except & APFloat::opOverflow)
return error(ID.Loc, "floating point constant overflowed type");
Contributor

I think the prevailing spelling hyphenates floating-point.

EXPECT_TRUE(
cast<ConstantFP>(V)->isExactlyValue(APFloat::getNaN(Float, false, 1)));
EXPECT_TRUE(!cast<ConstantFP>(V)->getValue().isSignaling());

Contributor

Test with the degenerate FP types? ppc / x86, maybe bfloat?
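
A sketch of what that extra coverage might look like, mirroring the assertions above (assumes V was parsed from something like "x86_fp80 +nan(0x1)", as in the surrounding test):

EXPECT_TRUE(cast<ConstantFP>(V)->isExactlyValue(
    APFloat::getNaN(APFloat::x87DoubleExtended(), /*Negative=*/false, 1)));
EXPECT_FALSE(cast<ConstantFP>(V)->getValue().isSignaling());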

ASSERT_TRUE(V);
EXPECT_TRUE(V->getType()->isFP128Ty());
ASSERT_TRUE(isa<ConstantFP>(V));
EXPECT_TRUE(cast<ConstantFP>(V)->isExactlyValue(-0.0));
Contributor

I thought we had an isNegZero helper now
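
Presumably one of these, both of which I believe exist in tree:

EXPECT_TRUE(cast<ConstantFP>(V)->isNegativeZeroValue());   // Constant helper
EXPECT_TRUE(cast<ConstantFP>(V)->getValue().isNegZero());  // APFloat helper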
