This repository was archived by the owner on Feb 21, 2026. It is now read-only.
14 changes: 10 additions & 4 deletions clang/include/clang/CIR/Dialect/IR/CIROps.td
@@ -580,6 +580,12 @@ def CIR_AllocaOp : CIR_Op<"alloca", [
The presence of the `const` attribute indicates that the local variable is
declared with C/C++ `const` keyword.

The presence of the `tmp` attribute indicates that the allocation
represents a compiler-generated temporary (e.g., one created for a
MaterializeTemporaryExpr, an aggregate temporary, or a reference-binding
temporary). The lifetime checker uses it to identify temporaries that may
be skipped for certain analyses.

The `dynAllocSize` specifies the size to dynamically allocate on the stack
and ignores the allocation size based on the original type. This is useful
when handling VLAs and is omitted when declaring regular local variables.
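For illustration, an alloca carrying the new flag might print roughly as follows (a sketch only; the exact spelling is defined by the custom parser below, and the type aliases are assumptions):

```mlir
%0 = cir.alloca !s32i, !cir.ptr<!s32i>, ["ref.tmp0", init, tmp] {alignment = 4 : i64}
```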
@@ -604,6 +610,7 @@ def CIR_AllocaOp : CIR_Op<"alloca", [
StrAttr:$name,
UnitAttr:$init,
UnitAttr:$constant,
UnitAttr:$tmp,
ConfinedAttr<OptionalAttr<I64Attr>, [IntMinValue<0>]>:$alignment,
OptionalAttr<ArrayAttr>:$annotations,
OptionalAttr<ASTVarDeclInterface>:$ast
@@ -637,13 +644,12 @@ def CIR_AllocaOp : CIR_Op<"alloca", [
bool isDynamic() { return (bool)getDynAllocSize(); }
}];

// Custom parse/print allows flags in any order and supports tmp-only or
// init/const-only spellings, which the compact format can't express.
let assemblyFormat = [{
$allocaType `,` qualified(type($addr)) `,`
($dynAllocSize^ `:` type($dynAllocSize) `,`)?
`[` $name
(`,` `init` $init^)?
(`,` `const` $constant^)?
`]`
custom<AllocaNameAndFlags>($name, $init, $constant, $tmp)
Member

Shouldn't init be orthogonal to tmp? Not sure I understand why they need to be exclusive (and if that's the case we should have had a verifier to guarantee), can you elaborate?

Seems like adding another (, tmp $tmp^)? here would simplify parsing/printing significantly.

Member Author

> Shouldn't init be orthogonal to tmp? Not sure I understand why they need to be exclusive (and if that's the case we should have had a verifier to guarantee), can you elaborate?

I agree they should be orthogonal, not mutually exclusive. With the custom parser we can already represent both together (["name", init, tmp]), so this is not a verifier/exclusivity issue. The remaining question is semantic policy in CodeGen: making this fully orthogonal in emitted IR would require setting init on compiler-generated temporaries too, which broadens the meaning of init.

> Seems like adding another (, tmp $tmp^)? here would simplify parsing/printing significantly.

(init?, const?, tmp?) was actually my first approach. I avoided it because implementing it cleanly required changing init semantics and marking compiler-generated temporaries with init as well.

Even after doing that, there is still an MLIR ODS parsing limitation with adjacent optional groups that share the same leading `,`: forms like ["name", init, tmp] fail to parse when const is absent. So this is not only a semantic concern; the format also has a parser-robustness issue.
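For context, the rejected alternative would add a third optional group to the declarative format; a rough sketch (not what this patch adopts) looks like:

```tablegen
// Three adjacent optional groups, each introduced by the same `,` literal:
`[` $name
  (`,` `init` $init^)?
  (`,` `const` $constant^)?
  (`,` `tmp` $tmp^)?
`]`
```

Per the discussion above, the generated parser struggles to disambiguate which optional group a `,` opens when a middle flag such as const is absent, which is why this patch uses a custom<AllocaNameAndFlags> directive instead.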

Member

> The remaining question is semantic policy in CodeGen: making this fully orthogonal in emitted IR would require setting init on compiler-generated temporaries too, which broadens the meaning of init.

How so? You can always call setInit in the appropriate place; this functionality shouldn't be part of createTemp*, but can operate on the result / use side.

> implementing it cleanly required changing init semantics and marking compiler-generated temporaries with init as well.

I don't see how that is true, what am I missing?

> Even after doing that, there is still an MLIR ODS parsing limitation with adjacent optional groups that share the same leading `,`: forms like ["name", init, tmp] fail when const is absent. So this is not only a semantic concern; it also has a parser robustness issue in this format.

The const thing looks like a silly limitation; perhaps that should be fixed.

($annotations^)?
(`ast` $ast^)? attr-dict
}];
7 changes: 4 additions & 3 deletions clang/lib/CIR/CodeGen/CIRGenAtomic.cpp
@@ -263,7 +263,7 @@ class AtomicInfo {
// This function emits any expression (scalar, complex, or aggregate)
// into a temporary alloca.
static Address emitValToTemp(CIRGenFunction &CGF, Expr *E) {
Address DeclPtr = CGF.CreateMemTemp(
Address DeclPtr = CGF.CreateMemTempWithName(
Member

I'm not sure I understand why all these changes to CreateMemTempWithName, can't you just change CreateMemTemp* to tag the allocas as "tmp"? The name of the function is already enforcing the semantics here :)

Member Author

> I'm not sure I understand why all these changes to CreateMemTempWithName, can't you just change CreateMemTemp* to tag the allocas as tmp? The name of the function is already enforcing the semantics here :)

I introduced the naming split intentionally to keep API intent explicit:

  • *WithName: explicit caller-provided names.
  • *WithAutoName: compiler-generated ref.tmp* / agg.tmp*.

If API intent clarity is valuable here, we can keep this split.

If you prefer less API surface, I can also remove the split and follow your suggestion directly: keep CreateMemTemp*/CreateAggTemp*, tag tmp inside CreateMemTemp*, and drop the extra auto-name helpers. That is simpler, but it also tags more scratch allocas as tmp.

@bcardosolopes what do you prefer?

Member

Thanks, please follow my suggestion then!

E->getType(), CGF.getLoc(E->getSourceRange()), ".atomictmp");
CGF.emitAnyExprToMem(E, DeclPtr, E->getType().getQualifiers(),
/*Init*/ true);
@@ -322,7 +322,7 @@ Address AtomicInfo::convertToAtomicIntPointer(Address Addr) const {
}

Address AtomicInfo::CreateTempAlloca() const {
Address TempAlloca = CGF.CreateMemTemp(
Address TempAlloca = CGF.CreateMemTempWithName(
(LVal.isBitField() && ValueSizeInBits > AtomicSizeInBits) ? ValueTy
: AtomicTy,
getAtomicAlignment(), loc, "atomic-temp");
@@ -1031,7 +1031,8 @@ RValue CIRGenFunction::emitAtomicExpr(AtomicExpr *E) {
if (ShouldCastToIntPtrTy)
Dest = Atomics.castToAtomicIntPointer(Dest);
} else if (E->isCmpXChg())
Dest = CreateMemTemp(RValTy, getLoc(E->getSourceRange()), "cmpxchg.bool");
Dest = CreateMemTempWithName(RValTy, getLoc(E->getSourceRange()),
"cmpxchg.bool");
else if (!RValTy->isVoidType()) {
Dest = Atomics.CreateTempAlloca();
if (ShouldCastToIntPtrTy)
4 changes: 2 additions & 2 deletions clang/lib/CIR/CodeGen/CIRGenBuiltinX86.cpp
@@ -570,7 +570,7 @@ mlir::Value CIRGenFunction::emitX86BuiltinExpr(unsigned BuiltinID,
case X86::BI_mm_setcsr:
case X86::BI__builtin_ia32_ldmxcsr: {
Address tmp =
CreateMemTemp(E->getArg(0)->getType(), getLoc(E->getExprLoc()));
CreateMemTempWithName(E->getArg(0)->getType(), getLoc(E->getExprLoc()));
builder.createStore(getLoc(E->getExprLoc()), Ops[0], tmp);
return cir::LLVMIntrinsicCallOp::create(
builder, getLoc(E->getExprLoc()),
@@ -580,7 +580,7 @@ mlir::Value CIRGenFunction::emitX86BuiltinExpr(unsigned BuiltinID,
}
case X86::BI_mm_getcsr:
case X86::BI__builtin_ia32_stmxcsr: {
Address tmp = CreateMemTemp(E->getType(), getLoc(E->getExprLoc()));
Address tmp = CreateMemTempWithName(E->getType(), getLoc(E->getExprLoc()));
cir::LLVMIntrinsicCallOp::create(builder, getLoc(E->getExprLoc()),
builder.getStringAttr("x86.sse.stmxcsr"),
builder.getVoidTy(), tmp.getPointer())
12 changes: 6 additions & 6 deletions clang/lib/CIR/CodeGen/CIRGenCall.cpp
@@ -499,7 +499,7 @@ RValue CIRGenFunction::emitCall(const CIRGenFunctionInfo &CallInfo,
// FIXME: Avoid the conversion through memory if possible.
Address Src = Address::invalid();
if (!I->isAggregate()) {
Src = CreateMemTemp(I->Ty, loc, "coerce");
Src = CreateMemTempWithName(I->Ty, loc, "coerce");
I->copyInto(*this, Src, loc);
} else {
Src = I->hasLValue() ? I->getKnownLValue().getAddress()
@@ -698,7 +698,7 @@ RValue CIRGenFunction::emitCall(const CIRGenFunctionInfo &CallInfo,
bool DestIsVolatile = ReturnValue.isVolatile();

if (!DestPtr.isValid()) {
DestPtr = CreateMemTemp(RetTy, callLoc, getCounterAggTmpAsString());
DestPtr = CreateAggTempAddressWithAutoName(RetTy, callLoc);
DestIsVolatile = false;
}

@@ -721,7 +721,7 @@ RValue CIRGenFunction::emitCall(const CIRGenFunctionInfo &CallInfo,
Address DestPtr = ReturnValue.getValue();

if (!DestPtr.isValid())
DestPtr = CreateMemTemp(RetTy, callLoc, "tmp.try.call.res");
DestPtr = CreateMemTempWithName(RetTy, callLoc, "tmp.try.call.res");

return getRValueThroughMemory(callLoc, builder, Results[0], DestPtr);
}
@@ -832,8 +832,8 @@ RValue CIRGenFunction::emitAnyExprToTemp(const Expr *E) {
AggValueSlot AggSlot = AggValueSlot::ignored();

if (hasAggregateEvaluationKind(E->getType()))
AggSlot = CreateAggTemp(E->getType(), getLoc(E->getSourceRange()),
getCounterAggTmpAsString());
AggSlot =
CreateAggTempWithAutoName(E->getType(), getLoc(E->getSourceRange()));

return emitAnyExpr(E, AggSlot);
}
@@ -1413,7 +1413,7 @@ CIRGenTypes::arrangeFunctionDeclaration(const FunctionDecl *FD) {
RValue CallArg::getRValue(CIRGenFunction &CGF, mlir::Location loc) const {
if (!HasLV)
return RV;
LValue Copy = CGF.makeAddrLValue(CGF.CreateMemTemp(Ty, loc), Ty);
LValue Copy = CGF.makeAddrLValue(CGF.CreateMemTempWithName(Ty, loc), Ty);
CGF.emitAggregateCopy(Copy, LV, Ty, AggValueSlot::DoesNotOverlap,
LV.isVolatile());
IsUsed = true;
4 changes: 2 additions & 2 deletions clang/lib/CIR/CodeGen/CIRGenClass.cpp
@@ -1046,8 +1046,8 @@ void CIRGenFunction::emitLambdaDelegatingInvokeBody(const CXXMethodDecl *MD) {

QualType LambdaType = getContext().getCanonicalTagType(Lambda);
QualType ThisType = getContext().getPointerType(LambdaType);
Address ThisPtr =
CreateMemTemp(LambdaType, getLoc(MD->getSourceRange()), "unused.capture");
Address ThisPtr = CreateMemTempWithName(
LambdaType, getLoc(MD->getSourceRange()), "unused.capture");
CallArgs.add(RValue::get(ThisPtr.getPointer()), ThisType);

// Add the rest of the parameters.
129 changes: 82 additions & 47 deletions clang/lib/CIR/CodeGen/CIRGenExpr.cpp
@@ -492,8 +492,8 @@ LValue CIRGenFunction::emitCompoundLiteralLValue(const CompoundLiteralExpr *E) {
llvm_unreachable("NYI");
}

Address DeclPtr = CreateMemTemp(E->getType(), getLoc(E->getSourceRange()),
".compoundliteral");
Address DeclPtr = CreateMemTempWithName(
E->getType(), getLoc(E->getSourceRange()), ".compoundliteral");
const Expr *InitExpr = E->getInitializer();
LValue Result = makeAddrLValue(DeclPtr, E->getType(), AlignmentSource::Decl);

@@ -722,7 +722,7 @@ void CIRGenFunction::emitStoreOfScalar(mlir::Value value, Address addr,
// Update the alloca with more info on initialization.
assert(addr.getPointer() && "expected pointer to exist");
auto SrcAlloca = addr.getDefiningOp<cir::AllocaOp>();
if (currVarDecl && SrcAlloca) {
if (currVarDecl && SrcAlloca && !SrcAlloca.getTmpAttr()) {
const VarDecl *VD = currVarDecl;
assert(VD && "VarDecl expected");
SrcAlloca.setInit(VD->hasInit());
@@ -1360,7 +1360,7 @@ LValue CIRGenFunction::emitExtVectorElementExpr(const ExtVectorElementExpr *E) {

// Store the vector to memory (because LValue wants an address).
QualType BaseTy = E->getBase()->getType();
Address VecMem = CreateMemTemp(BaseTy, Vec.getLoc(), "tmp");
Address VecMem = CreateMemTempWithName(BaseTy, Vec.getLoc(), "tmp");
builder.createStore(Vec.getLoc(), Vec, VecMem);
base = makeAddrLValue(VecMem, BaseTy, AlignmentSource::Decl);
}
@@ -1550,8 +1550,8 @@ RValue CIRGenFunction::emitAnyExpr(const Expr *E, AggValueSlot aggSlot,
return RValue::getComplex(emitComplexExpr(E));
case cir::TEK_Aggregate: {
if (!ignoreResult && aggSlot.isIgnored())
aggSlot = CreateAggTemp(E->getType(), getLoc(E->getSourceRange()),
getCounterAggTmpAsString());
aggSlot =
CreateAggTempWithAutoName(E->getType(), getLoc(E->getSourceRange()));
emitAggExpr(E, aggSlot);
return aggSlot.asRValue();
}
@@ -2451,8 +2451,8 @@ static Address createReferenceTemporary(CIRGenFunction &CGF,
mlir::OpBuilder::InsertPoint ip;
if (extDeclAlloca)
ip = {extDeclAlloca->getBlock(), extDeclAlloca->getIterator()};
return CGF.CreateMemTemp(Ty, CGF.getLoc(M->getSourceRange()),
CGF.getCounterRefTmpAsString(), Alloca, ip);
return CGF.CreateRefTempWithAutoName(Ty, CGF.getLoc(M->getSourceRange()),
Alloca, ip);
}
case SD_Thread:
case SD_Static: {
@@ -3056,7 +3056,8 @@ mlir::Value CIRGenFunction::emitOpOnBoolExpr(mlir::Location loc,
mlir::Value CIRGenFunction::emitAlloca(StringRef name, mlir::Type ty,
mlir::Location loc, CharUnits alignment,
bool insertIntoFnEntryBlock,
mlir::Value arraySize) {
mlir::Value arraySize,
bool isTemporary) {
mlir::Block *entryBlock = insertIntoFnEntryBlock
? getCurFunctionEntryBlock()
: currLexScope->getEntryBlock();
@@ -3072,13 +3073,15 @@ mlir::Value CIRGenFunction::emitAlloca(StringRef name, mlir::Type ty,
}

return emitAlloca(name, ty, loc, alignment,
builder.getBestAllocaInsertPoint(entryBlock), arraySize);
builder.getBestAllocaInsertPoint(entryBlock), arraySize,
isTemporary);
}

mlir::Value CIRGenFunction::emitAlloca(StringRef name, mlir::Type ty,
mlir::Location loc, CharUnits alignment,
mlir::OpBuilder::InsertPoint ip,
mlir::Value arraySize) {
mlir::Value arraySize,
bool isTemporary) {
// CIR uses its own alloca AS rather than follow the target data layout like
// original CodeGen. The data layout awareness should be done in the lowering
// pass instead.
@@ -3091,20 +3094,27 @@ mlir::Value CIRGenFunction::emitAlloca(StringRef name, mlir::Type ty,
builder.restoreInsertionPoint(ip);
addr = builder.createAlloca(loc, /*addr type*/ localVarPtrTy,
/*var type*/ ty, name, alignIntAttr, arraySize);
auto alloca = addr.getDefiningOp<cir::AllocaOp>();

if (currVarDecl) {
auto alloca = addr.getDefiningOp<cir::AllocaOp>();
alloca.setAstAttr(ASTVarDeclAttr::get(&getMLIRContext(), currVarDecl));
}

// Set temporary attribute based on semantic information.
// Currently used for ref.tmp*/agg.tmp*; other scratch temps are unmarked.
if (isTemporary)
alloca.setTmpAttr(builder.getUnitAttr());
}
return addr;
}

mlir::Value CIRGenFunction::emitAlloca(StringRef name, QualType ty,
mlir::Location loc, CharUnits alignment,
bool insertIntoFnEntryBlock,
mlir::Value arraySize) {
mlir::Value arraySize,
bool isTemporary) {
return emitAlloca(name, convertType(ty), loc, alignment,
insertIntoFnEntryBlock, arraySize);
insertIntoFnEntryBlock, arraySize, isTemporary);
}

mlir::Value CIRGenFunction::emitLoadOfScalar(LValue lvalue,
@@ -3262,34 +3272,61 @@ void CIRGenFunction::emitUnreachable(SourceLocation Loc) {
// CIR builder helpers
//===----------------------------------------------------------------------===//

Address CIRGenFunction::CreateMemTemp(QualType Ty, mlir::Location Loc,
const Twine &Name, Address *Alloca,
mlir::OpBuilder::InsertPoint ip) {
Address CIRGenFunction::CreateMemTempWithName(QualType Ty, mlir::Location Loc,
const Twine &Name,
Address *Alloca,
mlir::OpBuilder::InsertPoint ip,
bool isTemporary) {
// FIXME: Should we prefer the preferred type alignment here?
return CreateMemTemp(Ty, getContext().getTypeAlignInChars(Ty), Loc, Name,
Alloca, ip);
return CreateMemTempWithName(Ty, getContext().getTypeAlignInChars(Ty), Loc,
Name, Alloca, ip, isTemporary);
}

Address CIRGenFunction::CreateMemTemp(QualType Ty, CharUnits Align,
mlir::Location Loc, const Twine &Name,
Address *Alloca,
mlir::OpBuilder::InsertPoint ip) {
Address CIRGenFunction::CreateMemTempWithName(
QualType Ty, CharUnits Align, mlir::Location Loc, const Twine &Name,
Address *Alloca, mlir::OpBuilder::InsertPoint ip, bool isTemporary) {
// CreateMemTempWithName may be used for compiler-generated temporaries
// (ref.tmp*/agg.tmp*). Other scratch allocas are not marked temporary yet.
Address Result =
CreateTempAlloca(convertTypeForMem(Ty), /*destAS=*/{}, Align, Loc, Name,
/*ArraySize=*/nullptr, Alloca, ip);
/*ArraySize=*/nullptr, Alloca, ip, isTemporary);
if (Ty->isConstantMatrixType()) {
assert(0 && "NYI");
}
return Result;
}

Address
CIRGenFunction::CreateRefTempWithAutoName(QualType Ty, mlir::Location Loc,
Address *Alloca,
mlir::OpBuilder::InsertPoint ip) {
return CreateMemTempWithName(Ty, Loc, getCounterRefTmpAsString(), Alloca, ip,
/*isTemporary=*/true);
}

Address CIRGenFunction::CreateAggTempAddressWithAutoName(
QualType Ty, mlir::Location Loc, Address *Alloca,
mlir::OpBuilder::InsertPoint ip) {
return CreateMemTempWithName(Ty, Loc, getCounterAggTmpAsString(), Alloca, ip,
/*isTemporary=*/true);
}

AggValueSlot CIRGenFunction::CreateAggTempWithAutoName(QualType Ty,
mlir::Location Loc,
Address *Alloca) {
return CreateAggTempWithName(Ty, Loc, getCounterAggTmpAsString(), Alloca);
}

/// This creates an alloca and inserts it into the entry block of the
/// current region.
Address CIRGenFunction::CreateTempAllocaWithoutCast(
mlir::Type Ty, CharUnits Align, mlir::Location Loc, const Twine &Name,
mlir::Value ArraySize, mlir::OpBuilder::InsertPoint ip) {
auto Alloca = ip.isSet() ? CreateTempAlloca(Ty, Loc, Name, ip, ArraySize)
: CreateTempAlloca(Ty, Loc, Name, ArraySize);
mlir::Value ArraySize, mlir::OpBuilder::InsertPoint ip, bool isTemporary) {
auto Alloca =
ip.isSet()
? CreateTempAlloca(Ty, Loc, Name, ip, ArraySize, isTemporary)
: CreateTempAlloca(Ty, Loc, Name, ArraySize,
/*insertIntoFnEntryBlock=*/false, isTemporary);
Alloca.setAlignmentAttr(CGM.getSize(Align));
return Address(Alloca, Ty, Align);
}
@@ -3340,9 +3377,9 @@ Address CIRGenFunction::maybeCastStackAddressSpace(
Address CIRGenFunction::CreateTempAlloca(
mlir::Type Ty, mlir::ptr::MemorySpaceAttrInterface destAS, CharUnits Align,
mlir::Location Loc, const Twine &Name, mlir::Value ArraySize,
Address *AllocaAddr, mlir::OpBuilder::InsertPoint ip) {
Address Alloca =
CreateTempAllocaWithoutCast(Ty, Align, Loc, Name, ArraySize, ip);
Address *AllocaAddr, mlir::OpBuilder::InsertPoint ip, bool isTemporary) {
Address Alloca = CreateTempAllocaWithoutCast(Ty, Align, Loc, Name, ArraySize,
ip, isTemporary);
if (AllocaAddr)
*AllocaAddr = Alloca;
return maybeCastStackAddressSpace(Alloca, destAS, ArraySize);
@@ -3352,42 +3389,40 @@ Address CIRGenFunction::CreateTempAlloca(mlir::Type Ty, CharUnits Align,
mlir::Location Loc, const Twine &Name,
mlir::Value ArraySize,
Address *AllocaAddr,
mlir::OpBuilder::InsertPoint ip) {
mlir::OpBuilder::InsertPoint ip,
bool isTemporary) {
return CreateTempAlloca(Ty, /*destAS=*/{}, Align, Loc, Name, ArraySize,
AllocaAddr, ip);
AllocaAddr, ip, isTemporary);
}

/// This creates an alloca and inserts it into the entry block if \p ArraySize
/// is nullptr, otherwise inserts it at the current insertion point of the
/// builder.
cir::AllocaOp CIRGenFunction::CreateTempAlloca(mlir::Type Ty,
mlir::Location Loc,
const Twine &Name,
mlir::Value ArraySize,
bool insertIntoFnEntryBlock) {
cir::AllocaOp CIRGenFunction::CreateTempAlloca(
mlir::Type Ty, mlir::Location Loc, const Twine &Name, mlir::Value ArraySize,
bool insertIntoFnEntryBlock, bool isTemporary) {
return emitAlloca(Name.str(), Ty, Loc, CharUnits(), insertIntoFnEntryBlock,
ArraySize)
ArraySize, isTemporary)
.getDefiningOp<cir::AllocaOp>();
}

/// This creates an alloca and inserts it into the provided insertion point
cir::AllocaOp CIRGenFunction::CreateTempAlloca(mlir::Type Ty,
mlir::Location Loc,
const Twine &Name,
mlir::OpBuilder::InsertPoint ip,
mlir::Value ArraySize) {
cir::AllocaOp CIRGenFunction::CreateTempAlloca(
mlir::Type Ty, mlir::Location Loc, const Twine &Name,
mlir::OpBuilder::InsertPoint ip, mlir::Value ArraySize, bool isTemporary) {
assert(ip.isSet() && "Insertion point is not set");
return emitAlloca(Name.str(), Ty, Loc, CharUnits(), ip, ArraySize)
return emitAlloca(Name.str(), Ty, Loc, CharUnits(), ip, ArraySize,
isTemporary)
.getDefiningOp<cir::AllocaOp>();
}

/// Just like CreateTempAlloca above, but place the alloca into the function
/// entry basic block instead.
cir::AllocaOp CIRGenFunction::CreateTempAllocaInFnEntryBlock(
mlir::Type Ty, mlir::Location Loc, const Twine &Name,
mlir::Value ArraySize) {
mlir::Type Ty, mlir::Location Loc, const Twine &Name, mlir::Value ArraySize,
bool isTemporary) {
return CreateTempAlloca(Ty, Loc, Name, ArraySize,
/*insertIntoFnEntryBlock=*/true);
/*insertIntoFnEntryBlock=*/true, isTemporary);
}

/// Given an object of the given canonical type, can we safely copy a