ci: add typo ci check and fix typos #4375

Merged: 1 commit, May 26, 2024
8 changes: 8 additions & 0 deletions .github/workflows/bk-ci.yml
@@ -483,6 +483,14 @@ jobs:
if: cancelled()
run: ./dev/ci-tool print_thread_dumps

typo-check:
name: Typo Check
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Check typos
uses: crate-ci/typos@master

owasp-dependency-check:
name: OWASP Dependency Check
runs-on: ubuntu-latest
56 changes: 56 additions & 0 deletions .typos.toml
@@ -0,0 +1,56 @@
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
#

[default.extend-words]
# abbr
"ba" = "ba"
"bve" = "bve"
"cace" = "cace"
"cann" = "cann"
"dbe" = "dbe"
"entrys" = "entrys"
"fo" = "fo"
"ine" = "ine"
"isse" = "isse"
"mor" = "mor"
"nwe" = "nwe"
"nd" = "nd"
"nin" = "nin"
"oce" = "oce"
"ot" = "ot"
"ser" = "ser"
"shouldnot" = "shouldnot"
"tio" = "tio"
"ue" = "ue"
# keep for compatibility
"deleteable" = "deleteable"
"infinit" = "infinit"
"explict" = "explict"
"uninitalize" = "uninitalize"
# keyword fp
"guage" = "guage"
"passin" = "passin"
"testng" = "testng"
"vertx" = "vertx"
"verticle" = "verticle"

[files]
extend-exclude = [
"bookkeeper-server/src/test/java/org/apache/bookkeeper/meta/TestLedgerMetadataSerDe.java",
]
@@ -75,7 +75,7 @@ public static <ReturnT> CompletableFuture<ReturnT> run(
* @param task a task to execute.
* @param scheduler scheduler to schedule the task and complete the futures.
* @param key the submit key for the scheduler.
- * @param <ReturnT> the return tye.
+ * @param <ReturnT> the return type.
* @return future represents the result of the task with retries.
*/
public static <ReturnT> CompletableFuture<ReturnT> run(
@@ -323,7 +323,7 @@ int runCmd(CommandLine cmdLine) throws Exception {
}

/**
- * Intializes new cluster by creating required znodes for the cluster. If
+ * Initializes new cluster by creating required znodes for the cluster. If
* ledgersrootpath is already existing then it will error out. If for any
* reason it errors out while creating znodes for the cluster, then before
* running initnewcluster again, try nuking existing cluster by running
@@ -704,7 +704,7 @@ int runCmd(CommandLine cmdLine) throws Exception {

ReadLedgerCommand cmd = new ReadLedgerCommand(entryFormatter, ledgerIdFormatter);
ReadLedgerCommand.ReadLedgerFlags flags = new ReadLedgerCommand.ReadLedgerFlags();
- flags.bookieAddresss(bookieAddress);
+ flags.bookieAddress(bookieAddress);
flags.firstEntryId(firstEntry);
flags.forceRecovery(forceRecovery);
flags.lastEntryId(lastEntry);
@@ -152,7 +152,7 @@ class EntryLogsPerLedgerCounter {
* 'expiry duration' and 'maximumSize' will be set to
* entryLogPerLedgerCounterLimitsMultFactor times of
* 'ledgerIdEntryLogMap' cache limits. This is needed because entries
- * from 'ledgerIdEntryLogMap' can be removed from cache becasue of
+ * from 'ledgerIdEntryLogMap' can be removed from cache because of
* accesstime expiry or cache size limits, but to know the actual number
* of entrylogs per ledger, we should maintain this count for long time.
*/
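For background on this comment: the counter cache simply gets limits `entryLogPerLedgerCounterLimitsMultFactor` times looser than the `ledgerIdEntryLogMap` cache it shadows, so counts outlive ordinary evictions. A rough sketch of that relationship, assuming a Guava cache (as the expiry/maximumSize vocabulary suggests); names and values are illustrative:

```java
import java.util.concurrent.TimeUnit;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

class CounterCacheSketch {
    // Give the per-ledger counter cache limits multFactor times those of the
    // entry-log cache, so a count survives evictions that happen purely
    // because of access-time expiry or the size cap on ledgerIdEntryLogMap.
    static Cache<Long, Long> build(long mapExpiryMs, long mapMaxSize, int multFactor) {
        return CacheBuilder.newBuilder()
                .expireAfterAccess(mapExpiryMs * multFactor, TimeUnit.MILLISECONDS)
                .maximumSize(mapMaxSize * multFactor)
                .build();
    }
}
```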
@@ -137,8 +137,8 @@ BufferedLogChannel createNewLogForCompaction(File dirForNextEntryLog) throws IOE
}
}

- void setWritingLogId(long lodId) {
- this.writingLogId = lodId;
+ void setWritingLogId(long logId) {
+ this.writingLogId = logId;
}

void setWritingCompactingLogId(long logId) {
@@ -569,7 +569,7 @@ public void onRotateEntryLog() {
// for interleaved ledger storage, we request a checkpoint when rotating a entry log file.
// the checkpoint represent the point that all the entries added before this point are already
// in ledger storage and ready to be synced to disk.
- // TODO: we could consider remove checkpointSource and checkpointSouce#newCheckpoint
+ // TODO: we could consider remove checkpointSource and checkpointSource#newCheckpoint
// later if we provide kind of LSN (Log/Journal Sequence Number)
// mechanism when adding entry. {@link https://github.com/apache/bookkeeper/issues/279}
Checkpoint checkpoint = checkpointSource.newCheckpoint();
@@ -200,20 +200,20 @@ public List<File> getWritableLedgerDirsForNewLog() throws NoWritableLedgerDirExc

List<File> getDirsAboveUsableThresholdSize(long thresholdSize, boolean loggingNoWritable)
throws NoWritableLedgerDirException {
- List<File> fullLedgerDirsToAccomodate = new ArrayList<File>();
+ List<File> fullLedgerDirsToAccommodate = new ArrayList<File>();
for (File dir: this.ledgerDirectories) {
// Pick dirs which can accommodate little more than thresholdSize
if (dir.getUsableSpace() > thresholdSize) {
- fullLedgerDirsToAccomodate.add(dir);
+ fullLedgerDirsToAccommodate.add(dir);
}
}

- if (!fullLedgerDirsToAccomodate.isEmpty()) {
+ if (!fullLedgerDirsToAccommodate.isEmpty()) {
if (loggingNoWritable) {
LOG.info("No writable ledger dirs below diskUsageThreshold. "
+ "But Dirs that can accommodate {} are: {}", thresholdSize, fullLedgerDirsToAccomodate);
+ "But Dirs that can accommodate {} are: {}", thresholdSize, fullLedgerDirsToAccommodate);
}
- return fullLedgerDirsToAccomodate;
+ return fullLedgerDirsToAccommodate;
}

// We will reach here when we find no ledgerDir which has at least
@@ -124,13 +124,13 @@ private void check(final LedgerDirsManager ldm) {
}
}

- List<File> fullfilledDirs = new ArrayList<File>(ldm.getFullFilledLedgerDirs());
+ List<File> fulfilledDirs = new ArrayList<File>(ldm.getFullFilledLedgerDirs());
boolean makeWritable = ldm.hasWritableLedgerDirs();

// When bookie is in READONLY mode, i.e there are no writableLedgerDirs:
- // - Update fullfilledDirs disk usage.
+ // - Update fulfilledDirs disk usage.
// - If the total disk usage is below DiskLowWaterMarkUsageThreshold
- // add fullfilledDirs back to writableLedgerDirs list if their usage is < conf.getDiskUsageThreshold.
+ // add fulfilledDirs back to writableLedgerDirs list if their usage is < conf.getDiskUsageThreshold.
try {
if (!makeWritable) {
float totalDiskUsage = diskChecker.getTotalDiskUsage(ldm.getAllLedgerDirs());
@@ -144,7 +144,7 @@ private void check(final LedgerDirsManager ldm) {
}
}
// Update all full-filled disk space usage
- for (File dir : fullfilledDirs) {
+ for (File dir : fulfilledDirs) {
try {
diskUsages.put(dir, diskChecker.checkDir(dir));
if (makeWritable) {
@@ -254,7 +254,7 @@ private void checkDirs(final LedgerDirsManager ldm)

private void validateThreshold(float diskSpaceThreshold, float diskSpaceLwmThreshold) {
if (diskSpaceThreshold <= 0 || diskSpaceThreshold >= 1 || diskSpaceLwmThreshold - diskSpaceThreshold > 1e-6) {
- throw new IllegalArgumentException("Disk space threashold: "
+ throw new IllegalArgumentException("Disk space threshold: "
+ diskSpaceThreshold + " and lwm threshold: " + diskSpaceLwmThreshold
+ " are not valid. Should be > 0 and < 1 and diskSpaceThreshold >= diskSpaceLwmThreshold");
}
@@ -41,7 +41,7 @@
*
* <p>Uses the specified amount of memory and pairs it with a hashmap.
*
- * <p>The memory is splitted in multiple segments that are used in a
+ * <p>The memory is split in multiple segments that are used in a
* ring-buffer fashion. When the read cache is full, the oldest segment
* is cleared and rotated to make space for new entries to be added to
* the read cache.
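For readers new to this class, the rotation described above looks roughly like the following; this is a simplified, hypothetical sketch that counts entries per segment, whereas the real read cache budgets bytes:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Fixed number of segments used as a ring buffer: when the current segment
// fills up, the oldest one is cleared and becomes the new write target.
class RingBufferCacheSketch {
    private final List<Map<Long, byte[]>> segments = new ArrayList<>();
    private final int entriesPerSegment;
    private int current = 0;

    RingBufferCacheSketch(int segmentCount, int entriesPerSegment) {
        for (int i = 0; i < segmentCount; i++) {
            segments.add(new HashMap<>());
        }
        this.entriesPerSegment = entriesPerSegment;
    }

    void put(long entryId, byte[] data) {
        if (segments.get(current).size() >= entriesPerSegment) {
            current = (current + 1) % segments.size(); // advance to the oldest segment
            segments.get(current).clear();             // rotate: drop its stale entries
        }
        segments.get(current).put(entryId, data);
    }

    byte[] get(long entryId) {
        for (Map<Long, byte[]> segment : segments) {
            byte[] data = segment.get(entryId);
            if (data != null) {
                return data;
            }
        }
        return null; // cache miss
    }
}
```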
@@ -706,7 +706,7 @@ public MetadataClientDriver getMetadataClientDriver() {
* cheap to compute but does not protect against byzantine bookies (i.e., a
* bookie might report fake bytes and a matching CRC32). The MAC code is more
* expensive to compute, but is protected by a password, i.e., a bookie can't
- * report fake bytes with a mathching MAC unless it knows the password.
+ * report fake bytes with a matching MAC unless it knows the password.
* The CRC32C, which use SSE processor instruction, has better performance than CRC32.
* Legacy DigestType for backward compatibility. If we want to add new DigestType,
* we should add it in here, client.api.DigestType and DigestType in DataFormats.proto.
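Usage note: the digest is fixed when the ledger is created. A minimal, hypothetical sketch (the ZooKeeper address and password are placeholders):

```java
import java.nio.charset.StandardCharsets;
import org.apache.bookkeeper.client.BookKeeper;

class DigestTypeSketch {
    static void createWithCrc32c() throws Exception {
        BookKeeper bk = new BookKeeper("zk1:2181"); // placeholder ZK address
        byte[] password = "secret".getBytes(StandardCharsets.UTF_8);
        // CRC32C is cheap to verify but, like CRC32, does not protect against
        // a byzantine bookie; MAC trades speed for that protection.
        bk.createLedger(3, 2, BookKeeper.DigestType.CRC32C, password).close();
        bk.close();
    }
}
```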
@@ -1297,7 +1297,7 @@ public Boolean apply(MetadataBookieDriver driver) {
}

/**
- * Intializes new cluster by creating required znodes for the cluster. If
+ * Initializes new cluster by creating required znodes for the cluster. If
* ledgersrootpath is already existing then it will error out.
*
* @param conf
@@ -1569,7 +1569,7 @@ public void triggerAudit()
* Triggers AuditTask by resetting lostBookieRecoveryDelay and then make
* sure the ledgers stored in the given decommissioning bookie are properly
* replicated and they are not underreplicated because of the given bookie.
- * This method waits untill there are no underreplicatedledgers because of this
+ * This method waits until there are no underreplicatedledgers because of this
* bookie. If the given Bookie is not shutdown yet, then it will throw
* BKIllegalOpException.
*
@@ -1612,7 +1612,7 @@ public void decommissionBookie(BookieId bookieAddress)
Set<Long> ledgersStoredInThisBookie = bookieToLedgersMap.get(bookieAddress.toString());
if ((ledgersStoredInThisBookie != null) && (!ledgersStoredInThisBookie.isEmpty())) {
/*
- * wait untill all the ledgers are replicated to other
+ * wait until all the ledgers are replicated to other
* bookies by making sure that these ledgers metadata don't
* contain this bookie as part of their ensemble.
*/
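Hypothetical usage of the call documented above; the metadata URI and bookie id are placeholders, and the bookie must already be shut down or BKIllegalOpException is thrown:

```java
import org.apache.bookkeeper.client.BookKeeperAdmin;
import org.apache.bookkeeper.conf.ClientConfiguration;
import org.apache.bookkeeper.net.BookieId;

class DecommissionSketch {
    static void decommission() throws Exception {
        ClientConfiguration conf = new ClientConfiguration()
                .setMetadataServiceUri("zk+hierarchical://zk1:2181/ledgers"); // placeholder
        BookKeeperAdmin admin = new BookKeeperAdmin(conf);
        try {
            // Blocks until no ledger is under-replicated because of this bookie.
            admin.decommissionBookie(BookieId.parse("bookie-1.example.com:3181"));
        } finally {
            admin.close();
        }
    }
}
```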
@@ -22,7 +22,7 @@
import org.apache.bookkeeper.net.BookieId;

/**
- * This interface determins how entries are distributed among bookies.
+ * This interface determines how entries are distributed among bookies.
*
* <p>Every entry gets replicated to some number of replicas. The first replica for
* an entry is given a replicaIndex of 0, and so on. To distribute write load,
@@ -369,7 +369,7 @@ default void updateBookieInfo(Map<BookieId, BookieInfo> bookieInfoMap) {
*
* <p>The default implementation will pick a bookie randomly from the ensemble.
* Other placement policies will be able to do better decisions based on
- * additional informations (eg: rack or region awareness).
+ * additional information (eg: rack or region awareness).
*
* @param metadata
* the {@link LedgerMetadata} object
@@ -233,11 +233,11 @@ void replicate(final LedgerHandle lh, final LedgerFragment lf,
final Set<BookieId> targetBookieAddresses,
final BiConsumer<Long, Long> onReadEntryFailureCallback)
throws InterruptedException {
- Set<LedgerFragment> partionedFragments = splitIntoSubFragments(lh, lf,
+ Set<LedgerFragment> partitionedFragments = splitIntoSubFragments(lh, lf,
bkc.getConf().getRereplicationEntryBatchSize());
LOG.info("Replicating fragment {} in {} sub fragments.",
- lf, partionedFragments.size());
- replicateNextBatch(lh, partionedFragments.iterator(),
+ lf, partitionedFragments.size());
+ replicateNextBatch(lh, partitionedFragments.iterator(),
ledgerFragmentMcb, targetBookieAddresses, onReadEntryFailureCallback);
}

@@ -559,7 +559,7 @@ private void updateAverageEntrySize(int toSendSize) {
/**
* Callback for recovery of a single ledger fragment. Once the fragment has
* had all entries replicated, update the ensemble in zookeeper. Once
- * finished propogate callback up to ledgerFragmentsMcb which should be a
+ * finished propagate callback up to ledgerFragmentsMcb which should be a
* multicallback responsible for all fragments in a single ledger
*/
static class SingleFragmentCallback implements AsyncCallback.VoidCallback {
@@ -785,7 +785,7 @@ public void readComplete(int rc, LedgerHandle lh, Enumeration<LedgerEntry> seq,
* Read a sequence of entries asynchronously, allowing to read after the LastAddConfirmed range.
* <br>This is the same of
* {@link #asyncReadEntries(long, long, ReadCallback, Object) }
- * but it lets the client read without checking the local value of LastAddConfirmed, so that it is possibile to
+ * but it lets the client read without checking the local value of LastAddConfirmed, so that it is possible to
* read entries for which the writer has not received the acknowledge yet. <br>
* For entries which are within the range 0..LastAddConfirmed BookKeeper guarantees that the writer has successfully
* received the acknowledge.<br>
@@ -1009,7 +1009,7 @@ private CompletableFuture<LedgerEntries> batchReadEntriesInternalAsync(long star
* Read a sequence of entries asynchronously, allowing to read after the LastAddConfirmed range.
* <br>This is the same of
* {@link #asyncReadEntries(long, long, ReadCallback, Object) }
- * but it lets the client read without checking the local value of LastAddConfirmed, so that it is possibile to
+ * but it lets the client read without checking the local value of LastAddConfirmed, so that it is possible to
* read entries for which the writer has not received the acknowledge yet. <br>
* For entries which are within the range 0..LastAddConfirmed BookKeeper guarantees that the writer has successfully
* received the acknowledge.<br>
@@ -37,7 +37,7 @@ class WeightedRandomSelectionImpl<T> implements WeightedRandomSelection<T> {
Double randomMax;
int maxProbabilityMultiplier;
Map<T, WeightedObject> map;
- TreeMap<Double, T> cummulativeMap = new TreeMap<Double, T>();
+ TreeMap<Double, T> cumulativeMap = new TreeMap<Double, T>();
ReadWriteLock rwLock = new ReentrantReadWriteLock(true);

WeightedRandomSelectionImpl() {
@@ -120,10 +120,10 @@ public int compare(WeightedObject o1, WeightedObject o2) {
// The probability of picking a bookie randomly is defaultPickProbability
// but we change that priority by looking at the weight that each bookie
// carries.
- TreeMap<Double, T> tmpCummulativeMap = new TreeMap<Double, T>();
+ TreeMap<Double, T> tmpCumulativeMap = new TreeMap<Double, T>();
Double key = 0.0;
for (Map.Entry<T, Double> e : weightMap.entrySet()) {
- tmpCummulativeMap.put(key, e.getKey());
+ tmpCumulativeMap.put(key, e.getKey());
if (LOG.isDebugEnabled()) {
LOG.debug("Key: {} Value: {} AssignedKey: {} AssignedWeight: {}",
e.getKey(), e.getValue(), key, e.getValue());
@@ -134,7 +134,7 @@ public int compare(WeightedObject o1, WeightedObject o2) {
rwLock.writeLock().lock();
try {
this.map = map;
- cummulativeMap = tmpCummulativeMap;
+ cumulativeMap = tmpCumulativeMap;
randomMax = key;
} finally {
rwLock.writeLock().unlock();
@@ -148,8 +148,8 @@ public T getNextRandom() {
// pick a random number between 0 and randMax
Double randomNum = randomMax * Math.random();
// find the nearest key in the map corresponding to the randomNum
- Double key = cummulativeMap.floorKey(randomNum);
- return cummulativeMap.get(key);
+ Double key = cumulativeMap.floorKey(randomNum);
+ return cumulativeMap.get(key);
} finally {
rwLock.readLock().unlock();
}
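The renamed cumulativeMap is the core of the algorithm: each item owns a half-open interval on the cumulative weight line, and floorKey resolves a uniform random draw to the owning item. A self-contained sketch of the same idea:

```java
import java.util.Map;
import java.util.TreeMap;

class WeightedPickSketch<T> {
    private final TreeMap<Double, T> cumulativeMap = new TreeMap<>();
    private final double randomMax;

    WeightedPickSketch(Map<T, Double> weights) {
        double key = 0.0;
        for (Map.Entry<T, Double> e : weights.entrySet()) {
            cumulativeMap.put(key, e.getKey()); // item owns [key, key + weight)
            key += e.getValue();
        }
        randomMax = key;
    }

    T next() {
        double r = randomMax * Math.random();
        return cumulativeMap.get(cumulativeMap.floorKey(r)); // nearest interval start <= r
    }
}
```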
@@ -53,7 +53,7 @@ public interface OpenBuilder extends OpBuilder<ReadHandle> {
OpenBuilder withRecovery(boolean recovery);

/**
- * Sets the password to be used to open the ledger. It defauls to an empty password
+ * Sets the password to be used to open the ledger. It defaults to an empty password
*
* @param password the password to unlock the ledger
*
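A hypothetical usage sketch of this builder; the metadata URI and ledger id are placeholders, and the password is passed explicitly even though it defaults to empty:

```java
import org.apache.bookkeeper.client.api.BookKeeper;
import org.apache.bookkeeper.client.api.ReadHandle;
import org.apache.bookkeeper.conf.ClientConfiguration;

class OpenBuilderSketch {
    static void open() throws Exception {
        ClientConfiguration conf = new ClientConfiguration()
                .setMetadataServiceUri("zk+hierarchical://zk1:2181/ledgers"); // placeholder
        try (BookKeeper bk = BookKeeper.newBuilder(conf).build()) {
            ReadHandle rh = bk.newOpenLedgerOp()
                    .withLedgerId(42L)         // placeholder ledger id
                    .withPassword(new byte[0]) // the documented default
                    .withRecovery(false)
                    .execute()                 // CompletableFuture<ReadHandle>
                    .get();
            rh.close();
        }
    }
}
```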
@@ -95,7 +95,7 @@ default LedgerEntries batchRead(long startEntry, int maxCount, long maxSize)
* Read a sequence of entries asynchronously, allowing to read after the LastAddConfirmed range.
* <br>This is the same of
* {@link #read(long, long) }
- * but it lets the client read without checking the local value of LastAddConfirmed, so that it is possibile to
+ * but it lets the client read without checking the local value of LastAddConfirmed, so that it is possible to
* read entries for which the writer has not received the acknowledge yet. <br>
* For entries which are within the range 0..LastAddConfirmed BookKeeper guarantees that the writer has successfully
* received the acknowledge.<br>
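A hypothetical sketch of the read-beyond-LAC variant this javadoc describes (the entry range is illustrative):

```java
import org.apache.bookkeeper.client.api.LedgerEntries;
import org.apache.bookkeeper.client.api.LedgerEntry;
import org.apache.bookkeeper.client.api.ReadHandle;

class ReadUnconfirmedSketch {
    static void tail(ReadHandle rh, long first, long last) throws Exception {
        // Unlike read(), this does not check the reader's local LastAddConfirmed,
        // so entries the writer has not yet seen acknowledged may be returned.
        try (LedgerEntries entries = rh.readUnconfirmed(first, last)) {
            for (LedgerEntry e : entries) {
                System.out.printf("entry %d: %d bytes%n", e.getEntryId(), e.getLength());
            }
        }
    }
}
```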
@@ -29,7 +29,7 @@
import org.apache.bookkeeper.common.concurrent.FutureUtils;

/**
- * Provide write access to a ledger. Using WriteAdvHandler the writer MUST explictly set an entryId. Beware that the
+ * Provide write access to a ledger. Using WriteAdvHandler the writer MUST explicitly set an entryId. Beware that the
* write for a given entryId will be acknowledged if and only if all entries up to entryId - 1 have been acknowledged
* too (expected from entryId 0)
*
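Hypothetical usage of the contract described here, with placeholder payloads; the writer assigns entry ids explicitly, starting from 0, and an id is acknowledged only once all lower ids are:

```java
import java.nio.charset.StandardCharsets;
import org.apache.bookkeeper.client.api.BookKeeper;
import org.apache.bookkeeper.client.api.WriteAdvHandle;

class WriteAdvSketch {
    static void write(BookKeeper bk) throws Exception {
        WriteAdvHandle wh = bk.newCreateLedgerOp()
                .withPassword("secret".getBytes(StandardCharsets.UTF_8)) // placeholder
                .makeAdv()   // writer-assigned entry ids
                .execute()   // CompletableFuture<WriteAdvHandle>
                .get();
        wh.writeAsync(0L, "first".getBytes(StandardCharsets.UTF_8));
        // entry 1 is acknowledged only after entry 0 has been acknowledged too
        wh.writeAsync(1L, "second".getBytes(StandardCharsets.UTF_8)).get();
        wh.close();
    }
}
```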