Dutch Finance Minister Joins Australia in Warning Public of Cryptocurrency Scams

Amid the rising number of crypto-related scams accompanying the ever-increasing popularity of cryptocurrencies, Dutch Finance Minister Wopke Hoekstra outlined his concerns over the market's recent rapid boom in a six-page letter to the Dutch House of Representatives and the Senate.

Hoekstra argued that there has been little time to understand and react to the changing landscape, and that the current supervision and regulatory framework is ill-equipped to deal with it; because of the cross-border nature of the technology and markets, closing those gaps requires a unified approach across governments and borders.

Like many other policymakers, Hoekstra sees the value in promoting and developing the technology behind cryptocurrencies — specifically blockchain technology. However, in addition to the concern over fraud and hacking, the minister also expressed concern over the immature and unregulated nature of the market and how to better inform consumers of the potential risks.

Elsewhere

The Australian Taxation Office (ATO) has also issued a warning to the public: be wary of scammers impersonating the ATO and demanding Bitcoin or other cryptocurrencies as payment for fake tax debts. According to a statement from the agency, $50,000 has so far been paid in Bitcoin to scammers claiming to be its representatives, a figure that is expected to rise. Officials in other countries are likewise calling on governments and citizens to be wary of cryptocurrency scams.

Kath Anderson, the Assistant Commissioner of the ATO, describes the situation as follows: “Cryptocurrency operates in a virtual world, and once the scammers receive payment, it’s virtually impossible to get it back. Scammers are constantly adapting their methods to maximize their chances of picking your pocket. Unfortunately, it was inevitable that scammers would target cryptocurrency given its current popularity and anonymity.”

To reduce the chances of this happening again, the ATO is warning taxpayers that scammers are constantly changing their methods and are now looking to increase their gains using cryptocurrencies. In 2017, the ATO received almost 80,000 reports of scams, and more than $2.4 million was lost to scammers claiming to be from the agency. Strikingly, almost a third of victims paid not in cash or cryptocurrency but in iTunes gift cards, to the tune of $900,000.

The ATO reiterates in its statement that phone calls threatening legal action or police involvement are not from the ATO. The agency also warns that scammers will try to steal personal and private information such as home addresses, first and last names, bank account numbers, and other sensitive data: information the ATO will not call to ask for over the phone.

“If you receive a phone call out of the blue, threatening police or legal action if you don’t pay a debt, or the person calling you is rude and aggressive, hang up, it won’t be the ATO. Any call-back number provided should be checked via an independent internet search to ensure you are calling the ATO,” reads the statement.


BTCT Announces Completion of $1 Million Ethereum Strategic Reserve

BTC Digital Ltd. (“BTCT” or the “Company”), a leading blockchain technology company, today announced that it has established a strategic reserve of $1 million in Ethereum (ETH). This milestone marks the first phase of BTCT’s broader initiative to deepen its exposure to Ethereum’s on-chain financial infrastructure and to position the Company for long-term growth in the emerging digital-asset era.

“As the stablecoin market continues its explosive growth, Ethereum has emerged as the foundation of on-chain USD settlement and value transfer,” said Mr. Siguang Peng, CEO of BTCT. “By securing an initial $1 million ETH reserve today—and with plans to scale that position—we are proactively positioning ourselves for decentralized finance, stablecoin issuance, and asset tokenization. This strategic move strengthens our technological edge, enhances market confidence, and optimizes capital deployment.”

Key Drivers of BTCT’s Ethereum Reserve Strategy:

  • Dominant Stablecoin Infrastructure: Over half of all major stablecoins—including USDT and USDC—operate on Ethereum, making it the centerpiece of decentralized USD issuance and settlement.
  • High-Volume On-Chain Transactions: Stablecoin activity has pushed Ethereum’s on-chain transaction volume to levels approaching traditional financial rails, underscoring its growing importance in global payments and liquidity networks.
  • Collateral and Security Dynamics: As decentralized finance and real-world asset tokenization expand, increased ETH staking and collateralization will further reduce circulating supply, reinforcing network security and value.
  • Institutional Adoption: A number of mining and blockchain firms have already begun integrating ETH into their reserve portfolios, leveraging smart contracts for yield generation, collateral, and cross-chain financial products.
  • Regulatory and Technical Tailwinds: U.S. regulatory clarity, the forthcoming Pectra upgrade, and mature Layer-2 scaling solutions will significantly boost Ethereum’s throughput, cost-efficiency, and compliance readiness.

Looking Ahead

Building on its origins in large-scale crypto mining, BTCT is undergoing a strategic evolution from “hash-rate provider” to “on-chain financial infrastructure participant.” The Company intends to continuously augment its ETH holdings in alignment with market developments and network upgrades. BTCT believes that Ethereum, akin to “digital gold,” will remain indispensable—not only as a stablecoin settlement hub but also as a catalyst for decentralized payments, asset tokenization, and the next wave of global financial interoperability.

About BTC Digital Ltd.

BTC Digital Ltd. is a blockchain technology company with a long-term strategy to create value across the metaverse, blockchain, and cryptocurrency mining industries. The Company is committed to developing blockchain-related businesses in North America, including cryptocurrency mining, mining farm construction, mining pool and data center operation, and miner accessories.

For more information, please visit: https://btct.investorroom.com/

Ramp Introduces AI Agents to Automate Finance Operations

Ramp, the leading financial operations platform, has announced its first AI agents: agents for controllers, which automatically enforce company expense policies, eliminate unauthorized spending, and prevent fraud. This is the first in a series of specialized agents Ramp is releasing this year to further reduce the manual workload of finance teams.

Finance teams are being asked to do more with less, yet the function remains largely manual. Teams using legacy platforms today spend up to 70% of their time on tasks like expense review, policy enforcement, and compliance audits. As a result, 59% of professionals in controllership roles report making several errors each month.

Ramp’s agents for controllers solve these problems by eliminating redundant tasks, working autonomously to review expenses and enforce policy.

Built on Ramp Intelligence, powered by OpenAI’s reasoning models

Ramp’s agents for controllers apply context-aware, human-like reasoning to manage entire workflows independently and proactively. Unlike traditional automation that relies on basic rules and conditional logic, these agents reason and act on behalf of the finance team, working independently to enforce spend policies at scale, immediately prevent violations, and continuously improve company spending guidelines. Ramp agents are meticulous, auditable, and consistent, escalating issues when needed and providing a clear audit trail for every decision. Early customers reported 99% accuracy in expense approvals.

“Before Ramp agents, we manually reviewed 100% of transactions. Now, Ramp agents take the first pass and flag what actually needs our attention. Every decision Ramp makes is logged with a clear audit trail. Accuracy matters, and Ramp consistently gets it right. We’ve seen fewer errors, faster reviews, and stronger policy enforcement across the board,” said Richard Gobea, Finance Manager at Quora.

Ramp agents are powered by Ramp Intelligence, Ramp’s AI platform that automates expense reporting, data entry, contract review, and accounting checks. Agents learn and adapt directly from company policies and user feedback to:

  • Approve low-risk expenses or provide a recommendation with rationale to the approver
  • Alert of suspicious receipts and invoices
  • Answer employee questions about spend policy
  • Uncover trends that signal fraud or careless spend
  • Suggest edits to company expense policies based on usage and feedback (a generic sketch of the resulting approve-or-escalate pattern appears below)
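
For illustration only, here is a minimal Python sketch of the approve, flag, or escalate pattern such an agent automates; the thresholds, categories, and policy rules are hypothetical, and this is not Ramp’s implementation:

```python
# Toy approve/flag/escalate skeleton for automated expense review.
# Illustrative only: limits, categories, and rules are invented here.
from dataclasses import dataclass

@dataclass
class Expense:
    amount: float
    category: str
    has_receipt: bool

LOW_RISK_LIMIT = 75.0                         # hypothetical auto-approve ceiling
ALLOWED = {"meals", "rideshare", "software"}  # hypothetical policy categories

def review(expense: Expense) -> str:
    """Return a decision plus a rationale, mirroring the audit-trail idea."""
    if expense.category not in ALLOWED:
        return "escalate: category not covered by policy"
    if not expense.has_receipt:
        return "flag: missing receipt"
    if expense.amount <= LOW_RISK_LIMIT:
        return "approve: low-risk and in policy"
    return "recommend: in policy, but above auto-approve limit"

print(review(Expense(42.50, "meals", True)))      # approve
print(review(Expense(120.00, "software", True)))  # recommend to approver
```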

“Ramp agents have complete knowledge of your accounting rules and expense policies that employees don’t carry in their heads, plus instant access to transaction details that finance teams would need time to gather. This lets them act faster and more accurately on every transaction,” said Karim Atiyeh, co-founder and CTO at Ramp. “This isn’t just automation. It’s intelligent reasoning that handles complex financial decisions to reduce errors, strengthen policy enforcement, and stop fraud.”

“It’s amazing to see what Ramp has built with our newest reasoning models,” said Olivier Godement, head of platform product at OpenAI. “These agents are taking care of key financial processes and, most importantly, getting them right – letting teams focus on deeper strategic work.”

The AI advantage for finance

Ramp agents put best-in-class AI into the hands of finance teams and provide a new layer of engineering support to resource-constrained teams. Ramp invests 50% of its payroll in research and development so every finance team, no matter the size, can benefit from the latest breakthroughs in AI automation and reasoning.

“Ramp takes the manual work off our plate and gives us the confidence that we’re ahead of emerging AI fraud threats before they hit us,” said Lawrence Dann-Fenwick, Head of Strategic Finance at Hex.

Finance teams at leading AI companies like Notion, Hex, Sierra, and Quora already use Ramp to move faster, operate smarter, and stretch every dollar further.

To learn more about Ramp agents visit: ramp.com/intelligence.

About Ramp

Ramp is a financial operations platform designed to save companies time and money. Our all-in-one solution combines payments, corporate cards, vendor management, procurement, travel booking, and automated bookkeeping with built-in intelligence to maximize the impact of every dollar and hour spent. Over 40,000 customers, from family farms to space startups, have saved $10 billion and 27.5 million hours with Ramp. Founded in 2019, Ramp enables tens of billions in purchases annually. Learn more at www.ramp.com.

Lumina AI Debuts RCL 2.7.0 with Native Linux Support for GPU-Free Machine Learning

Ground-breaking, GPU-free machine learning now installs in minutes on Ubuntu, Red Hat Enterprise Linux, and Fedora, complete with a 30-day free trial.

Lumina AI today announced the general availability of Random Contrast Learning (RCL) 2.7.0, the first production release to include a fully native Linux build of its CPU-optimized machine-learning engine. Data-science teams can train and deploy high-accuracy models directly in Linux environments, without proprietary runtimes or specialized hardware.

“Adding Linux support means users can now use our AI tools on the operating system where most AI workloads run. This makes it easier for people to integrate RCL in their existing workflows and helps more organizations get value from our technology.” – Fadi Farhat, SVP Operations

RCL 2.7.0 Highlights

  • Native support for leading Linux distributions: Successfully tested on Ubuntu 22 and 24, Red Hat Enterprise Linux 9 and 10, and Fedora Workstation 42.
  • Consistent command-line experience: The Linux executables prismrcl and prismrclm behave exactly like their Windows counterparts; users simply adjust file paths to Linux syntax.
  • Auto-optimize 2.5+ routine: Automatically selects the most appropriate metric (accuracy, macro-F1, weighted-F1, or Matthews correlation coefficient) based on each dataset; an illustrative sketch follows this list.
  • LLM training mode: Adding the -llm flag with -readtextbyline places RCL in language-model training mode for datasets already prepared in the RCL-LLM format.
  • Broad data-type coverage: Handles image (.png), text, and tabular inputs; tabular data trains effectively without prior normalization.
  • Clean upgrade path: Earlier models must be retrained to ensure compatibility and auditability.
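
As a rough illustration of how a metric-selection heuristic of this kind might behave, the sketch below chooses among the four metrics from class balance using standard scikit-learn functions; the rule and thresholds are assumptions for the example, not Lumina's actual auto-optimize logic:

```python
# Illustrative metric selection from class balance. NOT Lumina's code;
# the thresholds below are invented for this example.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, matthews_corrcoef

def pick_metric(y: np.ndarray) -> str:
    counts = np.bincount(y)
    balance = counts.min() / counts.max()  # 1.0 means perfectly balanced
    if len(counts) == 2 and balance < 0.3:
        return "matthews"       # binary and heavily imbalanced
    if balance < 0.5:
        return "weighted-f1"    # moderate imbalance
    if len(counts) > 2:
        return "macro-f1"       # balanced multiclass
    return "accuracy"           # balanced binary

def score(metric: str, y_true, y_pred) -> float:
    return {
        "accuracy":    lambda: accuracy_score(y_true, y_pred),
        "macro-f1":    lambda: f1_score(y_true, y_pred, average="macro"),
        "weighted-f1": lambda: f1_score(y_true, y_pred, average="weighted"),
        "matthews":    lambda: matthews_corrcoef(y_true, y_pred),
    }[metric]()

y_true = np.array([0, 0, 0, 0, 1, 0, 0, 1, 0, 0])
y_pred = np.array([0, 0, 0, 1, 1, 0, 0, 1, 0, 0])
m = pick_metric(y_true)
print(m, round(score(m, y_true, y_pred), 3))
```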

“With native Linux support, RCL 2.7.0 positions Lumina at the intersection of open-source innovation and sustainable AI. We’re proving that state-of-the-art performance doesn’t require GPUs—just smart engineering on the hardware organizations already own.” – Allan Martin, CEO

Availability and 30-Day Trial

RCL 2.7.0 with native Linux support is available today. Organizations can begin a 30-day free trial at lumina247.com/prismrcl-sign-up-2-0.

About Lumina AI

Lumina AI pioneers Random Contrast Learning, an algorithm that achieves state-of-the-art accuracy with dramatically faster training times—no GPUs required. From healthcare imaging to financial fraud detection, Lumina delivers sustainable, CPU-first machine-learning solutions across Windows and Linux.

Bedroom Trader OLI: Trade the World’s Crypto Markets Without Leaving Your Pillow

Why Establish an Office or Construct a Lavish Trading Desk?

Bedroom Trader OLI (OLI) demonstrates that participation in global markets requires nothing more than a smartphone, a Wi‑Fi connection, and a comfortable setting.

What Is OLI?

Bedroom Trader OLI represents a meme-driven cryptocurrency collective tailored for casual traders who prioritise convenience over traditional workspaces. The initiative is grounded in three foundational principles: unrestricted accessibility, decentralised community governance, and inclusive meme culture. Trading activities may commence at any time and location, free from conventional entry barriers. Token holders are empowered to direct major organisational decisions through decentralised autonomous organisation (DAO) voting mechanisms. A culture of humorous exchange and viral creativity encourages ease of entry and fosters a welcoming environment. OLI redefines participation in cryptocurrency by transforming it into a familiar daily activity that accommodates both newcomers and experienced market participants.

Tokenomics in Plain English

The OLI token supply dynamically adjusts in response to community activity. At the close of each month, a tally of active wallets is conducted, followed by the minting of 50 to 150 billion OLI tokens based on that figure. Fifty percent is allocated as a universal reward for all holders. Every wallet in possession of OLI at the time of the monthly snapshot receives an equal share, ensuring equity and preventing concentration of tokens among large holders. The remaining fifty percent is dedicated to ecosystem development. This allocation supports airdrops, marketing initiatives, strategic partnerships, liquidity provisioning, and the operational development fund. By linking token issuance to verified engagement, OLI cultivates a self-regulating economy that prioritises collective growth.
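
A minimal sketch of the emission and distribution rules described above, assuming a linear relationship between the wallet tally and the mint amount; the project has not published an exact formula, so all figures here are illustrative:

```python
# Toy sketch of the monthly OLI emission described above. Assumption:
# the mint scales linearly between 50B and 150B with active wallets.
MIN_MINT = 50_000_000_000
MAX_MINT = 150_000_000_000

def monthly_mint(active_wallets: int, max_wallets: int = 1_000_000) -> int:
    """Interpolate the monthly mint from the active-wallet tally."""
    ratio = min(active_wallets / max_wallets, 1.0)
    return int(MIN_MINT + ratio * (MAX_MINT - MIN_MINT))

def distribute(minted: int, holder_wallets: int) -> tuple[int, int]:
    """Split: 50% equally among snapshot holders, 50% to the ecosystem fund."""
    holder_pool = minted // 2
    return holder_pool // holder_wallets, minted - holder_pool

minted = monthly_mint(active_wallets=400_000)
per_wallet, fund = distribute(minted, holder_wallets=400_000)
print(f"minted={minted:,}  per_wallet={per_wallet:,}  ecosystem={fund:,}")
```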

Roadmap Highlights

Q4 2024 – Launch of the Bedroom Trading Challenge, featuring the initial airdrop, social media competitions, and a beta leaderboard.
Q1 2025 – Implementation of DAO Voting Module, enabling live on-chain proposal submissions and governance procedures.
Q2 2025 – Release of OLI Swap, offering one-tap token swaps and liquidity farming directly via mobile platforms.
Q3 2025 – Initiation of Meme Collaboration Season, including partnerships with leading meme tokens to enhance reach and utility.

Join the Pillow‑Powered Revolution

The need for conventional office settings is obsolete. With the use of a mobile device and a comfortable environment, market participation becomes straightforward, interactive, and enjoyable.

Website – https://tradeoli.io

Trustee Plus Revolution: Hundreds of Visitors Instantly Received a Fraction of Bitcoin at Money20/20. How Did It Happen?

This year at Money20/20 in Amsterdam, the financial community witnessed a real sensation at the booth of the crypto wallet Trustee Plus. For the first time in the event’s history, anyone could receive a fraction of Bitcoin using only a mobile phone number — even if they had never used cryptocurrency before.

This innovative technology allows users to initiate a crypto transfer to a mobile number, even if the recipient is not yet a Trustee Plus user. Once the recipient downloads the app, the funds automatically appear in their balance. The received Bitcoin can then be held, exchanged, or spent.
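
Transfers addressed to a phone number are commonly implemented as a custodial pending-claim record that is released once the recipient registers. The sketch below illustrates that generic pattern; it is an assumption about the mechanism, not Trustee Plus's actual backend:

```python
# Generic "pending claim" pattern for sending crypto to a phone number.
# Purely illustrative; Trustee Plus has not published its backend design.
pending: dict[str, float] = {}   # phone number -> unclaimed BTC
balances: dict[str, float] = {}  # registered user -> BTC balance

def send_to_phone(phone: str, amount_btc: float) -> None:
    """Escrow the transfer until the recipient installs the app."""
    pending[phone] = pending.get(phone, 0.0) + amount_btc

def register(phone: str) -> None:
    """On signup, release any escrowed funds to the new wallet."""
    balances[phone] = balances.get(phone, 0.0) + pending.pop(phone, 0.0)

send_to_phone("+31600000000", 0.0005)
register("+31600000000")
print(balances)  # {'+31600000000': 0.0005}
```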

This solution sparked significant interest among Money20/20 attendees. According to company representatives, nearly 500 visitors enriched their crypto portfolios thanks to the Trustee Plus booth.

“One of Trustee Plus’s core missions is to unlock the potential of future finance for everyday people. We believe cryptocurrency should work much more simply and intuitively than traditional banking services. And when we hear from new users, ‘I’ll remember this gifted piece of Bitcoin for the rest of my life’, it reminds us that everything we do truly matters,” said Vadym Hrusha, Founder of Trustee Plus.

Effortless Bitcoin Spending with Trustee

Trustee Plus also enables seamless conversion of Bitcoin to euros directly within the app. The converted funds can be used in several ways: via SEPA transfers to any European IBAN, or through the Quicko Digital virtual card, which is currently available for free issuance. All card operations are commission-free.

By combining an intuitive interface, instant transfers, and the ability to use crypto in everyday transactions, Trustee Plus takes another step toward the mass adoption of digital assets in Europe’s financial ecosystem.

Earlier, the largest crypto media outlet in Eastern Europe, Incrypted, included Trustee Plus in its list of the “Top 12 Best Cryptocurrency Projects for Paying with Bitcoin or Ethereum in 2025.”

Skywork-Reward-V2: Leading the New Milestone for Open-Source Reward Models

In September 2024, Skywork first open-sourced the Skywork-Reward series models and related datasets. Over the past nine months, these models and data have been widely adopted by the open-source community for research and practice, with over 750,000 cumulative downloads on the HuggingFace platform, helping multiple frontier models achieve excellent results in authoritative evaluations such as RewardBench.

On July 4, 2025, Skywork continued this effort by open-sourcing its second-generation reward models: the Skywork-Reward-V2 series, comprising eight reward models built on different base models, with parameter counts ranging from 600 million to 8 billion. These models achieved top rankings across seven major mainstream reward-model evaluation benchmarks.

Skywork-Reward-V2 Download Links
HuggingFace: https://huggingface.co/collections/Skywork/skywork-reward-v2-685cc86ce5d9c9e4be500c84
GitHub: https://github.com/SkyworkAI/Skywork-Reward-V2
Technical Report: https://arxiv.org/abs/2507.01352

Reward models play a crucial role in the Reinforcement Learning from Human Feedback (RLHF) process. In developing this new generation of reward models, we constructed a hybrid dataset called Skywork-SynPref-40M, containing a total of 40 million preference pairs.
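
For readers unfamiliar with how preference pairs are used, reward models are typically trained with the Bradley-Terry pairwise objective: the model should score the chosen response above the rejected one. The sketch below shows this standard loss in PyTorch; it is a generic illustration, not Skywork's training code:

```python
# Generic Bradley-Terry pairwise loss used to train reward models on
# (chosen, rejected) preference pairs. Illustrative, not Skywork's code.
import torch
import torch.nn.functional as F

def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Minimized when the model scores chosen responses above rejected ones."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Scores a reward model might assign to a small batch of pairs
r_chosen = torch.tensor([1.2, 0.3, 2.0])
r_rejected = torch.tensor([0.4, 0.5, 1.1])
print(preference_loss(r_chosen, r_rejected))  # small when pairs are well ordered
```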

To achieve large-scale, efficient data screening and filtering, Skywork specially designed a two-stage human-machine collaborative process that combines high-quality human annotation with the scalable processing capabilities of models. In this process, humans provide rigorously verified high-quality annotations, while Large Language Models (LLMs) automatically organize and expand based on human guidance.

Based on the above high-quality hybrid preference data, we developed the Skywork-Reward-V2 series, which demonstrates broad applicability and excellent performance across multiple capability dimensions, including general alignment with human preferences, objective correctness, safety, resistance to style bias, and best-of-N scaling capability. Experimental validation shows that this series of models achieved the best performance on seven mainstream reward model evaluation benchmarks.

01 Skywork-SynPref-40M: Human-Machine Collaboration for Million-Scale Human Preference Data Screening

Even the most advanced current open-source reward models still perform inadequately on most mainstream evaluation benchmarks. They fail to effectively capture the subtle and complex characteristics of human preferences, particularly when facing multi-dimensional, multi-level feedback.

Additionally, many reward models tend to excel on specific benchmark tasks but struggle to transfer to new tasks or scenarios, exhibiting obvious “overfitting” phenomena. Although existing research has attempted to improve performance through optimizing objective functions, improving model architectures, and recently emerging Generative Reward Models, the overall effectiveness remains quite limited.

We believe that the current fragility of reward models mainly stems from the limitations of existing preference datasets, which often have limited coverage, mechanical label generation methods, or lack rigorous quality control.

Therefore, in developing the new generation of reward models, we not only continued the first generation’s experience in data optimization but also introduced more diverse and larger-scale real human preference data, striving to improve data scale while maintaining data quality.

Consequently, Skywork proposes Skywork-SynPref-40M – the largest preference hybrid dataset to date, containing a total of 40 million preference sample pairs. Its core innovation lies in a “human-machine collaboration, two-stage iteration” data selection pipeline.

Stage 1: Human-Guided Small-Scale High-Quality Preference Construction

The team first constructed an unverified initial preference pool and used Large Language Models (LLMs) to generate preference-related auxiliary attributes such as task type, objectivity, and controversy. Based on this, human annotators followed a strict verification protocol and used external tools and advanced LLMs to conduct detailed reviews of partial data, ultimately constructing a small-scale but high-quality “gold standard” dataset as the basis for subsequent data generation and model evaluation.

Subsequently, we used preference labels from the gold standard data as guidance, combined with LLM large-scale generation of high-quality “silver standard” data, thus achieving data volume expansion. The team also conducted multiple rounds of iterative optimization: in each round, training reward models and identifying model weaknesses based on their performance on gold standard data; then retrieving similar samples and using multi-model consensus mechanisms for automatic annotation to further expand and enhance silver standard data. This human-machine collaborative closed-loop process continues iteratively, effectively improving the reward model’s understanding and discrimination of preferences.
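
The multi-model consensus step can be pictured as a simple voting rule over candidate labels; the 75% agreement threshold below is an assumption for illustration, not the paper's exact mechanism:

```python
# Toy multi-model consensus vote for automatic annotation.
from collections import Counter

def consensus_label(model_votes: list[str], min_agreement: float = 0.75):
    """Return the majority label if agreement clears the threshold, else None."""
    label, count = Counter(model_votes).most_common(1)[0]
    return label if count / len(model_votes) >= min_agreement else None

print(consensus_label(["A", "A", "A", "B"]))  # "A" (3/4 agree)
print(consensus_label(["A", "B", "A", "B"]))  # None -> needs re-annotation
```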

Stage 2: Fully Automated Large-Scale Preference Data Expansion

After obtaining preliminary high-quality models, the second stage turns to automated large-scale data expansion. This stage no longer relies on manual review but uses trained reward models to perform consistency filtering:

  • If a sample’s label is inconsistent with the current optimal model’s prediction, or if the model’s confidence is low, LLMs are called to automatically re-annotate;
  • If the sample label is consistent with the “gold model” (i.e., a model trained only on human data) prediction and receives support from the current model or LLM, it can directly pass screening.

Through this mechanism, the team successfully screened 26 million selected data points from the original 40 million samples, achieving a good balance between preference data scale and quality while greatly reducing the human annotation burden.
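
The two screening rules translate naturally into a filter like the following sketch; the model interfaces, confidence threshold, and re-annotation call are hypothetical stand-ins, not Skywork's released code:

```python
# Sketch of the stage-2 consistency filter described above. The models
# and the LLM re-annotator are hypothetical stand-ins (plain callables).
from dataclasses import dataclass
from typing import Callable, Optional, Tuple

CONF_THRESHOLD = 0.8  # assumed cutoff for "low confidence"

@dataclass
class Sample:
    pair_id: str
    label: str  # which response is preferred

def filter_sample(
    sample: Sample,
    current_model: Callable[[Sample], Tuple[str, float]],
    gold_model: Callable[[Sample], Tuple[str, float]],
    llm_annotate: Callable[[Sample], str],
) -> Tuple[Optional[Sample], str]:
    pred, confidence = current_model(sample)
    # Rule 1: inconsistent label or low confidence -> LLM re-annotates
    if pred != sample.label or confidence < CONF_THRESHOLD:
        sample.label = llm_annotate(sample)
        return sample, "re-annotated"
    # Rule 2: gold model (trained only on human data) also agrees -> pass
    if gold_model(sample)[0] == sample.label:
        return sample, "passed"
    return None, "dropped"

s = Sample("pair-001", "chosen_A")
print(filter_sample(s,
                    current_model=lambda x: ("chosen_A", 0.9),
                    gold_model=lambda x: ("chosen_A", 1.0),
                    llm_annotate=lambda x: "chosen_B"))
```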

02 Skywork-Reward-V2: Matching Large Model Performance with Small Model Size

Compared with the previous-generation Skywork-Reward, the newly released Skywork-Reward-V2 series provides eight reward models trained on the Qwen3 and LLaMA3 series of base models, with parameter scales ranging from 600 million to 8 billion.

On seven mainstream reward-model evaluation benchmarks, including RewardBench v1/v2, PPE Preference & Correctness, RMB, RM-Bench, and JudgeBench, the Skywork-Reward-V2 series comprehensively achieved current state-of-the-art (SOTA) results.

Compensating for Model Scale Limitations with Data Quality and Richness

Even the smallest model, Skywork-Reward-V2-Qwen3-0.6B, achieves average overall performance nearly matching the previous generation’s strongest model, Skywork-Reward-Gemma-2-27B-v0.2. The largest model, Skywork-Reward-V2-Llama-3.1-8B, achieved comprehensive superiority across all mainstream benchmark tests, becoming the currently best-performing open-source reward model overall.

Broad Coverage of Multi-Dimensional Human Preference Capabilities

Additionally, Skywork-Reward-V2 achieved leading results in multiple advanced capability evaluations, including Best-of-N (BoN) tasks, bias resistance capability testing (RM-Bench), complex instruction understanding, and truthfulness judgment (RewardBench v2), demonstrating excellent generalization ability and practicality.
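
Best-of-N is the simplest inference-time use of a reward model: sample N candidate responses and keep the one the reward model scores highest. A generic sketch follows (the scorer is a placeholder, not Skywork's API):

```python
# Generic best-of-N selection. `score` stands in for a reward-model
# forward pass; the toy scorer below is only for the demo.
def best_of_n(prompt: str, candidates: list[str], score) -> str:
    """Return the candidate the reward model scores highest for the prompt."""
    return max(candidates, key=lambda c: score(prompt, c))

toy_score = lambda prompt, cand: len(cand)  # placeholder scorer
print(best_of_n("Explain RLHF briefly.",
                ["short.", "a longer, more detailed answer"],
                toy_score))
```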

Highly Scalable Data Screening Process Significantly Improves Reward Model Performance

Beyond excellent performance in evaluations, the team also found that in the “human-machine collaboration, two-stage iteration” data construction process, preference data that underwent careful screening and filtering could continuously and effectively improve reward models’ overall performance through multiple iterative training rounds, especially showing remarkable performance in the second stage’s fully automated data expansion.

In contrast, blindly expanding raw data not only fails to improve initial performance but may introduce noise and negative effects. To further validate the critical role of data quality, we conducted experiments on a subset of 16 million data points from an early version. Results showed that training an 8B-scale model using only 1.8% (about 290,000) of the high-quality data already exceeded the performance of current 70B-level SOTA reward models. This result again confirms that the Skywork-SynPref dataset not only leads in scale but also has significant advantages in data quality.

03 Welcoming a New Milestone for Open-Source Reward Models: Helping Build Future AI Infrastructure

In this research work on the second-generation reward model Skywork-Reward-V2, the team proposed Skywork-SynPref-40M, a hybrid dataset containing 40 million preference pairs (with 26 million carefully screened pairs), and Skywork-Reward-V2, a series of eight reward models with state-of-the-art performance designed for broad task applicability.

We believe this research work and the continued iteration of reward models will help advance the development of open-source reward models and more broadly promote progress in Reinforcement Learning from Human Feedback (RLHF) research. This represents an important step forward for the field and can further accelerate the prosperity of the open-source community.

The Skywork-Reward-V2 series models focus on research into scaling preference data. In the future, the team’s research scope will gradually expand to other areas that have not been fully explored, such as alternative training techniques and modeling objectives.

Meanwhile, recent development trends in the field show that reward models and reward-shaping mechanisms have become core components of today’s large-scale language-model training pipelines. They apply not only to RLHF based on human preference learning and behavior guidance, but also to Reinforcement Learning with Verifiable Rewards (RLVR) for mathematics, programming, and general reasoning tasks, as well as to agent-based learning scenarios.

Therefore, we envision that reward models, or more broadly, unified reward systems, are poised to form the core of AI infrastructure in the future. They will no longer merely serve as evaluators of behavior or correctness, but will become the “compass” for intelligent systems navigating complex environments, helping them align with human values and continuously evolve toward more meaningful goals.

Additionally, Skywork released the world’s first deep research AI workspace agents in May, which you can experience at skywork.ai.