On a Tuesday morning in March 2025, a Seoul-based engineer in his late forties walked into a meeting room and said no to $800 million. The buyer was Meta. The seller — a fabless semiconductor startup most American venture capitalists had never heard of — was FuriosaAI.

Eighteen months later, the FuriosaAI Korea AI chip story has rewritten what foreign investors thought they knew about Asian deep-tech founders. The same company that turned away Mark Zuckerberg's check is now racing toward a 2027 IPO, backed by a Series D round of up to $500 million, an OpenAI partnership, and an LG enterprise deployment.

For most Western readers, this is the part of Korea that does not show up on Netflix. It is also the part that matters most for the next decade of global AI infrastructure. And the story of how a Korean Nvidia challenger refused to sell hinges on one decision, made by one person: a former Samsung and AMD engineer named June Paik (백준호). This is the inside story of that decision — and what it means for foreign investors, founders, and operators watching Korea's deep-tech ascent.

The $800 Million Decision That Reset the Korean AI Chip Startup Map

In early 2025, FuriosaAI was raising fresh capital at a valuation of roughly 800 billion won, or about $550 million. Meanwhile, Meta was spending billions to reduce its dependence on Nvidia. The news broke in February 2025, when Forbes reported that the social-media giant was in advanced acquisition talks with the Seoul-based Korean AI chip startup. The offer reportedly landed near $800 million.

For a 140-person company that had raised only $115 million in equity to date, this was an extraordinary number. It represented a roughly 45% premium over the company's then-current valuation and offered immediate liquidity to investors including Naver, DSC Investment, and CRIT Ventures.

Then Paik said no.
According to local Korean media, the breakdown happened over post-acquisition strategy and organizational structure rather than price. Paik wanted to keep building; Meta wanted to absorb. The two visions did not align.

The rejection was not announced through a press release. The news leaked out gradually through Korean financial media in March 2025, then surfaced in TechCrunch's coverage months later.

For Korean tech watchers, the moment carried symbolic weight. South Korea has a long history of selling its best deep-tech to foreign acquirers — from semiconductor IP to mobile patents to gaming studios. When a domestic chip startup turned down Big Tech money to remain independent, the message landed beyond engineering circles: a new kind of Korean technology founder had arrived.

From Samsung Storage to Silicon Valley: The Making of a Korean Nvidia Challenger

To understand why Paik said no, it helps to retrace how he got there. Paik holds a master's degree in electrical engineering from the Georgia Institute of Technology. Before founding the company, he spent roughly three years at AMD as a software engineer working on multi-GPU stacks, then moved to Samsung Electronics as a hardware engineer focused on storage systems from 2013 to 2016. His career sits at exactly the seam most chip startups need: hardware-software co-design.

In January 2017, he left Samsung and founded FuriosaAI. The name was originally a placeholder, borrowed casually from the Mad Max character Imperator Furiosa. It stuck — and it now signals what the team describes as relentless determination.

The company's structure reflects Paik's bicultural instinct. FuriosaAI operates from two locations: a Seoul headquarters where most engineers sit, and a Santa Clara, California office that anchors customer relationships in the United States.
That footprint lets the company recruit Korean PhDs, American compiler engineers, and former Qualcomm and Google staff in parallel. More than 90% of its 140-person headcount is technical.

For foreign investors evaluating Korean Nvidia challengers, this matters more than it sounds. Most Korean deep-tech firms historically struggled to recruit globally because their working language was Korean and their stock options were illiquid. Paik, by contrast, built a company that operates fluently in both English and Korean from day one. FuriosaAI today competes for talent against Cerebras, Groq, and SambaNova in the same hiring pool — not just against domestic chaebol affiliates.

His personal stake reflects long-term commitment. According to public filings, Paik holds approximately 18.4% of FuriosaAI. At the rejected Meta valuation of $800 million, that stake was worth roughly $147 million on paper. At the current Series D target of around $2.3 billion, it is closer to $420 million.

The First Chip Nobody Bought: Warboy and the Lessons of 2021

Before RNGD, there was Warboy. The company's first-generation chip launched in 2021, manufactured by Samsung Foundry on a 14-nanometer process and tuned for computer-vision inference rather than large language models. That was a reasonable bet at the time: transformer-based LLMs had not yet broken into mainstream enterprise demand.

The reception was mixed. Warboy posted respectable scores on MLPerf, the industry's most credible third-party performance benchmark. But the chip arrived at a moment when Nvidia's A100 had just consolidated its grip on AI training workloads, and large enterprise buyers stuck with what worked.

Still, Warboy delivered one critical reference customer: Kakao, the Korean cloud and messaging conglomerate.
Kakao adopted Warboy for computer-vision AI workloads, giving FuriosaAI its first commercial validation. For a Korean Nvidia challenger trying to prove enterprise readiness, the Kakao deployment provided exactly the kind of credibility that pre-revenue chip startups rarely secure.

Meanwhile, Paik watched OpenAI release GPT-3 and saw the future: LLM inference would become the dominant workload of the coming decade. In late 2021 he committed the company to a clean-sheet redesign. The next chip would not be a faster Warboy. It would be built from the ground up for transformers. That decision would consume the next three years and reshape the company.

The Tensor Contraction Gamble: Why the FuriosaAI RNGD Chip Looks Different

The standard playbook for AI accelerators is straightforward: build a fast matrix-multiplication engine, stack memory next to it, and wrap it in software that translates PyTorch into chip-native instructions. Nvidia's GPUs do this. So do most of the new American challengers like Groq, Cerebras, and SambaNova.

Paik's team chose a different math. The FuriosaAI RNGD chip — pronounced "Renegade" — is built on what the company calls a Tensor Contraction Processor architecture. Tensor contraction, the core operation in deep learning, is usually decomposed into a series of matrix multiplications; RNGD instead treats the whole contraction as a first-class operation. The chip therefore requires fewer instructions to complete the same workload and minimizes data movement between the chip and memory.

This sounds esoteric, but the practical implications are dramatic. RNGD ships in a PCIe card with a 180-watt thermal footprint. By contrast, Nvidia's H100 SXM version draws up to 700 watts.
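The contraction-first idea is easiest to see in code. The sketch below is a generic illustration, not FuriosaAI's implementation: an attention-style score computation written both as per-slice matrix multiplications (the standard decomposition) and as a single tensor contraction via `einsum`.

```python
import numpy as np

# Illustration of contraction vs. decomposed matmuls; this is NOT
# FuriosaAI's code, just the general idea behind a contraction-first design.
# Shapes: batch b, heads h, query length s, key length t, head dim d.
b, h, s, t, d = 2, 4, 8, 8, 16
rng = np.random.default_rng(0)
q = rng.standard_normal((b, h, s, d))
k = rng.standard_normal((b, h, t, d))

# Standard decomposition: dispatch one matrix multiply per (batch, head) slice.
scores_matmul = np.empty((b, h, s, t))
for i in range(b):
    for j in range(h):
        scores_matmul[i, j] = q[i, j] @ k[i, j].T

# Contraction as a first-class operation: one einsum over the whole tensor,
# leaving the scheduler free to minimize instruction count and data movement.
scores_einsum = np.einsum("bhsd,bhtd->bhst", q, k)

assert np.allclose(scores_matmul, scores_einsum)
```

Both paths produce the same result. The difference is that the contraction form hands the hardware one whole operation to schedule instead of many small ones, which is the kind of freedom a Tensor Contraction Processor design is built to exploit.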
FuriosaAI claims RNGD delivers up to three times better performance per watt than the H100 on large language model workloads, and that the total cost of building a server on its silicon runs roughly 50% of an equivalent Nvidia configuration.

The choice of memory matters too. RNGD integrates 48GB of HBM3 memory with 1.5 TB/s of bandwidth. This is what separates FuriosaAI from most American inference-chip startups: Groq, Cerebras, and d-Matrix all opted to skip HBM in favor of SRAM-only or custom memory hierarchies, so those chips struggle to run very large models on reasonable hardware footprints. Furiosa, by contrast, can run Llama 3.1 70B on just eight RNGD cards.

The chip was manufactured at TSMC on a 5-nanometer process, with mass production now underway. Each card delivers 512 TFLOPS of FP8 performance. Industry observers at HPCwire have described the architecture as one of the most distinctive bets in the inference-chip race. For more on Korea's broader semiconductor positioning, see Seoulz's earlier reporting on the Korea HBM chip war 2026.

LG Said Yes, Meta Said Buy: The FuriosaAI Korea AI Chip Inflection

In July 2025, three months after rejecting Meta, FuriosaAI announced a partnership with LG AI Research: LG would deploy RNGD-powered servers to run its EXAONE 4.0 large language model platform across electronics, finance, telecommunications, and biotechnology workloads.

This was not a press-release partnership. According to LG, its team had spent two years evaluating FuriosaAI's hardware across performance, energy efficiency, and software-stack readiness. After months of rigorous testing, LG's research division reached a clear conclusion: RNGD delivered 2.25 times better inference performance per watt than the GPUs it had been using.
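Why performance per watt dominates enterprise buying decisions becomes concrete once you account for rack power limits. A back-of-envelope sketch, using only the per-card figures already cited (180 W for an RNGD card, up to 700 W for an H100 SXM) and a typical rack budget of roughly 15 kW:

```python
# Back-of-envelope accelerator density under a fixed rack power budget.
# Per-card power figures are from the article; the 15 kW budget is a
# common data-center rack limit. Host CPU and cooling overhead are ignored.
RACK_BUDGET_W = 15_000
RNGD_W = 180          # RNGD PCIe card
H100_SXM_W = 700      # Nvidia H100 SXM

rngd_per_rack = RACK_BUDGET_W // RNGD_W      # 83 cards
h100_per_rack = RACK_BUDGET_W // H100_SXM_W  # 21 GPUs

density_ratio = rngd_per_rack / h100_per_rack
print(rngd_per_rack, h100_per_rack, round(density_ratio, 1))  # 83 21 4.0
```

Card count alone is not throughput, since per-card performance differs, but the arithmetic shows why a sub-200-watt part changes rack-level economics.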
The chip also met LG's latency and throughput requirements out of the box. For an enterprise buyer, that efficiency gap translates directly: FuriosaAI reported that EXAONE running on RNGD produced approximately 3.5 times more tokens per data-center rack than the GPU configuration it replaced. As Paik told TechRadar in early 2026: "Because most data center racks are restricted to under 15kW, our low power consumption is a critical breakthrough."

The LG win mattered for one more reason. It was a Korean champion choosing a Korean champion — and choosing it on technical merit, not policy pressure. It gave FuriosaAI a flagship enterprise deployment that competing American startups could not easily match, and LG also adopted FuriosaAI's hardware to support its internal AI agent, ChatExaone. The FuriosaAI Korea AI chip story now had a name brand attached.

The Meta negotiation moved in the opposite direction. Sources familiar with the talks indicate Meta was attracted to two things: the Tensor Contraction Processor architecture, and the rare combination of HBM3 integration with mature manufacturing partnerships. Meta wanted to bring those capabilities in-house to reduce its Nvidia dependency. Paik's team, however, feared that absorption into Meta's chip program would freeze RNGD's enterprise commercialization. The deal fell apart not over price but over strategic direction.

The OpenAI Demo That Reframed the FuriosaAI Korea AI Chip Story

On September 11, 2025, OpenAI opened its first office in Seoul before roughly 300 industry stakeholders. The launch included a single hardware-company live demonstration — and it was not Nvidia. It was FuriosaAI. CTO Hanjoon Kim took the stage and ran OpenAI's open-weight gpt-oss 120B model in real time.
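Fitting a 120-billion-parameter model on so little silicon is mostly a memory question. A simplified sketch, assuming 4-bit MXFP4 weights and the 48GB-per-card HBM capacity cited earlier (activation memory and KV cache are set aside for brevity):

```python
# Simplified weight-memory arithmetic for a 120B-parameter model at MXFP4.
# MXFP4 stores weights in 4 bits (0.5 bytes) plus small per-block scales,
# which add only a few percent and are ignored here.
PARAMS = 120e9
BYTES_PER_PARAM = 0.5                         # 4-bit weights

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9   # ~60 GB of weights

CARD_HBM_GB = 48                              # HBM3 per RNGD card
cards = 2
total_hbm_gb = CARD_HBM_GB * cards            # 96 GB across two cards

assert weights_gb < total_hbm_gb              # ~36 GB left for KV cache etc.
print(round(weights_gb), total_hbm_gb)        # 60 96
```

By the same arithmetic, SRAM-only designs with tens of megabytes per chip need hundreds of chips just to hold the weights, which is why the competing deployments described below span whole racks.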
The setup was deliberately compact: just two RNGD cards using MXFP4 precision, running a 120-billion-parameter model as a real-time chatbot. For comparison, the same workload reportedly demands 64 d-Matrix chips spanning a full rack, or roughly 576 Groq chips spread across multiple racks.

The demonstration was published on FuriosaAI's official blog and quickly picked up by the Western tech press. OpenAI's Daniel Mirza described it as the first time an OpenAI model had run on Korean silicon. Paik framed the moment in terms of the company's mission, telling the audience: "OpenAI's mission of making AGI beneficial to all of humanity aligns with our philosophy of making AI sustainable and accessible."

For investors watching the FuriosaAI RNGD chip story, this was the validation moment. Three things happened at once. First, an American AI lab with global influence chose Korean hardware for its Seoul launch. Second, the demonstration ran a state-of-the-art open-weight model on a power footprint that fits inside a normal enterprise rack. Third, the event came just weeks after FuriosaAI closed a $125 million Series C bridge round that lifted its valuation to $735 million — within striking distance of the Meta offer it had walked away from, and climbing.

The Independent Path: Series D, IPO, and Global Expansion

By early 2026, the FuriosaAI Korea AI chip story had become a textbook example of capital following technical validation. The company appointed Morgan Stanley and Mirae Asset Securities as joint advisers on a Series D pre-IPO round of $300 to $500 million. According to Bloomberg and other reports, industry sources expect the new round to value FuriosaAI at roughly 3 trillion won, or approximately $2.3 billion.

If the round closes at that level, two implications follow.
First, FuriosaAI's valuation would rival or exceed that of Rebellions, the better-known sibling in Korea's AI semiconductor trinity. Second, the gap between FuriosaAI's valuation and the Meta offer would have widened to nearly three times in just over a year.

The use of proceeds is concrete. The Series D capital will fund three priorities: scaling mass production of RNGD at TSMC, R&D for a third-generation chip, and global commercial expansion. Production targets are aggressive: FuriosaAI plans to ship 20,000 RNGD units in 2026, up from a few hundred in 2025. Meanwhile, the company is preparing HBM3E-based variants — Renegade+ and Renegade+ Max — that will push single-card memory capacity to roughly 144GB.

The IPO timing reflects deliberate sequencing. According to multiple sources, FuriosaAI is targeting a public listing in 2027 or 2028, after revenue has scaled and margins have stabilized. Rebellions, by contrast, is on track for an earlier 2026 IPO. FuriosaAI is positioning itself as the more deliberate, less state-coupled player in the Korean Nvidia challenger field.

International expansion is already underway. FuriosaAI signed a public-sector distribution agreement with Korean IT firm SysOne to supply RNGD cards to government AI deployments, and is pursuing customer pipelines in the United States, Saudi Arabia, and Southeast Asia. OpenAI's Seoul office is now testing RNGD against additional model workloads, and more enterprise references are expected through 2026.

What This Means for Korea — and for Foreign Investors

The FuriosaAI Korea AI chip narrative does not exist in a vacuum. It sits at the center of a coordinated South Korean industrial policy that local press has nicknamed "K-Nvidia." The Korean government's National Growth Fund has identified five domestic NPU developers eligible for strategic public-private investment: FuriosaAI, Rebellions, DeepX, HyperAccel, and Mobilint.
Of these, the first three have absorbed nearly all the institutional capital so far.

For foreign investors, the structural read is straightforward. Korea is treating AI semiconductor capability as a sovereign-level priority on par with batteries and bio, and the policy framework explicitly favors companies that remain independent rather than being absorbed by US hyperscalers. Paik's decision to reject Meta now looks less like a single founder's preference and more like strategic alignment with national industrial policy. For the broader context on Korea's deep-tech investment wave, see Seoulz's coverage of the top 10 Korea scale-ups for 2026 and its reporting on the K-biotech surge reshaping global pharma.

The investor signal is clearer than it has been in years. The cap table tells the story: Samsung Foundry, SK Hynix, Naver, and the Korea Development Bank all sit on it, while LG and OpenAI are public reference customers. The company is no longer a venture bet. It is national infrastructure.

The Risks Nobody Talks About

Still, the FuriosaAI Korea AI chip thesis carries real risks that bullish coverage tends to underplay.

The most acute is competitive. In November 2025, Nvidia acquired Groq for approximately $20 billion. AMD absorbed Untether, and OpenAI signed a $10 billion-plus partnership with Cerebras, which is now refiling for an IPO at a $23 billion valuation. The inference-chip market is consolidating around well-capitalized players faster than any Korean independent can match dollar-for-dollar.

Second, revenue remains modest. FuriosaAI shipped only a handful of pilot deployments through 2025, and its near-term revenue forecast depends heavily on LG and a small number of additional enterprise references. Cerebras and SambaNova, by contrast, report revenue run rates measured in the hundreds of millions.
Third, software ecosystem lock-in remains Nvidia's deepest moat. CUDA is a developer-ecosystem advantage that no inference-chip startup — Korean or American — has yet meaningfully eroded. Furiosa is investing heavily in PyTorch compatibility, vLLM support, and an OpenAI-compatible API layer. Until the developer flywheel shifts, however, every RNGD design win requires dedicated porting work.

Fourth, export controls and US-China tensions sit in the background. Furiosa's manufacturing depends on TSMC, which is increasingly subject to export-restriction regimes, so geopolitical shocks could quickly disrupt the company's supply chain.

The Bottom Line

The FuriosaAI Korea AI chip story is, at its core, about a single founder making a single decision against the grain of conventional Korean tech wisdom. June Paik chose long-term independence over short-term liquidity, and enterprise commercialization over Big Tech absorption. The company he built now occupies a structural position in the global AI infrastructure race that no other Korean deep-tech firm can claim.

Whether the bet pays off will depend on three things over the next 18 months. First, whether RNGD's mass-production ramp at TSMC delivers the 20,000-unit target. Second, whether the Series D closes at the rumored $2.3 billion valuation and attracts global anchor investors. Third, whether the 2027 IPO can sustain a public-market valuation in a sector where Nvidia's dominance has only deepened.

Above all, the answer to one question is changing. When a foreign investor asks why Korean deep-tech matters, the response increasingly starts with a small Seoul company that said no to Meta — and meant it.